Month: July 2019

Everything You Should Know About E-A-T


Expertise, Authoritativeness, and Trustworthiness, better known as E-A-T, is now a well-known standard that SEOs try to follow. Two of the biggest algorithm updates from the past year put more attention on E-A-T: the Medic Update in August 2018 and the Google Broad Core Update in June 2019.

Many of the websites that were hit were said to have low E-A-T. When asked how they can recover, you’ll probably get the same answer from experts and even from Google’s Search Liaison Danny Sullivan over and over again.

Great content. Something that everyone is trying to create but only a few are able to achieve. When someone says “create great content”, it’s honestly not much advice, but in this tweet from Danny Sullivan in 2018, he referred to the Google Quality Raters Guidelines, which talked about what we now call E-A-T.

In this article, I’ll take a deep dive into E-A-T to better understand what Google really means by great content and how they judge it, along with a bit of historical background on how it all started.

Google+ AuthorRank

Do you remember AuthorRank? If you’ve been doing SEO since 2012, then you’ve probably heard of it.

Back in 2013, I wrote an article on AuthorRank. It was a time when lots of false and spammy content was circulating around the internet, and Google had to take steps to better identify real content from real authors. At the same time, Google also released Google Authorship, where the name and photo of the author appeared alongside their content in the search results.

AuthorRank pulled data about webmasters from their Google+ profiles. Back then, you could add a short authorship markup, rel=”author”, to link your Google+ profile to the content you published on your website. It was a popular on-site optimization tactic at the time. As you know, Google+ has since shut down, but if you are interested in checking this out, you can take the time machine and read this post on Google Authorship Markup from 2012.

So why did I bring this up? In my opinion, AuthorRank was one of Google’s stepping stones toward E-A-T. It touched on the importance of having a good online reputation, far different from technical SEO factors like links and meta tags. At the very least, we know that E-A-T has been a long time coming. Since the boom of AuthorRank in 2012, Google has been trying to find a better correlation between good content and a good author.

There were different ways to measure AuthorRank back then. One was how many +1s you received on Google+ and how many people added you to their Circles. Social media presence was also a factor and, of course, backlinks.

Google ended their 3-year experiment with Authorship in 2014 but many said that AuthorRank is still alive.

Google Medic Update

In early August 2018, just a month after Google announced the revised version of the QRG, a huge core algorithm update was released. Many websites reported that they were hit hard. Most of those were health or medical websites, hence the name Google Medic Update.

Panic was in the air. Webmasters here and there were asking “how can we recover?”. But apparently, there was no “fix”. Then, E-A-T took the spotlight.

Marie Haynes, an expert on Google algorithm changes and E-A-T, pointed out that the websites that were hit did not adhere to E-A-T standards. In her example, an author named Bridget Montgomery wrote a review of the top 10 glucose meters. However, when you went to the author’s profile, there was no information showing that the author is an expert in the medical field, just a list of articles she had published.

Even though many websites were hit, Marie also pointed out that websites in the same niche that had the following qualities actually gained:

  1. Clear identity of authors of the articles
  2. Main purpose of website content is to inform not to sell
  3. Lots of signs of authority such as comments and reviews

Because of correlations such as these, people started theorizing that E-A-T is a ranking factor. Many dismissed this theory; however, in another post, Marie Haynes pointed out that a whitepaper Google published in February 2019 on its fight against disinformation confirms that E-A-T is indeed a ranking factor.

Google Quality Raters Guidelines

In 2015, Google released the full version of the Quality Raters Guidelines, or QRG, which contains the criteria for evaluating a website. It was revised in June 2018 and last updated in May 2019.

To help evaluate websites in its index, Google contracts more than 10,000 people to evaluate pages that appear in the search results, especially those on the top pages. They are called the “Quality Raters”.

Quality Raters cannot directly affect a website’s rankings. When a page is given a low-quality score, it does not lose all of its rankings and traffic overnight. However, the data from Quality Raters are used for further improvements in Google’s search algorithm.

The QRG teaches Quality Raters the factors they need to consider when judging a page’s Page Quality (PQ). It talks about reputation, the quality of a page’s main content, samples of low-quality pages, and of course, E-A-T.

Section 3.2 is where the QRG first mentions E-A-T. It says that websites or pages that don’t have any purpose or benefit for users should receive the lowest Page Quality (PQ) rating. However, for websites or pages that provide value to users, the amount of E-A-T is important, and Quality Raters should consider the following:

  • The expertise of the creator of the Main Content (MC)
  • The authoritativeness of the creator of the MC, the MC itself, and the website
  • The trustworthiness of the creator of the MC, the MC itself, and the website

Optimizing for E-A-T

Now that we know what E-A-T is and the importance of it in today’s SEO landscape, the next question is, how do you optimize for it?

The first thing you should know is that there is no way to know what your E-A-T rating is or if your website has been rated by a Quality Rater. What you could do is continuously build your reputation. Here are a few steps you could do:

  • Author Profile

To Google, high-quality content must come from an expert on the topic being written about. There are topics that only specific experts in certain fields can write about and provide valuable information on.

In section 3.2 of the QRG, it stated a few examples of this:

  • High E-A-T medical advice should be written or produced by people or organizations with appropriate medical expertise or accreditation.
  • High E-A-T news articles should be produced with journalistic professionalism.
  • High E-A-T information pages on scientific topics should be produced by people or organizations with appropriate scientific expertise and represent a well-established scientific consensus on issues where such consensus exists.

From these bullets alone, it’s easy to say that before you write about a certain topic, you should be an accredited expert on the matter. Author pages or author bios play a huge role in this. Your profile should make your field of expertise clear and include links to your other social profiles.

 

  • Links and mentions from authority websites

Links are, and always have been, a measure of authority. It is a no-brainer that to be considered authoritative, you need links. Also, note that it’s not just about anchor text. When other relevant websites cite your name, your business, or your website’s brand as a source in an informative article, your website gains more authority.

While links are ideal, do not count out unlinked mentions. Being mentioned in news articles or other articles also counts. Yes, you could still take the time to let the webmaster know that they didn’t link to your website when they mentioned your name, but the mention gives you value anyway.

You can use tools like Google Alerts, which will notify you when your name or brand is mentioned around the web. You can also use Google Trends to monitor how your brand is doing.

 

  • Reviews

Reviews play a huge role in establishing trust in your website. In section 2.6.1 of the QRG, it says that Quality Raters should do research when rating a website to look for what real people think of it. 

“Look for reviews, references, recommendations by experts, news articles, and other credible information created/written by individuals about the website.”

If you have a business, make sure it is listed in legitimate local directories where you can ask clients and other users for reviews. Your number 1 target should be getting a verified Google My Business listing.

Key Takeaway

Moving forward, E-A-T will play a huge role in Google’s future algorithm changes knowing that the data Quality Raters produce are used for algorithm improvements. While you can never know how your website is evaluated, following these guidelines should be part of your foundational SEO strategy.

Everything You Should Know About E-A-T was originally posted by Video And Blog Marketing

The 21 Psychological Elements that Power Effective Web Design (Part 1)

“We must ground our webpage designs in the customer’s psychology or risk losing business.”

— Flint McGlaughlin

(This article was originally published in the MarketingExperiments email newsletter.)

Think about your current landing page and ask yourself, “If I were to take my headline and put it on several other brands’ webpages, would it still apply to other businesses? In other words, is my landing page headline generic or specific?”

Let’s take the following headline from a healthcare webpage as an example: “WE WANT TO HELP YOU”

How can we change this headline to make it more specific and unique to the services being offered?

This is just one of many thought exercises Flint McGlaughlin and his participants worked through together in a recent interactive session on web design.

Watch the replay to get insight into important page elements like determining your ideal customer — demographic vs. decision profiling, and the dangers of having more than one objective on your page.

Next week, Flint will continue the discussion, sharing tips on good page flow (layout), personality (look and feel) and customer connection.

You can download a copy of the infographic and use it as a template for creating your own web designs grounded in customer psychology. And we hope you’ll join us in the following weeks to come on YouTube Live as we show you how to use this framework to think about your page systematically in order to achieve consistent conversion lifts.

Related Resources

3:31 Website Development: How a small natural foods CPG company increased revenue 18% with a site redesign

7:36 Landing Page Optimization: How Aetna’s HealthSpire startup generated 638% more leads for its call center

13:12 How a Nonprofit Leveraged a Value Proposition Workshop to See a 136% Increase in Conversions

The Marketer as Philosopher Book: 40 brief reflections on the power of your value proposition by Flint McGlaughlin

The 21 Psychological Elements that Power Effective Web Design (Part 1) was originally posted by Video And Blog Marketing

SEO Courses: The Best SEO Training Options in 2019 (Free and Paid)

SEO

If you’re going to be managing an SEO campaign, it’s best you have some formal training in the art of search engine optimization.

While it’s not necessary to major specifically in SEO at a university, you should at least be seeking some higher education, specifically digital marketing certification courses and advanced training classes.

Careers in the SEO industry can be lucrative, and the industry has a demand for experienced professionals who are trained in the subtle content tweaks that it takes to raise a website up from obscurity into the world of page one visibility.

But why do you need to take a course in the first place? Can’t you just learn on the job? Is it truly important to invest in SEO training? And if you do want to train, what are some of the best free and paid courses that you can take to further your knowledge?

Why Beginners Should Take SEO Courses

Much like any technical profession, SEO requires a lot of training. It is no longer the new kid on the marketing block (like social media), and it has become essential for content marketers to have a working knowledge of SEO and how it works.

That’s because SEO is a complicated and evolving process which combines elements of content writing with web design to create a layered and multi-faceted campaign that must be constantly managed and nurtured.

If you’re looking to have a career in the SEO industry, you need to develop an intense focus on Google’s search algorithm and how it changes over time. Google is constantly shifting its parameters, so as an SEO professional, it’s up to you to adapt the content of your clients to ensure that you’re doing everything you can to maintain and enrich their ranking.

You have to understand how Google judges sites and how to stay ahead of changes the company makes.

This is true whether you’re new yourself or you have a new team member joining your company who needs more training.

To that end, training courses are perfect for beginners to learn practical knowledge and test themselves before being thrust into a position they’re not prepared for.

Failing to provide adequate training for an SEO specialist could lead to major mistakes being made which negatively impact the overall campaign. Unfortunately, the penalty for such an occurrence is usually that the person is terminated.

Remember, every job involves extensive training whether you’re a plumber, WordPress developer, or a Wall Street executive. Why should a process that has become the backbone of the marketing industry be any different?

The Importance of Investing in Training

When looking at the world of search engine optimization, it’s easy to question why you should have to invest your own money into training classes. When you break down the numbers, you’ll find that paying into an SEO future can be a very smart investment based solely on your return.

SEO courses vary in cost (more on that in a moment). However, most of them are either free or cost up to $1,000. While the latter figure might make your legs wobble, it’s important to look at the industry as a whole before making a snap judgment.

SEO Specialists are in high demand in today’s market. The industry is booming, with a 43% increase in SEO jobs from 2017 to 2018. The better the education you have, the more likely you are to land a high paying SEO job.

How high paying?

The average base salary of an SEO specialist is reported by Glassdoor to be $65,200 per year, which works out to roughly $1,250 per week. That means if you get a good job in the SEO industry, even a $1,000 course pays for itself in less than one week of gross salary.

The Top SEO Training Courses of 2019

Now that you understand why you should be taking a training course to enrich your SEO career journey, it’s time to figure out what kind of courses are out there.

These courses vary in price, starting at no cost whatsoever all the way up to expensive university classes. We will go through each of these chosen educational pathways to ensure that you are making a decision that fits your needs and your budget.

Moz: SEO Training Course

Moz’s SEO training course is filled with a lot of advanced knowledge. It is primarily a beginner course offering two paid plans. 

The first and more expensive option is a full SEO fundamentals certification course priced at $599. It provides foundational knowledge which prepares students for common SEO projects. It is led by instructors. These lessons focus on keyword research, page optimization, link building, and reporting. 

As far as the workload and time commitment, this course features six hours of video content broken into five lessons, along with quizzes and a final exam. 

You can easily tout your successful completion of Moz’s SEO course because you will receive a certificate of completion at the end of your work. This makes it far easier to show a potential employer and makes a great resume attachment. 

The other option is priced at $199 and covers local SEO training. It focuses on six components of local SEO strategy, teaching students how to prioritize optimizations on a local level. It does this by covering how to audit web pages and business listings in order to be listed on map results. The course provides a Local SEO checklist that can be tailored to future clients on an individual basis.

The course is self-paced, meaning students have the ability to work at their own speed. All purchases grant access to the video course and materials for a period of one year.

The reviews for Moz’s SEO Training Course are mostly positive. 

As for accessibility, Moz’s course can be completed on mobile platforms, including iPhone, iPad, or any Android device.

ClickMinded

ClickMinded is a premium SEO course that is priced at $487. The difference between ClickMinded and many of the free services on this list becomes apparent when you look at the large amount of content that it offers.

ClickMinded’s SEO course includes more than five hours of content split between 66 different lectures. That’s a lot of course time, making for a complete educational experience.

On top of explaining the basics of SEO to you, ClickMinded will also walk you through the use of 14 different SEO tools.

There are almost 3,000 students enrolled in the program, and it gets great reviews all across the board. ClickMinded fits in well with a busy schedule as it is a self-paced course. There is no need to worry about wasting your money as the course comes with a 30-day money back guarantee. 

That means an SEO beginner who starts the course work and then determines that this is not the career path for them can opt out and receive a full refund within the first 30 days. 

Like most of the courses on this list, you can get a certificate of completion at the end.

SEO Certification Training by MarketMotive

MarketMotive’s SEO Certification Training program entails 25 hours of learning. It also features educational materials including downloadable workbooks. What’s more, the course is updated on a quarterly basis to reflect the current state of the SEO industry, so you’re not getting fed any outdated information.

There are periodic quizzes designed to test your knowledge throughout the course, and it comes with a seven-day money back guarantee.

Speaking of money, MarketMotive’s course comes in two differently priced tiers. The self-paced version of this course is offered at $599. You can also take a live online classroom version of the training program at $999.

Another great feature which gives you a lot of bang for your buck is unlimited support in a responsive “ask the instructor” format. This program ensures that you’ll never sit there confused with no one to help you.

At the end of your training, you will receive a certificate of completion to prove that you took this course.

University of San Francisco – Advanced Search Engine Marketing Certificate Program

This is the only program on this list that is offered from an actual university. This is a certificate program offered through the University of San Francisco, and as such, it is the most expensive item on this list at $1,980 for one course.

The course features eight different SEO related topics, each one composed of six different lectures, meaning you’re getting a lot of content for such a hefty price tag.

Coursework for this program also includes preparation for the Google Ads Certification exam, which is a huge deal in the marketing world. Receiving Google Ads certification is a massive feather in your cap and makes you more appealing to businesses, not only to manage their SEO efforts but also to run Google Ads on a pay-per-click basis.

Online Marketing Institute: SEO Certification

Online Marketing Institute offers an SEO Certification program that is composed of five different parts.

On top of coursework, a final exam is available after all of the classes have been completed. Your knowledge is constantly being tested in this program with a short quiz which follows each lesson.

This is another certificate program, providing proof of completion at the end.

The Online Marketing Institute offers two different ways to pay for this service. The first is a monthly fee of $67, or if you have the money upfront, you can pay a one-time fee of $859 for the entire course. The monthly option is perfect for students who don’t have as much disposable income on hand, allowing them to pay as they go until the course is completed.

While you are left to your own devices to complete the course, there is a time limit in place. You are given six months from enrollment to complete all of the classes and the final exam.

It should also be noted that these classes all vary in terms of timing. Each class is somewhere between 20 and 70 minutes in length.

Google Search Engine Optimization Starter Guide

This is a free SEO guide offered up by Google. Note that it is described as a guide and not a course. That’s because this is an educational chapter-based tutorial and not an actual course with classroom work.

Google gives you 12 “chapters” to read through in order to enrich your knowledge of the SEO process and how it works.

The guide goes into detail explaining the process, offering a very basic take on what SEO is and how you can get started on optimizing content.

It should also be noted that this tutorial covers Google only, so if you’re looking to optimize content for Amazon, Apple, Bing, or Yahoo, you’ll have to look elsewhere.

As this is not a course, there is no certificate of completion offered at the end and no testing. The entire tutorial can be completed in one sitting. While this is not the best route for forging an education in the SEO field, it is a good starting point. What’s more, you’re being educated by Google itself, which is huge.

It is highly recommended that SEO specialists review this content, even as an extra boost to their education following the completion of a certification program.

QuickSprout

QuickSprout is another tutorial, not an actual certification course. As such, there is no completion certificate at the end of it.

It is a free tutorial with Neil Patel serving as one of the authors. This is a good continuation tutorial for people who are ready to move beyond the basics of SEO. It covers a lot of the more advanced topics like link building and aspects of technical SEO.

The QuickSprout Tutorial features nine different chapters that are written in a very conversational and easy to understand tone. It is highly recommended for anyone looking to deepen their knowledge following a certification program.

Technical SEO is becoming more important as websites become more advanced. Link building has always been one of the most complicated aspects of modern SEO, so it’s best to get as much training and information on it as you can.

In Conclusion

Industry experts all agree that a career in SEO is a positive move for young professionals. To better your chances at landing a lucrative SEO career, you should invest in a certification course or, at the very least, consider going through some of the free tutorials mentioned in this article.

Doing so could be the first step on your journey to a fulfilling career that is in demand.

SEO Courses: The Best SEO Training Options in 2019 (Free and Paid) was originally posted by Video And Blog Marketing

A/B TESTING SUMMIT 2019 KEYNOTE: Transformative discoveries from 73 marketing experiments to help you increase conversion

(This article was originally published in the MarketingExperiments email newsletter.)

THIS WEDNESDAY, July 24th, 1 p.m. EDT: Join our live, interactive session on YouTube as Flint McGlaughlin discusses The 21 Psychological Elements that Power Effective Web Design (Part II)

Recently, we published 73 before-and-after examples of webpages that MECLABS analysts and designers optimized using qualitative research and A/B testing tools.

Our CEO and founder, Flint McGlaughlin, was a keynote speaker for the recent A/B Testing Summit, and he shared some of those examples with the attendees, pointing out where changes were made on a page, and more importantly — why those changes were made.

Alongside those examples, McGlaughlin offered ideas for creating attention-grabbing headlines, writing effective body copy, call-to-action tips, form errors to avoid, the importance of understanding a customer’s eye path and more.

He also shared three simple principles you should remember when optimizing your pages for maximum conversion.

If you weren’t able to attend the summit, we wanted to provide Flint’s session to our audience here. Watch the replay below to embark on a thought journey that will help you reconceptualize the work of daily marketing tasks.

Related Resources

The Zen of Headline Writing
The End of Web Design: Don’t design for the web, design for the mind
Ask MarketingExperiments: How Do Qualitative Research, Design Thinking, And Design Sprints Relate To A/B Testing?

A/B TESTING SUMMIT 2019 KEYNOTE: Transformative discoveries from 73 marketing experiments to help you increase conversion was originally posted by Video And Blog Marketing

What is a Ranking Factor?

I decided to write this for a couple of reasons. One is that I’ve seen a lot of potentially misleading Tweets on the subject recently (naming no names!), and the other is that it’s related to another pet peeve of mine, about ranking factor studies.

What is a ranking factor?

A ranking factor is a variable that a search engine uses to decide the best ordering of relevant, indexed results returned for a search query.

Note that I’ve said the decision is between relevant, indexed pages – a good illustration of this distinction is the often absurdly high number shown beneath your query when you perform a Google search, such as the 643 million “potatoes”-related pages shown here:

Most of these pages are not particularly relevant, but this is the set that ranking factors are seeking to order. Some of the factors used to establish what is relevant to include in this list are also ranking factors (for example, having links with the anchor text “potatoes”), but they are not the same thing.

Perhaps the most famous ranking factor is Google’s PageRank – invented at a time when, proportionately, a great deal more web browsing was done by clicking links from popular pages, the role of PageRank was to approximate the popularity, and therefore, by extension, authority, of a page on the internet.

How are ranking factors combined?

A search algorithm might take ranking factors like PageRank and weight, sum, or multiply them in any way seen fit. The objective is to combine them in a way that achieves the “best” results – for example, presenting the results that users are most likely to click on at the top. According to this CNBC interview from late 2017, metrics Google might optimise for include time to SERP interaction and rate of bouncing back to search results.
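
As a toy illustration of that idea, here is a minimal Python sketch of a weighted combination of factors. The factor names, weights, and values are entirely invented and are not Google's actual formula; the point is simply that many signals can be collapsed into one ordering.

    # Hypothetical illustration only: combine a few made-up ranking factors
    # into a single score with arbitrary weights, then order results by it.

    def score(page):
        """Weighted sum of invented factor values in the range 0 to 1."""
        weights = {"pagerank": 0.5, "relevance": 0.3, "freshness": 0.2}
        return sum(weights[factor] * page[factor] for factor in weights)

    pages = [
        {"url": "/potato-recipes", "pagerank": 0.8, "relevance": 0.9, "freshness": 0.4},
        {"url": "/potato-history", "pagerank": 0.6, "relevance": 0.7, "freshness": 0.9},
    ]

    # Rank the relevant, indexed candidates by their combined score.
    for page in sorted(pages, key=score, reverse=True):
        print(page["url"], round(score(page), 2))

A real system is, of course, far more layered than a single weighted sum, as the answer described below suggests.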

Amusingly, probably the best insight we’ve had into how Google combines ranking factors came from a question about featured snippets, put to Google’s Gary Illyes by Jason Barnard earlier this year. I say amusingly because Gary seemingly had no need to give such an in-depth answer to this question, but the model he describes is fascinating to explore and extend upon – you can read more about it in Jason’s article here.

What ranking factors are there?

We don’t know! In fact, we don’t even know how many there are. Probably a great many. (Although, not everything that influences rankings is a ranking factor – more on that below!)

Google occasionally explicitly confirms a ranking factor (like HTTPS, or page loading times), but often this is as much as anything to push the SEO industry to change the internet in a direction they’d like. 

It’s in their interest to keep actual ranking factors and their relative importance very close to their chest, as their algorithm is part of their advantage over their competitors.

Is user experience a ranking factor?

Not exactly, no – “user experience” is not a metric. However, we do know (for example, from that CNBC article referenced above) that many of the things Google is looking for correlate with a good user experience. We also know that Google highlights things like slow loading times or excessively small font on mobile devices as SEO issues, which suggests an interest in this kind of factor.

There’s an ongoing debate about whether Google tries to measure this directly (for example, by looking at the click through rate of specific search results, compared to what might be expected, and adjusting their ranking on the fly), or whether they merely adjust their algorithm to look for things that correlate with user experience improvements. Many real world experiments suggest the former, but Google’s official line is the latter.


Misconceptions

Misconception 1: All metrics that correlate with rankings are ranking factors

This misconception is probably in some small part my fault – and also the fault of others who, like me, have published so-called ranking factor studies. These are actually correlation studies – we can look at what qualities are typically held by well-ranking pages, but this does not mean that those qualities are necessarily ranking factors (see the sketch after this list). Facebook Likes, for example, correlate well with Google rankings. There are a few possible explanations for this:

  • Sheer coincidence (unlikely – that’s the point of statistical significance thresholds – but possible)
  • Pages that rank well tend to be seen a lot, therefore end up getting a lot of Facebook Likes (i.e. the causation is in the opposite direction to Facebook Likes being a ranking factor)
  • Pages on popular websites receive both many Facebook Likes and strong Google rankings (i.e. there is something else that causes both, rather than one influencing the other)
  • Google assesses the number of Facebook Likes that results have, and takes this into account when ordering them in search results (i.e. Facebook Likes are a ranking factor).
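
To make the distinction concrete, here is a minimal sketch of what a correlation study actually computes. The ranking positions and Like counts below are invented, and even a strong correlation in output like this tells you nothing about which of the four explanations above is the right one.

    # Minimal sketch of a "ranking factor" correlation study.
    # The ranking positions and Facebook Like counts are made up for illustration.
    from scipy.stats import spearmanr

    ranking_positions = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    facebook_likes = [900, 750, 400, 520, 300, 150, 220, 90, 60, 40]

    # Spearman correlation: do better-ranking pages tend to have more Likes?
    # A negative rho means Likes rise as the position number approaches #1.
    rho, p_value = spearmanr(ranking_positions, facebook_likes)
    print(f"Spearman rho: {rho:.2f}, p-value: {p_value:.4f}")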


Misconception 2: A ranking factor is anything that causally affects rankings

If you engage in link building, you are engaging in an activity that is designed to influence a ranking factor – something you believe Google will consider directly in their algorithm. However, there are lots of things you can do to make your site rank better, but which are not in themselves designed to influence ranking factors.

Perhaps my ultimate claim to fame is that I used to work at the UK’s busiest Little Chef (a now-defunct chain of roadside grills), where I was a cook. If my boss had sent me on a training course, this may have resulted in a few things that could go on to improve the business’s rankings in Google Search, such as:

  • Glowing coverage in local press, due to the restaurant becoming known locally for its excellent food
  • Bloggers mentioning and linking to the Little Chef website, having been pleasantly surprised at their experience
  • People clicking on the site even when it’s position 8 in the search results, because they know and love the brand, having had so many wonderful meals there

Sadly, my boss did not send me on a training course, and Little Chef eventually died, their premises being ignominiously absorbed into the empire of Starbucks. However, that does not mean that “kitchen staff competence level” is a ranking factor. It is not something that Google is attempting to directly measure and include as a variable in its algorithm, therefore it is not a ranking factor.

However, if you are running an ailing roadside grill, and you wish to improve your rankings, you could try having competent kitchen staff. The causal link is there, even if the ranking factors are involved only indirectly.

Misconception 3: Ranking factors are dead / don’t exist

I’d be the first to say that ranking factors may not be a helpful concept for SEOs anymore. In fact, I’ve written about it on this very blog.

Because ranking factors are so many and so unknowable, it’s often better to aim for what Google is optimising for, therefore avoiding the need for any Kremlinology.

However, that does not mean that ranking factors are not a thing – they are still a crucial part of how search engines work at a very basic level and understanding the theory gives you a solid foundation to your knowledge.

Misconception 4: Bounce rate, time on page, and/or conversion rate are ranking factors

These are metrics in analytics, which means Google does not have access to them – even if you use Google Analytics. Furthermore, they’re easily manipulated, and often don’t obviously correlate with good outcomes.

For example, Google might be interested in the rate at which I return to search results after clicking on your site – this would indicate I was unhappy with that result. However, they can get this information directly from their own analytics on search results, and the bounce rate in your analytics could be misleading – if I read your page, get the answer I want, and move on in my life, that’s a good search result from Google’s perspective, but probably a bounce in your analytics.

Discussion

It’s theoretically possible that blog comments are a ranking factor, so please let me know your thoughts in the space below 🙂

What is a Ranking Factor? was originally posted by Video And Blog Marketing

Why Your Page’s Word Count Isn’t as Important as You Might Think



Here’s the formula: the longer your page’s word count, the more chances it has to be successful in search. That’s a common belief in the SEO industry. It might have been true 2 to 3 years ago, but does it still hold true today, amidst all the confirmed and unconfirmed algorithm updates rolled out by Google?

Updates such as Medic, the January broad core, the March broad core, the (announced) June broad core, etc. have targeted a variety of factors. Most of them point to expertise, authoritativeness, and trustworthiness, better known as E-A-T. Of course, the belief stems from the idea that the more words your blog post has, the more information it can contain. But the question is whether that information is what users are looking for when they enter a specific search query or keyword.

Data on Long Form Content

Back in 2016, Backlinko partnered with Clickstream to produce an awesome article called “We Analyzed 1 Million Google Search Results. Here’s What We Learned About SEO”. It’s a great read and I suggest you read it since it’s a useful resource for understanding how search engines (Google) rank our pages. However, one key finding that took the SEO industry by surprise is this one:

Backlinko Article Screenshot

Since SEOs are adept at adjusting and adapting to sudden algorithm and foundational strategy changes, we all increased the word count of our pages to increase our chances of ranking on the first page. 

Here’s the problem:

That was way back in 2016

Three years’ worth of algorithm changes, updates, and new factors have happened since, and I can’t really expect this to work as well as it did back in 2016.

If the date of the article seems to be the problem, then here’s another study done by Ahrefs back in 2018 which says the same thing. 

Ahrefs Blog Post Screenshot

When SEOs read studies like these, we can’t help but think that:

Longer Content = Better Rankings

But that isn’t necessarily true. I believe that longer content has higher backlink acquisition potential than shorter content and we all know how important backlinks are for SEO. To support this, Backlinko with Buzzsumo conducted another study that showed:

Backlinko long form content backlink acquisition screenshot

So it’s safe to conclude that having longer content doesn’t automatically give you better rankings. It’s about the backlinks (as most of us have experienced) that long-form content attracts. But there’s still something missing for me with regard to the problem of long-form content.

Understanding User Intent and the Keywords They Use

For businesses, being visible in search is an important marketing tactic that improves brand awareness and increases lead generation and conversions. For us SEOs, that means we have to give our clients (the businesses) tangible results that benefit them. If we produce long-form content that makes a client more visible but doesn’t produce any tangible results they’re happy with, we still have a dissatisfied client. This is where user intent, specifically buyer keywords, comes into play. We have to make sure that our content (it doesn’t matter if it’s long form or short form) targets the right intent.

There are three buyer keywords we have to be mindful of:

  • Informational Keywords – Users look for information to help them deepen their understanding of a certain product, need, service.
  • Navigational Keywords – Users look for specific brands, products, services that help them satisfy their needs.
  • Transactional Keywords – Users look for places to buy the specific brand, product, or service that they’re looking for. 

You have to know which keywords your target audience uses and structure your content so that it answers their questions. Your long-form content will be useless if it’s just full of flowery, useless words that do not help users in any way.
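
If it helps to make that concrete, here is a rough, rule-based sketch for sorting a keyword list into the three buckets above. The modifier and brand lists are my own assumptions, not an authoritative taxonomy, and real intent research should always be checked against the actual SERPs.

    # Rough, rule-based keyword intent bucketing (illustrative only).
    # The modifier and brand lists below are assumptions, not a standard.

    TRANSACTIONAL = ("buy", "price", "cheap", "discount", "coupon", "order")
    INFORMATIONAL = ("how", "what", "why", "guide", "tips", "best way")
    NAVIGATIONAL_BRANDS = ("nike", "amazon", "backlinko")  # example brand terms

    def classify(keyword):
        kw = keyword.lower()
        if any(term in kw for term in TRANSACTIONAL):
            return "transactional"
        if any(brand in kw for brand in NAVIGATIONAL_BRANDS):
            return "navigational"
        if any(term in kw for term in INFORMATIONAL):
            return "informational"
        return "unclassified"

    for kw in ["how to choose running shoes", "nike air max 270", "buy running shoes online"]:
        print(kw, "->", classify(kw))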

With the updates that happened and the experiences we’ve had recently, I believe that satisfying or answering user intent has become an important factor for ranking well (as opposed to the past). It’s not necessarily about who puts out the longest content, but it’s about the ones that are able to satisfy the user’s search needs.

You can put out a 5,000 word article talking about anything and everything about a topic that spans multiple keywords, and you’ll see pages with shorter content that focuses on answering a keyword’s intent rank better than you. 

Key Takeaway

This article isn’t meant to disparage the usefulness of long-form content. Its purpose is to realign my fellow SEOs’ focus on what really matters. It’s not about longer content; it’s about being useful to users. If you can answer a user’s query in 500 words, then don’t waste your time writing a 2,000-word article. Here’s a statement by Google about word count.

Just remember that word count is not enough to make you rank well. But making an article that is backlink-worthy, that satisfies intent, and is useful for your audience is the best way to make your content rank better.

Why Your Page’s Word Count Isn’t as Important as You Might Think was originally posted by Video And Blog Marketing

The Do’s and Don’ts of Internal Linking


Internal linking: one of the most important SEO strategies, but often overlooked and misunderstood. It is also one of the easiest things to do, but there is more to it than meets the eye.

When I talk about internal linking, people sometimes roll their eyes and think that it’s just as simple as making sure your new post has links to old posts that are related to it but I can guarantee you that internal linking plays a vital role in on-page SEO.

How Important is Internal Linking?

Internal linking helps establish a website’s structure. It helps search engine crawlers and users identify which pages on your website are important, and it helps users navigate easily through your website. Internal linking also passes on PageRank, or link authority, within the website. That means the more internal links a page has, the more authority it receives.

While internal links are less important than external links (better known as backlinks), you should still give them priority, as internal linking is both a link building and an on-page SEO strategy. Success in internal linking is just as important as success in link building.

Do’s

  • Plan your Content for a Better Internal Linking Structure

Planning your content and laying out the list of topics you plan to write about helps you know which articles can be internally linked to one another. I published an article on the Topic Cluster Model a few months ago, and it is a great way of making sure your website’s internal linking structure doesn’t get messed up.

Following the Topic Cluster Model, the best practice is to first look for a broad keyword that is related to your site’s niche and create a deep and long article about it called “Pillar Content”.

For each piece of Pillar Content you publish, think about related sub-topics that cover more specific and comprehensive areas, called “Cluster Content”. Make sure that when you publish Cluster Content, it is linked to the Pillar Content and vice versa.

  • Link Old Posts to New Posts

Failing to do this is one of the most common internal linking mistakes. If you’re scratching your head wondering why your new blog post is not ranking well, this is probably the reason why.

What usually happens is that when a new article is published, people internally link to old posts but never go back to those old posts and link to the new one. Make sure to go back to old posts and link to newer posts to get a good flow of authority and help new posts rank.

  • Place Important Pages in the Navigation Bar

I think it’s pretty obvious and needless to say that navigation bar or navigation menu links are considered internal links. Important pages such as landing pages, contact page, and FAQs page should be placed there.

Users should be able to get to these pages easily. A good rule of thumb is that the most important pages should be accessible in one click. The more clicks it takes to reach a page, the less important it appears to search engines.

Navigation bar links are also sitewide links, so you’ll get a good flow of authority to the pages placed there.
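
If you want to sanity-check that one-click rule across a whole site, a crawl export makes it easy to compute click depth yourself. Here is a minimal sketch that assumes you already have a headerless CSV of source and destination URL pairs for your internal links (for example, exported from a crawler); the file name and homepage URL are placeholders.

    # Minimal click-depth calculation: breadth-first search over internal links
    # starting from the homepage. Assumes a headerless CSV of "source,destination"
    # pairs; the file name and homepage URL below are placeholders.
    import csv
    from collections import defaultdict, deque

    links = defaultdict(set)
    with open("internal_links.csv", newline="") as f:
        for source, destination in csv.reader(f):
            links[source].add(destination)

    homepage = "https://example.com/"
    depth = {homepage: 0}
    queue = deque([homepage])

    while queue:
        page = queue.popleft()
        for target in links[page]:
            if target not in depth:  # first time this page is reached
                depth[target] = depth[page] + 1
                queue.append(target)

    # Pages more than three clicks from the homepage deserve a closer look.
    for url, d in sorted(depth.items(), key=lambda item: item[1]):
        if d > 3:
            print(d, url)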

 

Don’ts

  • Don’t Overoptimize your Anchor Texts

Anchor texts play a huge role for both external and internal links. They are one of the signals Google uses to determine what a webpage is about. If you want a page to rank for a specific keyword, you would want that keyword to be the anchor text when you place an internal link to it.

However, you should never overoptimize your anchor texts. For example, if you want a page to rank for “best car tires” and you internal link 100 times to that page with the same anchor text, that would look spammy.

Make it look natural and don’t force it. It’s nice to use the keyword as the anchor text but only do so when the opportunity presents itself as long as it would help the user understand and navigate. Sometimes, “Click Here” would serve a better purpose than forcing your target keyword.

  • Don’t Link Too Much

Don’t get all crazy and place an internal link to every sentence or paragraph on a page. A good practice is to limit links on a page to 100 links and that includes all links like outbound links, ads, navigation bar, footer links, etc.

Again, only link when the opportunity presents itself.

  • Don’t Orphan Pages

When a page is published and has no links pointing to it, it becomes orphaned, which makes it hard for Google to find.

Just like I mentioned in the Do’s section: don’t just link new posts to old posts. Make sure that new posts are linked to as well.

  • Don’t link to Page B using the same Anchor Text you want Page A to rank for

This might be a little bit confusing. If you want Page A to rank for “Best Car Tires”, don’t use “Best Car Tires” as the anchor text when linking to Page B. This isn’t a usual mistake, but it happens.

Anchor texts are more powerful than they seem. This alone can cause keyword cannibalization. So, just like I mentioned, plan out your content.

Bonus Tips

  • What will happen if the same URL is linked to twice?

Authority will still be passed on to that URL. However, only the first anchor text will be used by Google. This isn’t usually a problem and applies both to external and internal links.

Check out this link illustration from Moz:

  • Check Internal Links Through Search Console or Ahrefs

If you want to check the pages with the most and least internal links in your website, you could do so using Ahrefs Internal Backlink tool or the Link Report in Google Search Console. Ahrefs is best if you want to see the internal links of a single page while GSC’s Link Report is best to quickly see an overview of the internal link number of all pages.

If you sort pages by internal link count and only see your landing pages after scrolling for a while, then you have some work to do.
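
If you would rather check this with a quick script instead, a crawl export gets you most of the way there. Here is a rough sketch that assumes you have an inlinks-style export (one row per internal link) and a list of crawled pages; the file names and column names are assumptions, so adjust them to match whatever your crawler actually produces.

    # Rough sketch: count internal inlinks per URL from crawl exports and
    # flag orphan pages. File names and column names are assumptions.
    import pandas as pd

    inlinks = pd.read_csv("all_inlinks.csv")    # one row per internal link
    pages = pd.read_csv("internal_html.csv")    # one row per crawled page

    # How many internal links point at each destination URL?
    inlink_counts = inlinks["Destination"].value_counts()

    # Crawled pages that nothing links to (e.g. found only via the sitemap).
    orphans = pages[~pages["Address"].isin(inlink_counts.index)]

    print("Pages with the fewest internal links:")
    print(inlink_counts.sort_values().head(20))
    print(f"Orphaned pages found: {len(orphans)}")
    print(orphans["Address"].head(20).to_string(index=False))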

  • Should I use nofollow or dofollow for internal links?

There is no general rule on this. However, it goes without saying that you should use dofollow links when internal linking to make sure that PageRank flows healthily throughout your website.

Key Takeaway

While backlinks are superior, internal links should never be counted out. A messed up internal linking strategy can be the one that is keeping you from ranking better. Focus on building backlinks on your most important pages and spread the love all over your website.

Take note: it’s not just for Google, it’s also for users. A great internal linking strategy can keep readers engaged with your content and, ultimately, convert.

The Do’s and Don’ts of Internal Linking was originally posted by Video And Blog Marketing

How would Google Answer Vague Questions in Queries?

“How Long is Harry Potter?” is asked in a diagram from a Google patent. The answer is unlikely to have anything to do with a dimension of the fictional character, but may have something to do with one of the best-selling books featuring Harry Potter as a main character.

When questions are asked as queries at Google, sometimes they aren’t asked clearly, with enough preciseness to make an answer easy to provide. How do vague questions get answered?

Question answering seems to be a common topic in Google patents recently. I wrote about one not long ago in the post How Google May Handle Question Answering when Facts are Missing.

So this post is also about question answering, but it involves issues with the questions rather than the answers, particularly vague questions.

Early in the description for a recently granted Google Patent, we see this line, which is the focus of the patent:

Some queries may indicate that the user is searching for a particular fact to answer a question reflected in the query.

I’ve written a few posts about Google working on answering questions, and it is good seeing more information about that topic being published in a new patent. As I have noted, this one focuses upon when questions asking for facts may be vague:

When a question-and-answer (Q&A) system receives a query, such as in the search context, the system must interpret the query, determine whether to respond, and if so, select one or more answers with which to respond. Not all queries may be received in the form of a question, and some queries might be vague or ambiguous.

The patent provides an example query for “Washington’s age.”

Washington’s Age could be referring to:

  • President George Washington
  • Actor Denzel Washington
  • The state of Washington
  • Washington D.C.

For the Q&A system to work correctly, it would have to decide which of these the searcher who typed the query into a search box was likely interested in finding the age of. Trying that query, Google decided that I was interested in George Washington:

Answering vague questions

The problem that this patent is intended to resolve is captured in this line from the summary of the patent:

The techniques described in this paper describe systems and methods for determining whether to respond to a query with one or more factual answers, including how to rank multiple candidate topics and answers in a way that indicates the most likely interpretation(s) of a query.

How would Google potentially resolve this problem?

It would likely start by trying to identify one or more candidate topics from a query. It may try to generate, for each candidate topic, a candidate topic-answer pair that includes both the candidate topic and an answer to the query for the candidate topic.

It would obtain search results based on the query, where one or more of the results references an annotated resource: a resource that, based on automated evaluation of its content, is associated with an annotation identifying one or more likely topics for that resource.

For each candidate topic-answer pair, a score would be determined based on:

(i) The candidate topic appearing in the annotations of the resources referenced by one or more of the search results
(ii) The query answer appearing in annotations of the resources referenced by the search results, or in the resources referenced by the search results.

A decision would also be made on whether to respond to the query, with one or more answers from the candidate topic-answer pairs, based on the scores for each.
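
To make the mechanics easier to follow, here is a highly simplified sketch of that scoring idea. This is my own toy interpretation of the patent language, not Google's implementation; the example data, the simple sum used as a score, and the threshold are all invented.

    # Toy interpretation of the patent's topic-answer scoring (not Google's code).
    # Each search result carries "annotations": likely topics detected for that page.

    candidate_pairs = [
        {"topic": "George Washington", "answer": "67"},
        {"topic": "Denzel Washington", "answer": "64"},
    ]

    search_results = [
        {"annotations": ["George Washington", "US presidents"], "text": "...died at age 67..."},
        {"annotations": ["George Washington"], "text": "...the first president..."},
        {"annotations": ["Denzel Washington", "actors"], "text": "...the actor turned 64..."},
    ]

    def score(pair):
        topic_hits = sum(pair["topic"] in r["annotations"] for r in search_results)
        answer_hits = sum(pair["answer"] in r["text"] for r in search_results)
        return topic_hits + answer_hits  # invented weighting: a simple sum

    THRESHOLD = 2  # invented "predetermined threshold"
    best = max(candidate_pairs, key=score)
    if score(best) >= THRESHOLD:
        print(f"{best['topic']}: {best['answer']}")
    else:
        print("No confident answer; show regular results only.")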

Topic-Answer Scores

The patent tells us about some optional features as well.

  1. The scores for the candidate topic-answer pairs would have to meet a predetermined threshold
  2. This process may decide not to respond to the query with any of the candidate topic-answer pairs
  3. One or more of the highest-scoring topic-answer pairs might be shown
  4. A topic-answer might be selected from one of a number of interconnected nodes of a graph
  5. The score for the topic-answer pair may also be based upon a respective query relevance score of the search results that include annotations in which the candidate topic occurs
  6. The score for the topic-answer pair may also be based upon a confidence measure associated with each of one or more annotations in which the candidate topic in a respective candidate topic-answer pair occurs, which could indicate the likelihood that the answer is correct for that question

Knowledge Graph Connection to Vague Questions?

vague answers answered with Knowledge base

This question-answering system can include a knowledge repository that contains a number of topics, each of which includes attributes and associated values for those attributes.

It may use a mapping module to identify one or more candidate topics from the topics in the knowledge repository, which may be determined to relate to a possible subject of the query.

An answer generator may generate, for each candidate topic, a candidate topic-answer pair that includes:

(i) the candidate topic, and
(ii) an answer to the query for the candidate topic, where the answer for each candidate topic is identified from information in the knowledge repository.

A search engine may return search results based on the query, which can reference an annotated resource: a resource that, based on an automated evaluation of its content, is associated with an annotation that identifies one or more likely topics associated with the resource.

A score may be generated for each candidate topic-answer pair based on:

(i) an occurrence of the candidate topic in the annotations of the resources referenced by one or more of the search results, and
(ii) an occurrence of the answer in annotations of the resources referenced by the one or more search results, or in the resources referenced by the one or more search results.

A front-end system at the one or more computing devices can determine whether to respond to the query with one or more answers from the candidate topic-answer pairs, based on the scores.

The additional features above for topic-answers appear to be repeated in this knowledge repository approach:

  1. The front-end system can determine whether to respond to the query based on a comparison of one or more of the scores to a predetermined threshold
  2. Each of the topics in the knowledge repository can be represented by a node in a graph of interconnected nodes
  3. The returned search results can be associated with a respective query relevance score, and the score can be determined by the scoring module for each candidate topic-answer pair based on the query relevance scores of one or more of the search results that reference an annotated resource in which the candidate topic occurs
  4. For one or more of the candidate topic-answer pairs, the score can be further based on a confidence measure associated with each of one or more annotations in which the candidate topic in a respective candidate topic-answer pair occurs, or each of one or more annotations in which the answer in a respective candidate topic-answer pair occurs

Advantages of this Vague Questions Approach

  1. Candidate responses to the query can be scored so that a Q&A system or method can determine whether to provide a response to the query.
  2. If the query is not asking a question or none of the candidate answers are sufficiently relevant to the query, then no response may be provided
  3. The techniques described herein can interpret a vague or ambiguous query and provide a response that is most likely to be relevant to what a user desired in submitting the query.

This patent about answering vague questions is:

Determining question and answer alternatives
Inventors: David Smith, Engin Cinar Sahin and George Andrei Mihaila
Assignee: Google Inc.
US Patent: 10,346,415
Granted: July 9, 2019
Filed: April 1, 2016

Abstract

A computer-implemented method can include identifying one or more candidate topics from a query. The method can generate, for each candidate topic, a candidate topic-answer pair that includes both the candidate topic and an answer to the query for the candidate topic. The method can obtain search results based on the query, wherein one or more of the search results references an annotated resource. For each candidate topic-answer pair, the method can determine a score for the candidate topic-answer pair for use in determining a response to the query, based on (i) an occurrence of the candidate topic in the annotations of the resources referenced by one or more of the search results, and (ii) an occurrence of the answer in annotations of the resources referenced by the one or more search results, or in the resources referenced by the one or more search results.

Vague Questions Takeaways

I am reminded of a 2005 Google Blog post called Just the Facts, Fast when this patent tells us that sometimes it is “most helpful to a user to respond directly with one or more facts that answer a question determined to be relevant to a query.”

The different factors that might be used to determine which answer to show, if an answer is shown at all, include a confidence level, which may reflect confidence that an answer to a question is correct. That reminds me of the association scores of attributes related to entities that I wrote about in Google Shows Us How It Uses Entity Extractions for Knowledge Graphs. That patent told us that those association scores for entity attributes might be generated over the corpus of web documents as Googlebot crawled pages extracting entity information, so those confidence levels might be built into the knowledge graph for attributes that may serve as topic-answers for a question-answering query.

A webpage that is relevant for such a query, and from which an answer might be taken, may be used as an annotation for a displayed answer in search results.



How would Google Answer Vague Questions in Queries? was originally posted by Video And Blog Marketing

How to Do Change Detection with Screaming Frog and Google Sheets

I made a Google Sheet that does change detection for you based on two Screaming Frog crawls. I’ll tell you why that’s important. 

Two problems frequently come up for SEOs, regardless of whether we’re in-house or external.

  1. Knowing when someone else has made key changes to the site
  2. Keeping a record of specific changes we made to the site, and when.

Both can sound trivial, but unnoticed changes to a site can undo months of hard work and, particularly with large e-commerce sites, it’s often necessary to update internal links, on-page text, and external plugins in search of the best possible performance. That doesn’t just go for SEO, it applies just as much to CRO and Dev teams.

Keeping a record of even just our changes can be really time-consuming but without it, we often have to rely on just remembering what we did when, particularly when we see a pattern of changing traffic or rankings and want to know what might have caused it. 

These things are people problems. When we can't rely on other teams to keep us in the loop on their planned changes, that needs to be fixed at a team level. Not having a system for listing the changes we make is understandable, particularly for smaller keyword or linking tweaks, but if we compare ourselves to a Dev team, for example, a record of changes is exactly the kind of thing we'd expect them to include in their process as a matter of course. At the end of the day, when we don't keep track of what we're doing, it's because we either don't have the time or don't have the commitment to a process.

We shouldn’t really be trying to fix people problems with tools. That said, people problems are hard. Sometimes you just need a way of staying on top of things while you fight all the good fights. That’s exactly what this is for. 

This is a way to highlight the changes other teams have made to key pages, so you can quickly take action if needed, and to keep track of what you’ve done in case you need to undo it.

As a completely separate use-case, you can also use this sheet to check for differences between different versions of your site. Say, for the sake of argument, that you need to know the difference between the mobile and desktop versions of your site, or your site with and without JavaScript rendering, or even the differences between your live site and a private developer version you’re about to release. There are tools that offer change detection and cover some of the functions of this sheet, but I really like the flexibility this offers to check for changes between versions as well as over time.

Screaming Frog Change Detection Google Sheet.
Access for free.

What sites is this good for?

This sheet is for anyone who needs an idea of what is changing on a fairly large number of pages but can’t afford to pay for big, expensive change detection systems. It’ll work its way through around 1,000 key pages. 

That said, 1,000 key pages stretches further than you would think. For many small sites, that'll more than cover all the pages you care about, and even larger eCommerce sites get the vast majority of their ranking potential through a smaller number of category pages. You would be surprised how big a site can get before more than 1,000 category pages are needed.

That 1,000 URL limit is a guideline. The sheet can probably stretch a bit further than that; it's just going to start taking quite a while to process all of the formulas.

So what changes does it detect?

This Google Sheet looks at your “new crawl” and “old crawl” data and gives you tabs for each of the following (a rough sketch of the underlying comparison follows the list):

  • Newly found pages – any URL in the new crawl that isn’t in the old crawl
  • Newly lost pages – any URL in the old crawl that isn’t in the new crawl
  • Indexation changes – e.g. any URL which is now canonicalised or was previously noindexed
  • Status code changes – e.g. any URL which was redirected but is now code 200
  • URL-level Title Tag or Meta Description changes
  • URL-level H1 or H2 changes
  • Any keywords that are newly added or missing sitewide.
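
If you would rather script the same comparison outside of Sheets, the core of it is just a diff between the two crawl exports. Here is a minimal Python/pandas sketch under a few assumptions: the file names are placeholders, and the column names ("Address", "Status Code", "Title 1") match the internal_all export from recent Screaming Frog versions, which may differ in yours.

```python
import pandas as pd

# Placeholder file names for the two internal_all exports
old = pd.read_csv("old_crawl_internal_all.csv").set_index("Address")
new = pd.read_csv("new_crawl_internal_all.csv").set_index("Address")

newly_found = new.index.difference(old.index)   # URLs only in the new crawl
newly_lost = old.index.difference(new.index)    # URLs only in the old crawl

# Compare URLs present in both crawls side by side
both = old.join(new, how="inner", lsuffix="_old", rsuffix="_new")

status_changes = both[both["Status Code_old"] != both["Status Code_new"]]
title_changes = both[both["Title 1_old"] != both["Title 1_new"]]
```

The sheet does the equivalent of this with formulas, which is slower but keeps everything in one shareable place.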

What’s that about keyword change detection?

On many sites, we’re targeting keywords in multiple places at a time. Often we would like to have a clear idea of exactly what we’re targeting where but that’s not always possible.

The thing is, as we said, your pages keep changing – you keep changing them. When we update titles, meta descriptions, and H1s, we're not checking every page on the site to confirm our keyword coverage. It's quite easy to miss that we're removing some middlingly important keyword from the site completely.

Thanks to a custom function, the Google sheet splits apart all of your title tags, meta descriptions, and H#s into their component words and finds any that, as of the last crawl, have either been newly added, or removed from the site completely.

It then looks the freshly removed words up against Search Console data to find all the searches you were getting clicks from before, to give you an idea of what you might be missing out on now.

The fact that it's checking across all your pages means you don't end up with a bunch of stopwords in the list (stopwords being: it, and, but, then, etc.) and you don't have to worry about branded terms being pulled through either – it's very unlikely that you'll completely remove your brand name from all of your title tags and meta descriptions by accident, and if you do, that's probably something you'd want to know about.
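
For anyone scripting this instead, the word-level diff is straightforward. This sketch continues from the pandas example above (it reuses the old and new crawl DataFrames), and again the column names are assumptions based on the internal_all export.

```python
import re

def word_set(series):
    """Collect a lower-cased set of words across every value in a crawl column."""
    words = set()
    for text in series.dropna():
        words.update(re.findall(r"[a-z0-9']+", str(text).lower()))
    return words

cols = ["Title 1", "Meta Description 1", "H1-1"]  # assumed export column names
old_words = set().union(*(word_set(old[c]) for c in cols if c in old.columns))
new_words = set().union(*(word_set(new[c]) for c in cols if c in new.columns))

removed_sitewide = old_words - new_words  # words no longer used anywhere in these fields
added_sitewide = new_words - old_words    # words that are brand new sitewide
```

Because the comparison runs across every page at once, common words and your brand name naturally cancel out, which is why the resulting lists stay short and readable.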

How do I use it?

Start by accessing a copy of this Google Sheet so you can edit it. There are step-by-step instructions in the first tab but broadly all you need to do is;

  1. Run a Screaming Frog crawl of all the pages you want to detect changes on
  2. Wait a bit (like a couple of weeks) or crawl the mobile, JavaScript, or dev version right away for comparison
  3. Run another SF crawl of the pages you want to detect changes on
  4. Export the internal_all report for both crawls and paste them into the “old crawl” and “new crawl” tabs respectively
  5. Wait a bit (like 30 minutes)
  6. Check the results tabs for changes
  7. (Optional) Import Search Console data to give “value lost” information for keywords you removed – see the sketch after these steps.
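
That last optional step is essentially a lookup of your removed words against query data. As a rough Python sketch continuing the earlier examples: gsc.csv stands in for a Search Console performance export, and the "query" and "clicks" column names are assumptions that depend on how you export the data.

```python
# Continues from the earlier sketches (uses removed_sitewide)
gsc = pd.read_csv("gsc.csv")  # placeholder for a Search Console performance export

# Queries containing any word you removed sitewide, ranked by the clicks at stake
value_lost = gsc[
    gsc["query"].apply(lambda q: any(w in str(q).lower().split() for w in removed_sitewide))
].sort_values("clicks", ascending=False)

print(value_lost.head(20))
```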

What do you think?

I hope you find this useful! I was really surprised by what Google Sheets was able to achieve. Is there anything you think I've missed? Anything you would change?

How to Do Change Detection with Screaming Frog and Google Sheets was originally posted by Video And Blog Marketing

Google Reveals How They Keep Search Fresh and Relevant

Last updated on

Google's Public Liaison for Search, Danny Sullivan, recently released an article explaining the process of implementing changes to Google's algorithms. The in-depth overview of how changes make their way into Search gives us a better understanding of how these updates roll out for every query.

Moreover, it pretty much sums up how Google works as an informational vessel for users as Sullivan extends his article to a deeper analysis of SERP features like the knowledge graph, featured snippets, and predictive features.

“Our search algorithms are processed with complex math equations that rely on numerous variables, and last year alone, we made more than 3,200 changes to our search systems,” according to Danny Sullivan. The regular updates the search engine makes in order to display relevant and useful results on the web go through careful and precise evaluation.

For SEOs, it is important to work with Google and not against them. I believe that the most useful white-hat tactic for SEO is following the search engine’s guidelines on their processes and algorithms. This is a great look into their process of displaying the search results and how they deal with potential issues that come with the SERP features. How does Google keep the search useful and relevant? See more of it below:

Utilizing Unique Formats

featured snippet

People tend to be visual, which is why it is very helpful when they come across a search result in the form of a featured snippet. As you may well know, a featured snippet, also popularly known as “Rank Zero”, signals that a particular web page passes Google's standards for both quality and credibility. Appearing as a featured snippet is a sign of a strong, well-optimized web page that has earned that unique format.

Danny Sullivan reiterates that they remove snippets that violate their policies, such as content that is harmful, violent, hateful, or sexually explicit, or content that lacks expert authority on public interest topics.

They did not mention a particular implementation which puts a page in the position of Rank Zero, but they did say that the removal of a featured snippet is only possible if it violates policies. So it is safe to say that as long as your content is relevant enough to serve the general public, then you are good to go.

Providing Factual and Relevant Information

Google Search aims to reflect the facts that people readily search for on the web. No matter who, what, or where, the whole purpose of their algorithmic implementations is to provide answers to the queries that users input every day. The Knowledge Graph is their way of automatically connecting this information to structured databases, licensed data, and other informational sources.

Changes to the Knowledge Graph are rare, since it is meant to display results that are factual and properly presented. One of the processes Sullivan explains in his article is the use of manual verification and user feedback: updating information in the event of an error is done manually, especially if their systems have not self-corrected it.

Knowledge graphs can look as simple as this:

knowledge graph

Or it can be displayed as a full-blown knowledge panel with a collection of the most viable answers to your query:

knowledge panel

Google also gives people and organizations the freedom to edit their knowledge panels as they see fit. This ensures that content authority is maintained and that people, places, things, and services are represented accurately. Additionally, Google has tools and developments specifically dedicated to fixing errors in Knowledge Graphs and other SERP features as well.

Maintaining A Personal Approach to Search

predictive features

Predictive text is everywhere, and it is one of the most basic forms of AI we encounter in the digital world. Simply put, predictive features help users navigate searches more quickly. Common searches help expand the scope of a topic, which is very useful for user experience and information discovery. The section in the search results labeled “People also search for” helps users navigate to other topics related to what they were looking for in the first place.

people also search for

Google maintains the quality of the predictive feature by ensuring that related searches do not have a negative impact on groups of people, and the same applies to results that may be offensive to users. Predictions are excluded from this feature when they contain policy-violating content, either because they were reported and found to violate policy or because the system detected them as such.

Ranking Content by Relevance and Authority

organic results

SERP features are great, but you cannot deny that the most coveted spot on the search results pages is the organic result. Although the features are helpful enough, organic listings are what keep people hooked on the search engine. The “blue links” are the lifeblood of search engines, businesses, and SEO organizations.

Ranking the results is an automated process: it sifts through the hundreds of billions of pages Google has indexed while crawling the web and arranges them according to their relevance to the keywords you use.

Although Google’s ranking factors can be pretty vague, this recently published article made one thing clear: they do not manually remove or re-order organic results from the search engine page. The search engine does not manually intervene on a particular search query in order to “address ranking challenges”.

This is a bold statement, considering that many people have gone after Google claiming that it has a certain bias in the domains it ranks in the organic listings.

Eliminating Spam in Search

Spam protections are vital, especially when content fails to meet Google's long-standing webmaster guidelines. These protections are not limited to spam alone; they also cover violations involving malware and dangerous sites. The search engine's spam protection systems are automated to prevent this type of content from being included in the ranking systems. The same applies to links that are too spammy for their own good.

SEOs who think that black-hat tactics are working in their favor should think again, because Google is watching for exactly this. Manual actions are issued when the automated systems fail to detect this kind of dangerous content. Sullivan also states that this is not targeted at a particular search result or query; rather, it applies to all content that users can find in the SERPs.

Adherence to Legal Policies

The search pages are meant to disseminate information and to provide a safe browsing experience for everyone. Unfortunately, broad access to information also opens the door to unsafe and unlawful content, such as child abuse imagery and material subject to copyright infringement claims. These cases run counter to Google's purpose as a hub of information, so Google intervenes to protect people from this sensitive content.

Google's legal compliance is part of its public commitment to keep people safe and to make search results and features useful to everyone.

Key Takeaway

Google's transparency about how its processes work is a win for SEOs. With this type of information in mind, it is easier to come up with ethical strategies. Optimizing sites is not an overnight job. There is hard work behind the strategies, implementation, and research, which means there is no truth to outrageous claims that your site can rank in just 24 hours.

It's the same with Google: they work hard to devise these features and to monitor the search engine pages to the best of their abilities. Rather than making an enemy out of Google, why not be its ally in providing users with the most relevant content and the best experience? Read the full article here.

Do you think there are more areas that Google has to work on? Comment down below!

Google Reveals How They Keep Search Fresh and Relevant was originally posted by Video And Blog Marketing