Google: Robots.txt to Become an Official Standard After 25 Years



In a series of tweets, Google Webmasters announced a draft proposal that would take robots.txt from a de facto standard to an official one.

To quote Martijn Koster, “This is especially handy if you have large archives, CGI scripts with massive URL subtrees, temporary information, or you simply don’t want to serve robots.” The mastermind of the initial standard proposed robots.txt after he noticed crawlers hitting his site in an overwhelming manner.

Twenty-five years later, this is still true. Who knew that placing a simple text file on your server could show bots exactly the content you want to serve to users? That is precisely what SEO is all about.

Telling bots which pages to access and index on your website makes it simpler for you to become visible in the SERPs. I should know, since I make sure the Robots Exclusion Protocol (REP) is properly implemented on the sites I handle. REP is one of the fundamental components of the web, so it should be a cause for alarm if you are not familiar with it and with robots.txt today.

At this point, most of us have learned the basics of robots.txt, but what does it mean for it to become an official standard?

Clear Implementation of Robots.txt

Through the Google Webmasters Twitter account (@googlewmc), Google posted a series of tweets that began by reminiscing about web crawlers overwhelming servers back in 1994 and recalled Martijn Koster’s proposal of a protocol to control URL crawlers.

[Image: Google Webmasters tweet]

Aside from the original 1994 Standard for Robot Exclusion document, Koster’s 1996 historical description of a method for web robot control is the resource that records the submission of the Robots Exclusion Protocol as an Internet Draft specification. The internet wasn’t as developed in the 90s as it is now, so giving webmasters a way to control how their content was accessed was a pretty big deal back then.

Back in 1996, it was regarded as a “work in progress” and I think it still is, given that there are webmasters who are puzzled by how the process truly works. Transitioning from the ambiguous de facto standard means that open-ended interpretations will come to an end. Even though the new proposal would not change any rules created since 1994, it would bring clarity to the “undefined scenarios for robots.txt parsing and matching,” according to Google.


“The proposed REP draft reflects over 20 years of real-world experience of relying on robots.txt rules, used both by Googlebot and other major crawlers, as well as about half a billion websites that rely on REP. These fine-grained controls give the publisher the power to decide what they’d like to be crawled on their site and potentially shown to interested users.”

Search engines have made full use of the REP, but some areas remain uncovered, which is why the proposed standardization draft will hopefully bring a clearer explanation of the way robots.txt works. Google, together with webmasters, other search engines, and the original author of the REP specification, has submitted the proposal to the Internet Engineering Task Force (IETF), a significant effort to extend the robots exclusion mechanism by placing it under the governance of a technical standards body.

Further Innovations from Webmasters

Together with the announcement to make the REP an internet standard, Google also considered the work of developers who parse robots.txt files. Google’s robots.txt parser, a C++ library, is now open source. You can find it on GitHub, and a testing tool is included as part of the open-source package.


Built on over 20 years of observing how webmasters create robots.txt files, the open-source parser supplements the internet draft passed to the IETF. It also shows that search engines are ready to help web creators experiment and innovate on the net, all for the purpose of creating unique, engaging content and a better user experience.

The active development of the protocol means there will be further updates for the modern web. Again, Google is not changing the established robots.txt rules. The updates in the draft can be seen below:

  1. Any URI-based transfer protocol can use robots.txt; it is no longer limited to HTTP and can also be used for FTP or CoAP.
  2. Developers must parse at least the first 500 kibibytes of a robots.txt file. Defining a maximum file size ensures that connections are not kept open for too long, alleviating unnecessary strain on servers.
  3. A new maximum caching time of 24 hours (or the cache directive value, if available) gives website owners the flexibility to update their robots.txt whenever they want, while keeping crawlers from overloading websites with robots.txt requests.
  4. When the robots.txt file becomes inaccessible due to server failure, previously disallowed pages are not crawled for a reasonably long period of time.

In addition, they have also updated the augmented Backus-Naur form included in the internet draft to better define the syntax of the file, a move that helps developers parse its lines consistently.
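
To picture what these rules govern, here is a minimal robots.txt sketch (the paths and sitemap URL are hypothetical); under the draft, the same kind of file could also be served over FTP or CoAP, and crawlers would parse at least its first 500 KiB:

    # Minimal robots.txt sketch with hypothetical paths
    User-agent: *
    Disallow: /cgi-bin/
    Disallow: /tmp/
    Allow: /

    # Optional: point crawlers at the sitemap
    Sitemap: https://example.com/sitemap.xml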

A Challenge to Robots.txt Creation

Google partners are ecstatic over this development because the research and implementation of the protocol is no joke.

[Image: tweet from Gary Illyes]

This initiative has been well researched, with over 20 years of data to back it up, so it only makes sense that webmasters follow the lead and make the internet a better place through the protocol. One notable point is that the draft of the proposal states that crawlers should allow special characters.

With this in mind, it should be a call to action for webmasters to be careful about the values they encode in the file. There are instances where typos prevent crawlers from understanding a webmaster’s directive.

Hopefully, having standard rules for creating robots.txt will encourage webmasters to be vigilant when creating the file for their site. Robots.txt is used on over 500 million websites, which is why it deserves your full attention.

Key Takeaway

Robots.txt specifications have also been updated to match the REP draft submitted to the IETF. I suggest you read up on them and focus on making a clean robots.txt file for your site. Controlling search engine crawlers to achieve better indexing and rankings can only go so far if you do not do it right.

As you innovate and optimize your site, learn how these mechanisms work and how to make them work in your favor. What are your thoughts on this recent draft to make the Robots Exclusion Protocol an official standard?


How to Fix Index Coverage Errors in Google Search Console


Indexing and crawling are two highly important processes for websites and search engines. For a website to appear in the search results, it must first be crawled by a search engine bot and then it will be queued for indexing. As an SEO, it is important that you get your website crawled and indexed and make sure that there are no errors that might affect how your website appears in the search results.

Google Search Console is the SEO community’s best friend. It allows us to submit our websites to Google and let them know our websites exist. It is the tool that allows us to see through the eyes of Google. We can immediately see which pages are being shown in the search results and whether the changes or improvements we made have been reflected.

One of the best things about Google Search Console is that it shows us indexing errors that might negatively affect a website’s ranking. Search Console’s Coverage report shows all the pages Google is indexing based on the sitemap you submitted, as well as other pages that were not submitted in your sitemap but were crawled. Fixing errors on these pages is crucial.

An important page on your website that has an error will probably rank low if Google is having a hard time crawling and indexing it. That is why it is crucial that you know what errors are found in Search Console’s Index Coverage report and how to fix them.

Server Error 5xx

5xx errors happen when a website’s server can’t handle or process a request made by Googlebot while crawling a page. Not only does this error cause problems with crawling, it also means your users are likely having a hard time accessing your website.

5xx errors are usually caused by a problem with your server. It might be down, overloaded, or misconfigured. They can also be caused by a problem in your website’s DNS configuration or content management system.

To fix this problem, it would be best to consult your web developer or check if your hosting has a problem.

Redirect Error

Redirects are normal for any website. They are used to point old pages or posts that are no longer useful to new ones. They can also be used to redirect URLs that no longer exist.

A URL should only have a single 301 redirect. When a URL is redirected to another URL that is in turn redirected again, it creates a redirect chain, which is the usual cause of this error.

Redirect errors happen if a redirect chain is too long, forms a loop, exceeds the maximum redirect limit (20 redirects in Chrome), or contains an empty URL.

Make sure that all your redirects are pointing to live URLs and only use a 301 redirect once to avoid redirect chains.
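
As a simple illustration (a sketch assuming an Apache server and hypothetical URLs), point each old URL straight at its final, live destination instead of letting it hop through a chain:

    # .htaccess sketch, assuming an Apache server and hypothetical URLs

    # Avoid: a chain where /old-page/ hops through /interim-page/
    #   Redirect 301 /old-page/     https://example.com/interim-page/
    #   Redirect 301 /interim-page/ https://example.com/new-page/

    # Prefer: one hop straight to the live destination
    Redirect 301 /old-page/     https://example.com/new-page/
    Redirect 301 /interim-page/ https://example.com/new-page/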

Submitted URL Blocked by Robots.txt

URLs submitted in your website’s sitemap indicate that these URLs are important and must be crawled and indexed. If some of those URLs are also blocked in your robots.txt file, it will confuse Googlebot.

To fix this error, first check whether the blocked URLs are important pages. If they are important and were accidentally blocked in your robots.txt file, simply update the file and remove the rules blocking them. Then make sure those URLs are no longer blocked using the robots.txt Tester in the old version of Google Search Console.
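
For example (a minimal sketch with hypothetical paths), an overly broad Disallow rule can catch URLs you submitted in your sitemap; narrowing or removing the rule fixes the conflict:

    # robots.txt sketch with hypothetical paths
    User-agent: *
    # Too broad: this would also block /blog/important-post/ submitted in the sitemap
    # Disallow: /blog/

    # Narrower rule: block only what you actually want to keep out
    Disallow: /blog/drafts/

    Sitemap: https://example.com/sitemap.xml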

If you purposely blocked a URL that is submitted in your sitemap, remove that URL from your sitemap. If you’re using WordPress, first get the page or post ID of the URL you are removing from the sitemap. To do that, go to Posts or Pages and click Edit on the post or page you want to remove. Check the URL bar and you will see the post ID.

Get that post ID and go to your sitemap settings. I’m using Google XML Sitemaps and I find it really easy to use. Under Excluded Items, you will find Excluded Posts. Enter the post ID of the one you want to exclude from the sitemap and click on save.

 

 

Submitted URL Marked ‘noindex’

This error is similar to the Submitted URL Blocked by Robots.txt error. Since submitting a URL in the sitemap means you want Google to index it, placing a ‘noindex’ tag on it makes no sense.
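
For reference, the noindex directive is usually a robots meta tag in the page’s head (it can also be sent as an X-Robots-Tag HTTP header); a minimal example:

    <!-- In the page's <head>: tells search engines not to index this page -->
    <meta name="robots" content="noindex">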

Check if those URLs are important pages. Placing a ‘noindex’ tag means you don’t want Google to show these pages in the search results. If a product or landing page has an accidental ‘noindex’ tag, then that is bad news for you.

If the URLs under the error are no longer important, remove them from the sitemap as described above.

If the URLs are important, remove the noindex tag from them. If you’re using Yoast SEO, go to the Page or Post that is tagged as noindex. Scroll down until you see the Yoast SEO box and click on the Gear Icon.

Make sure that the option “Allow Search Engines to Show this Post in Search Results?” is set to “Yes”.

If you’re using SEO Ultimate, the process is similar. Go to the post or page, then scroll down to the SEO Ultimate box. Under Miscellaneous, make sure that the “Noindex” box is unchecked.

Submitted URL Seems to be a Soft 404

A soft 404 error means a URL submitted in your sitemap no longer exists but still returns a success code (200). Soft 404 errors are bad for both users and Googlebot.

Since it is still treated as a page, users might see it in the search results, but all they will find is a blank page. At the same time, it wastes your crawl budget.

Check the URLs that Google considers soft 404s. If those pages were deleted or never existed, make sure they return a 404 (not found) error. If they are still relevant, use a 301 redirect to a live page.

Submitted URL Returns Unauthorized Request (401)

A 401 error happens when Google tries to crawl a submitted URL but the server deems the request unauthorized. This usually happens when webmasters put security measures in place against bad bots or spammers. To fix this error, you need to run a DNS lookup and verify Googlebot.

Submitted URL Not Found (404)

A page returning a 404 error has been deleted or does not exist. Most of the time, if you delete a post or a page, it is automatically removed from the sitemap. However, errors can happen, and a deleted URL might still be found in your sitemap.

If that page still exists but has moved to another URL, a 301 redirect will fix the error. For content that has been deleted permanently, leaving it as a 404 is not a problem.

Take note that redirecting 404s to the homepage or other pages that are not related to it could be problematic for both users and Google.

Submitted URL Has Crawl Issue

URLs under this error have an unspecified issue that does not fall into any of the other categories but still stops Google from crawling them.

Use the URL Inspection tool to get further information on how Google sees that web page and make improvements from there. 

Warning: Indexed Though Blocked by Robots.txt

This is not an error but a warning. It is the only category that falls under the Warning tab of the Coverage report. It appears when a URL that is blocked by robots.txt is still being indexed by Google.

Usually, Google respects robots.txt, but when a disallowed URL is linked to internally, Google may still index that URL even without crawling it.

The noindex tag and the robots.txt file have very different uses, and there is still some confusion between them. If your intention is to remove these URLs from the search results, give them a ‘noindex’ tag and remove them from the robots.txt file so Google can crawl the pages and see the tag. The robots.txt file is better used to control your crawl budget.

Key Takeaway

Checking for errors in your Search Console should be a part of your SEO routine. Always optimize all your pages. Make sure that these pages can be crawled and indexed by Google with no problem at all. While submitting a sitemap does not directly affect your rankings, it will help Google notify you of any errors that might negatively impact your website’s rankings.

 


Google Scholar: Indexation & Ranking

In this post, we will be looking at Google Scholar, how to optimise for it and how this Google platform can contradict traditional SEO practices.

Previously, my knowledge of Google Scholar was limited to my university experience, so when I was handed the task of improving indexation and ranking in Google Scholar for a client, I didn’t know where to start. I took to Google and my fellow Distillers to enlighten me. I bombarded John Mueller at Brighton SEO in the hope of some directional guidance. There wasn’t a nice guide out there on how to improve presence and rankings in Google Scholar. Really, it was down to me to figure this one out.

And so, I embarked on my exploration of the world of academic search engine optimisation (also known as ASEO).

What is Google Scholar?

Google Scholar is Google’s academic platform, which facilitates academic research and learning. It covers various sources of academic research, for example books, journal articles, reports, and publications from universities and professional societies. It indexes scholarly articles from across the web, bringing them all into one convenient place with related works, additional citations and author information.

For academic sites, publishers and researchers, Google Scholar is key to traffic performance. This platform requires traditional SEO optimisation balanced with Google Scholar specific optimisations.

Current visibility on Google Scholar

Unlike with standard SEO, there aren’t any magical tools out there for Google Scholar with ranking/indexation data. 

The “site:” operator can provide a ballpark figure for indexation. It is important to note that using this method is not 100% accurate but if you want to compare your relative visibility to competitors, it can provide some insight. 

For example, below we have two academic sites in Google Scholar. Using the “site:” operator, we can see oup.com has approximately 1,770,000 results indexed, whereas plos.org has 252,000 results indexed. This is a relative comparison and not fully representative of indexation in Google Scholar.

[Screenshot: “site:oup.com” results in Google Scholar]

Primary Technical SEO Checks

Before getting started with Google Scholar optimisations, ensure that your site is healthy from a technical SEO perspective. I won’t delve into all the technical checks in this blog post; you can use our technical SEO checklist and read more about specific elements on our blog network.

Some fundamental SEO hygiene issues that are key to Google Scholar’s indexation:

  • Canonical tags
    • These should be used to associate old versions of an article and duplicate content with the canonical version (see the sketch after this list).
    • See this post from Moz for more information.
  • Metadata 
    • Metadata should be optimised for a given article or portal page.
    • Ensure your site does not have duplicated or missing metadata. Metadata is a clear signal to Google as to what a page is about.
    • Yoast has a nice post which takes you through key metadata elements.
  • H1
    • Similar to metadata, ensure H1s are unique and specific to a page. There should only ever be one H1 on a page.
  • Sitespeed
    • Google Scholar has stated that “an overly slow response from your website” can negatively impact indexation. You can test a site’s speed using Google’s PageSpeed Insights tool.
    • This guide from Moz explains what page speed is and provides some recommendations for SEO.
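
As a quick illustration of the canonical and metadata points above (a minimal sketch with hypothetical URLs and text), these hygiene elements live in the page’s head:

    <head>
      <!-- Point duplicate or older article versions to the canonical URL -->
      <link rel="canonical" href="https://example.com/journal/article-123">
      <!-- Unique, descriptive metadata for this article -->
      <title>Effects of X on Y: A Randomised Trial | Example Journal</title>
      <meta name="description" content="A randomised trial examining the effects of X on Y, published in Example Journal.">
    </head>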

It is crucial for a site to be in a healthy state for standard Google search before it can tackle Google Scholar – this is worth the time investment.

How to optimise for Google Scholar?

Content

Google Scholar will only consider indexing content that is scholarly in nature. This could include:

  • Articles
  • Research papers
  • Conference papers
  • Technical reports
  • Abstracts
  • Dissertations

Either the full text of the article or the abstract has to be accessible to users and robots from the URL displayed in Google Scholar search results. Access to the article or abstract needs to be free of charge, without a login, interstitial ads, or further click-throughs.

Google Scholar can index both PDFs and HTML pages. In the case of PDFs, the abstract (or the entire article) should be accessible on the HTML version of that PDF.

Quick SEO Reminders

  • Place each article (along with its associated abstract) in its own unique HTML page or PDF file.

Crawlability 

As with standard Google Search, it is essential that Google Scholar can crawl your site, find article pages and identify the important content.

To ensure article pages can be reached, it is necessary to have a browsable interface. The architecture of the site should ensure each page can be reached from the homepage using internal links. This can be done by:

  • Having author-specific pages listing all articles written by a given author
  • Ordering articles by dates
  • Using tags to group articles by topics or specific keywords

Find out more about the importance of a site’s architecture and how to conduct an information architecture audit.

Quick SEO Reminders

  • Ensure crawlers have access to the site and its article pages. Checking your robots.txt file is a good starting point.

Indexation: Scholarly Meta Tags

It is key that Google Scholar can identify bibliographic data for indexation. Citations are an influential ranking factor (which will be discussed later in this article) and therefore, Google Scholar needs to also understand the references made between articles. This is done using academic meta tags.

What are Scholarly Meta Tags?

Unlike standard SEO, scholarly articles require a number of academic-specific HTML <meta> tags in the HTML source code of a page, which enable Google Scholar to extract bibliographic data. These are equivalent to the meta tags used in standard SEO (e.g. <meta name=”description” content=”…”>).

There are various meta-tag schemes these tags can be presented in. Each format essentially does the same thing; what is important is that key information has been marked up by these tags for Google to identify – read more about them in Google’s guide.

Google Scholar accepts the following formats:

  • Highwire Press (citation_title)
  • Eprints (eprints.title)
  • BE Press  (bepress_citation_title)
  • PRISM (prism.title)
  • Dublin Core (DC.title)

At a minimum, these tags should be used for the following (a brief sketch follows the list):

  • The title tag (citation_title) 
  • The author tag (citation_author)
  • Publication date (citation_publication_date)
  • An associated file such as a PDF file  (citation_pdf_url)
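
For instance, a minimal set of Highwire Press tags (with hypothetical article values) might look like this in the page’s head:

    <!-- Highwire Press style tags, hypothetical article values -->
    <meta name="citation_title" content="Effects of X on Y: A Randomised Trial">
    <meta name="citation_author" content="Smith, Jane">
    <meta name="citation_author" content="Doe, John">
    <meta name="citation_publication_date" content="2019/06/15">
    <meta name="citation_journal_title" content="Example Journal">
    <meta name="citation_pdf_url" content="https://example.com/journal/article-123.pdf">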

Throughout my research, I came across multiple academic sites using more than one meta tag format. For example, below we can see that Nature.com is using Highwire Press, PRISM and Dublin Core tags within the source code.

[Screenshot: Nature.com source code showing Highwire Press, PRISM and Dublin Core tags]

At present, there doesn’t seem to be a one-size-fits-all solution on this matter. If your competitors are doing it (and they have greater visibility than you do), then this technique may help ensure that Google Scholar has access to all key information.

These meta tags are essential for indexation in Google Scholar. It is also important to link to associated PDF files using citation_pdf_url or similar. Without it, Google Scholar may incorrectly index PDFs as metadata cannot be pulled from the HTML version of the page.

Quick Tips

  • Ensure references are numbered (1., 2., 3.) in PDFs and are formatted as an <ol> list in the HTML source code.

Citations

Google Scholar has stated that citations are an influential factor for indexation and ranking.

Google Scholar aims to rank documents the way researchers do, weighing the full text of each document …  as well as how often and how recently it has been cited in other scholarly literature

Google Scholar

We can use the “site:” operator again here to understand citation performance. When using the “site:” operator, Google Scholar seems to rank results by citation count.

For example, oup.com has approximately 1,770,000 results, with citations reaching 25,963 for the top-ranking article, whereas elifesciences.org has approximately 6,560 results, with citations reaching 1,583. There seems to be some correlation between the number of indexed results and citation counts.

[Screenshots: top “site:” results for oup.com and elifesciences.org in Google Scholar]

Visibility, ranking improvements and citations work hand in hand, as visibility and ranking increase so should citations.

There are a few things you can do to push your citation count:

  • Encourage your writers to cross-reference articles that your platform has already published
  • Consider sharing data on data-sharing websites or contributing to Wikipedia, and cite your articles as frequently as is appropriate. Read more about this here.

SEO Reminders

  • Ensure titles and metadata have been optimised for a given article. This will help the article’s visibility and therefore, encourage citations.
  • Add schema markup for scholarly articles in the HTML source code of an article. This won’t affect how the result is displayed in Google Scholar, but it will mark up citations, author and related article details in Google’s SERPs (a sketch follows this list).
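
As an illustration (a minimal sketch with hypothetical values, using the schema.org ScholarlyArticle type):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "ScholarlyArticle",
      "headline": "Effects of X on Y: A Randomised Trial",
      "author": [
        { "@type": "Person", "name": "Jane Smith" },
        { "@type": "Person", "name": "John Doe" }
      ],
      "datePublished": "2019-06-15",
      "publisher": { "@type": "Organization", "name": "Example Journal" }
    }
    </script>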

Contradictions with Traditional Google Search

Sitemaps: PDFs

In a discussion with a Google Scholar representative about increasing the number of articles indexed in Google Scholar, they suggested submitting version-less PDF URLs in an XML sitemap. Standard SEO practice dictates that similar or duplicate content should be associated back to the original content, usually with a canonical tag.

This would suggest that the PDF duplicates of the HTML should point back to the HTML page with a canonical signal (for PDFs this is sent as a rel=”canonical” HTTP header, since there is no HTML head to put the tag in). However, Google said the PDFs should be in a sitemap, while best practice dictates that only status-200 URLs that are not canonicalised to another page should be in a sitemap.

At this point, weigh up your priorities. If the priority is to increase visibility, then submitting the PDF URLs in a sitemap may be more effective. If you take this route, it is best to use a separate, supplementary sitemap to limit the risk of affecting your entire site.
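
A supplementary sitemap of this kind is just a standard XML sitemap listing the version-less PDF URLs (the paths below are hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- Version-less PDF URLs submitted for Google Scholar -->
      <url>
        <loc>https://example.com/journal/article-123.pdf</loc>
        <lastmod>2019-06-15</lastmod>
      </url>
      <url>
        <loc>https://example.com/journal/article-124.pdf</loc>
      </url>
    </urlset>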

Limitations

Reflecting Changes

Google Scholar has stated that “once you update your website, it can take anywhere from a few days to 6-9 months for these changes to be reflected in Google Scholar Search results”.

It is important to keep this in mind when optimising for Google Scholar: changes take time to take effect, and Google Scholar seems to take longer than standard search to update and reflect them.

Monitoring performance will help you understand the effect of any changes made to your site. Changes will be reflected in Google Analytics (GA); you will find this data under Acquisition – All Traffic – Source/Medium. This assumes your GA is correctly set up; see this Google Analytics audit checklist to ensure your Google Analytics is in tip-top condition.

Trial and Error

The lack of resources and strategies around Google Scholar optimisation means there will be a level of trial and error. Each site behaves differently, so any key quirks of your site should be considered when creating a strategy. Of course, the time needed to test and experiment is elongated by Google Scholar’s delayed response in reflecting changes.

Key Learnings

  • Before attempting to optimise for Google Scholar, ensure fundamental SEO changes have been made
  • This will be a long process, don’t expect to see immediate changes or success
  • Include scholarly specific meta tags in the source code – various academic sites use more than one format
  • Add scholarly article schema markup to articles
  • To boost indexation, submit version-less PDF URLs in a supplementary sitemap
  • Monitor when changes were made and monitor changes in indexation and rankings

Do you have any insights into the running of Google Scholar? Leave a comment below!


PPC & SEO Synergy: Landing Page Testing, PPC & SEO Efficiencies

This post is the second of a series of 3 articles: we intend to cover a series of synergies between SEO and PPC that could help your business/clients run the two channels more efficiently and optimise the overall spending.

Part 2 includes two more synergy ideas which will be focused on the following subjects:

  • Landing page testing: we recommend testing the SEO landing page for instances where SEO & PPC landing pages differ, for the same keyword/s.
  • Strategy when both organic & paid results appear at the same time: we re-create a series of scenarios and recommend what tests to implement in instances where your site appears for organic and paid results for the same keyword/s.

Before reading the post below, I recommend you check part one here.

Landing page testing

As you might all know, while in PPC the advertiser can pick the landing page that will be shown for a specific keyword, in the SEO world this is not possible as the search engine does the decision making.

SEOs can obviously work towards the desired outcome, mapping and optimising landing pages for specific keyword groups, making sure a search engine is able to pick the page we want – but in reality, with larger sites (e-commerce in particular) with tons of pages, this process is not as easy and straightforward as we would like.

Another difference between PPC and SEO lies in the diversity of intent between the two results:

  • Generally speaking, PPC ads tend to aim for conversion – so when the landing page is picked, this happens with a site conversion in mind.
  • On the other hand, Google picks organic pages based on how well they think such pages could respond to a user query: this is an important difference we have to understand.

Have a look at the example below to picture this [note: this is a very simplified example for a brand query, picked for the purpose of this post – the landing page testing synergy is applicable to a multitude of queries, not just brand ones]

  • PPC Page (chosen by the advertiser): /book/25DR-lion-king/ – sends the user to the Lion King booking page, asking them to choose a date (upper photo).
  • SEO Page (chosen by Google): /Disney-Presents-The-Lion-King-tickets/artist/975016 – an editorial Lion King page that presents the musical (About section) and shows which events are taking place (Events section) (bottom photo).

When looking at some of my clients’ accounts, I used to run into this discrepancy a lot, which is fairly normal considering the differences between the two channels.

[Screenshot: PPC vs SEO results]

Based on what was discussed above and the example provided, does this mean Google ‘likes’ the SEO page better than the PPC page for the queries such as “ticketmaster lion king”?

  • For an organic result: yes, because for informative or more generic queries, Google assumes that users are still at the top of the marketing funnel, where they are probably still browsing results – the average user wants to check the price and description of the lion king musical on Ticketmaster.
  • For a paid result: no, because the main objective of this activity is a conversion, which generates money to Ticketmaster and Google: win-win.

What would happen if PPC picked the SEO informative page for their ads instead of the transactional page? Would the Quality Score be impacted as a result?

My suggestion is the following:

For keywords whose Quality Score is low, it is worth testing the SEO landing page instead of the PPC one, when there is a discrepancy between the two.

And here is why:

  • A lot of times, PPC ‘lazily’ picks the destination URL without thinking too much about landing page experience (remember post one?), which is a crucial contributor to the final score, which then impacts CPC.
  • Other times, PPC picks the most transactional page to help their case: SEO pages are often too far from the conversion point, which is what PPC ultimately cares about.
  • However, not all keywords might require a transactional page: it is important to consider the user intent and act accordingly. If we are willing to bid on some top-of-the-funnel keywords, landing page experience and user intent should be the priority.
  • Testing whether landing page experience could be easily improved by switching to the SEO page is easy and is worth trying.

How to get started

To get started, these seven steps need to happen:

  1. Pick keywords with a low QS (lower than 7 is a good start)
  2. Find out if the landing pages between the SEO & PPC results are different
  3. Analyse the type of keyword and intent behind it. This step is crucial: depending on the type of keyword and intent that Google associates for that particular keyword, the outcome of this test could be very different.
  4. Check how different the SEO & PPC pages are: how far off a conversion is the SEO page? If the SEO page is far from the conversion point, then I would expect my conversion data to be noticeably impacted if I were to use it for my PPC ads.
  5. Implement the SEO page in the PPC ad and keep monitoring the Quality Score for the keywords to which the change has been applied.
  6. Keep the SEO landing page where changes have been positive, revert back if not.
  7. Start the process over and check your QS frequently.

Quick recap: why is this worth it?

  • In some instances where the landing pages between SEO and PPC results differ, it is worth experimenting with SEO landing pages for PPC ads as this change can help you increase quality and lower your CPC.

Organic and Paid listing appearing at the same time: what to do?

I am sure you have all had this conversation at some point in your SEO-SEM career: should we bid on this set of keywords, for which we already have good organic visibility? Is there any point in having PPC ads if my SEO results are already strong?

Let me start with a clear statement: I do not have the answer, and beware of the people who say they do! What I have learnt in 5+ years of experience in digital marketing is that, most of the time, all you need to do to prove a point is to test your assumptions: what works for one site might not work for another, and so on. That’s why we built ODN at Distilled; testing, testing and more testing!

I am going to re-create a series of scenarios and share my thoughts on the different approaches you could take when deciding what to do when organic and paid listings appear at the same time.

Scenario 1: Brand keywords, good organic positioning

Imagine the following situation:

  • Keyword: brand type
  • SEO situation: ranking in position 1

The key question here is the following:

Should I or should I not bid on my brand terms, using precious PPC spend if I am already ranking 1 organically?

Reasons to bid on your brand terms:

  • Brand defence: especially for highly competitive markets, you want to occupy as much search space as possible, so it makes sense. Also, for certain markets and situations your competitors (or retailers or partners) are allowed to use your brand terms in their ads, so in these situations, definitely do defend your name!
  • Brand awareness: a lot of people I talk to in the industry want to see their brand bid on these terms from a credibility and brand awareness point of view. If you think that is important, then do so.

See an example where it is worth bidding on your brand keyword:

  • For the query ‘halifax mortgage’ Halifax is appearing with a text ad and a couple of SEO results. It is worth noticing that there is competition for this term and that the destination URLs between the PPC ad and the first organic results are different.
  • My opinion here: keep bidding on your term.

Reason NOT to bid on your brand terms:

  • Save that cash: self-explanatory, right? If there is no competition on that keyword and you think your SEO listing will absorb the traffic that a potential PPC ad would have attracted, then definitely consider an SEO-only approach.

Before going for it, I recommend building a testing framework that eliminates seasonality and takes into account all the other marketing channels you are running (they could skew the analysis otherwise), and then test if this is true. I have seen tests where not bidding on brand terms makes absolute sense, and the savings are quite substantial when applied to a large number of keywords: so why not explore the opportunity? It is worth reiterating that this would only work for brand terms that no other competitors are bidding on.

See an example where it might be worth NOT bidding on your brand keyword:

  • For the query ‘halifax mortgage calculator’ Halifax is appearing with a text ad and a couple of SEO results. In this instance, there is no competition for this term and the destination URLs between the PPC ad and the first organic results are the same.
  • My opinion here: consider an SEO-only approach.

Scenario 2: Non-brand, good organic position

Imagine the following situation:

  • Keyword: non-product type, non-brand type
  • SEO situation: ranking in position 1

There are a lot of considerations to keep in mind here; I will mention the ones I consider most important:

  • Volatility: are organic rankings too volatile?
  • Competition: Is the competition very tough for this keyword/cluster?

If your answer is yes to any of the above, then you clearly cannot rely on SEO to consistently be at the top of the SERP. Consider PPC to hold a position at the very top instead.

But the real key question is this:

How important is this keyword to your business?

If it is very important, you want to try and use both PPC and SEO at the same time: it will guarantee more space on the SERP (especially on mobile, where space is even more precious), therefore higher CTR. If it is not as important and you are confident that your organic result is better than the competition, then you may want to use that PPC spend on something else.

See an example below where a site occupying the organic position 1 decides not to bid on PPC: fantasticcleaners.com has no ads showing for the keyword ‘find cleaner’ despite being a high volume and high competition type of term.

Scenario 3: Product keyword, good organic position

Imagine this hypothetical situation:

  • Keyword: product type, non-brand
  • SEO situation: ranking in position 1
  • Google Shopping Ads: appearing for that keyword

As most of you know, Shopping Ads tend to appear for product-related searches where the user’s likely intent is a conversion. This scenario is similar to scenario 2 and involves the same questions: answer them with Shopping Ads in mind instead of text ads.

See an example below where a site ranks at position 1 organically and has Shopping Ad showing:

notonthehighstreet.com is appearing both on Google Shopping for ‘birthday gifts for family’ and as the top SEO result.

Scenario 4: Informative keyword, featured snippets

Imagine this fourth scenario:

  • Keyword: informative/generic keyword (non-product), non-brand
  • SEO situation: ranking in the answer box (featured snippets)

As most of you might know, you do not have to rank first organically to be eligible in the answer box (read this post to know more), so it is a very appealing opportunity for a lot of sites with less established organic results. As featured snippets occupy such a large portion of the SERP, it is quite evident that the user’s attention will be dragged there – the key question here is the following:

Do you think it is worth appearing for generic/informative terms where chances of conversions might be low (very top of the funnel activity)?

If you are trying to generate traffic and interest in your brand, why not consider it? The price of these keywords might be very cheap and not a lot of companies are interested in bidding in that space, so, as a result, it might be an opportunity worth exploring.

See an example below where a site ranks in the answer box and there are PPC ads appearing for the query:

Despite the fact that getyourguide.co.uk appears in the featured snippet, Airbnb still decides to bid on that particular query.

Always audit your landing pages: a must step before testing

Another key consideration relates to the major differences between PPC and SEO landing pages (refer to the previous paragraph about Landing page testing to understand this point).

When considering whether to ‘switch off’ PPC, always think about how well the SEO page/s could pick up that traffic.

Follow these steps to have a better idea:

  • If PPC and SEO use the same page for a particular keyword, then this applies:

    • we expect the user journey to remain the same if the paid ads are removed, as the page does not vary between the two channels
    • by removing the PPC results (same landing page), we expect SEO to absorb most of the PPC traffic and conversions
  • If PPC and SEO use a different page for a particular keyword, then do the following:

    • Analyse the type of keyword and intent behind it – top vs bottom of the funnel
    • Check how different the SEO & PPC pages are: how far off a conversion is the SEO page? How much information and content does the PPC page display?
    • If the SEO page is significantly different (more informative or further from a conversion) than the PPC page, our expectations should be adjusted accordingly: for example, it is likely the SEO page will absorb the PPC traffic but not the conversions, as the path to conversion is not comparable. So, switching off PPC in these instances will likely save money, but the overall number of conversions will be impacted – hence, a slightly riskier approach that should be tested.

Make sure to account for the above considerations when conducting this type of testing.

Creating a table like this in Excel/Google Sheets can really help you: see my table below, using Distilled’s SearchLove conference as a fictitious example.

  • Keyword: distilled upcoming event – PPC landing page: distilled.net/events/searchlove-london/ – SEO landing page: distilled.net/events/ – Type of keyword: informative – Steps to conversion: 1 (PPC), 2 (SEO)
  • Keyword: book distilled event london – PPC landing page: distilled.net/events/searchlove-london/ – SEO landing page: distilled.net/events/ – Type of keyword: transactional – Steps to conversion: 1 (PPC), 2 (SEO)
  • Keyword: distilled searchlove – PPC landing page: distilled.net/events/searchlove-london/ – SEO landing page: distilled.net/events/searchlove-london/ – Type of keyword: not clear (generic) – Steps to conversion: 1 (PPC), 1 (SEO)

Quick recap: why is this worth it?

  • It is worth experimenting with your paid & organic listings for multiple reasons: from brand defence to awareness to saving you a lot of money (if applied on a large portfolio of keywords).
  • Doing so will help you understand more about your market and your audience, with the ultimate goal of improving your PPC spend and taking advantage of your SEO presence.

Part 2 of our SEO & PPC synergy series ends here. Stay tuned for the last article on the subject, which will include two more synergy ideas and a downloadable checklist.

Don’t forget to check out Part 1 here.


How to Recover From Google’s Broad Core Algorithm Updates



A few weeks have passed since Google rolled out their latest broad core algorithm update. Articles have circulated highlighting the sites that experienced massive wins and the sites that experienced the opposite. What I see prevailing whenever Google rolls out broad core algorithm updates are questions such as “how do I fix it?”, “what did I do wrong?”, or “I didn’t do anything but my site traffic improved, why is that?”. There is a variety of answers given, but they don’t seem to get at the whole purpose of a broad core algorithm update. Before we delve into recovering if you were hit by the broad core update, what exactly is a broad core update?

What Is a Broad Core Update?

Simply put, it’s an algorithm update rolled out by Google multiple times a year that doesn’t necessarily target specific issues. A broad core update is an update to the main search algorithm that takes a more holistic view of a website: its expertise, authoritativeness, trustworthiness (E-A-T), and its quality.

Because of these many, varying factors, Google can’t really tell us what needs to change on a website without revealing the most important aspects of their algorithm. Think about it this way: say there are 150 ranking factors that have varying importance in Google’s eyes. When they roll out a broad core update, the importance of 63 of those ranking factors changes and their order is rearranged. Of course, this is just an example and not a fact, but it’s a useful way of imagining what really happens during a broad core update.

Here’s an example of how different broad core updates affected one of our clients:

[Screenshot: Analytics traffic with Google update dates marked]

This shows us how different broad core algorithm updates affect websites in a varying manner every time. The blue dots above the graph show when the broad core updates happened. So, we can infer that regular broad core algorithm updates target different aspects every time they are rolled out – not the same thing over and over again.

Google’s Take on Recovering From Broad Core Updates

Getting hit by an algorithm update is a normal occurrence, especially if your website still has a lot of room for improvement. In a recent Google Webmaster Hangout, a webmaster asked Google’s own John Mueller a question regarding a drop in traffic for their news site. Here’s the full question:

“ We’re a news publisher website that’s primarily focusing on the business finance vertical. We probably have been impacted by the June Core Update as we’ve seen a traffic drop from the 1st week of June.

Agreed that the update specifies that there are no fixes, no major changes that need to be made to lower the impact. But for a publisher whose core area is content news, doesn’t it signal that it’s probably the content, the quality or the quantity, which triggered Google’s algorithm to lower down the quality signal of the content being put up on the website which could’ve lead to a drop of traffic?

We’re aware that many publisher sites have been impacted. In such a scenario, it would really help if Google could come out and share some advice to webmasters and websites. Not site specific, but category or vertical specific at least on how to take corrective measures and actions to mitigate the impact of core updates. It would go a long way in helping websites who are now clueless as to what impacted them.”

[Screenshot: Google Webmaster Hangouts]

John Mueller went on to give the best answer he could possibly give, and here’s a summary of his answer (not verbatim):

“There’s nothing to fix, since there’s no specific thing that the algorithm targets. A variety of factors that relate to a website evolve over time, and that affects your traffic and rankings. There are no explicit changes you can make, but there is an old blog post (published in 2011) on the Webmaster Central Blog that is basically a guide to building high-quality sites, and he highly recommends that webmasters read it.”

Watch the Webmaster Hangout

There you have it. There’s nothing to fix, but there is a lot of room to improve on. The blog post that John Mueller recommended contains a list of questions (not necessarily actual ranking signals) that would help you understand what Google thinks about when it ranks your site:

  • Would you trust the information presented in this article?
  • Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
  • Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?
  • Would you be comfortable giving your credit card information to this site?
  • Does this article have spelling, stylistic, or factual errors?
  • Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?

Those are just some of the questions listed in the blog post. Aside from originality and usefulness, another thing that came to mind while I was reading it is that even before the term E-A-T was coined, Google was already treating it as an important factor for rankings. Successfully proving that your site’s authors are experts, that the body of content is valuable, and that the facts in the content (and on your site) are trustworthy – all of this equates to E-A-T.

How to Recover from Google’s Broad Core Updates

  • Use the Guidelines Provided by Google to Look for Inadequacies in Your Website

    • Use the guidelines (the 2011 blog post) in your site’s best interest. They already give you a view of the things your site can do better. Capitalize on that. It could take a lot of time and effort, but if you want to be successful in your SEO, it will be worth it. Additionally, you can read Google’s updated Search Quality Raters Guidelines to deepen your understanding of Google’s standards for a high-quality, useful website.
  • Improve your Site’s Expertise, Authority, and Trustworthiness (E-A-T)

    • There’s a lot to do here, but the first step is always to improve your author E-A-T. It’s a simple thing to talk about but difficult to do. You have to prove that you’re an expert in the area you’re writing about; in recent times, medical and pharmaceutical websites have been the target of algorithm updates since not all of their content comes from reputable or expert authors. Here’s a simple tip: if you start aligning your site with the guidelines highlighted in the Webmaster Central blog post, your E-A-T will improve as well.
  • Ask for Help

    • As the owner of a website, it is hard to see its faults since you only have your own perspective to work from. But if you ask for help from a community that shares the same interests or knows the same things that you do, its members can give their thoughts about your site, and you’ll be able to see faults that you were never able to see before. The SEO community is a particularly large one, and we have our fair share of intelligent and helpful people who are willing to give you their two cents. Don’t be afraid to ask for help from the community or anyone you know, since it will help you grow as well.
  • Think Holistically

    • As I’ve mentioned, it’s a mistake to focus on a specific thing or factor when it comes to Google’s broad core updates. Sometimes, not focusing on a specific thing allows you to discover the reason why your site was impacted by the broad core update. Additionally, not focusing too much on the nitty gritty keeps you open-minded and allows you to consider many possibilities that help you diagnose your site’s traffic or ranking drop.

That’s it. It sounds simple but it’ll be a challenge for us to fully complete these tasks. But the end result will surely be rewarding, to say the least. Do you know any other things that I missed out on? Let me know in the comments below!


Breadcrumbs & SEO: What They Are and Why They’re Important for SEO


A trail of breadcrumbs is a storytelling element used to describe a path that one leaves to find their way back to their starting point. It began in the German fairy tale Hansel and Gretel and has since become a common reference seen throughout pop culture.

On your website, breadcrumbs are navigational tools that not only help users make their way around your site, but also appeal to Google’s search engine algorithm and have a great impact on your SEO.

Bottom line is, you need breadcrumbs on your website for both SEO and improvements to the user experience. The user experience should not be overlooked. It is one of the key factors in increasing your rate of retention, the backbone of profitability.

But what exactly are breadcrumbs? How do they help the user experience and your SEO campaign? And how can you implement them into your website?

What are Breadcrumbs?

Breadcrumbs are a navigational component of a website that helps users successfully navigate various pages without getting confused or lost.

Simply put, breadcrumbs are a text path on a website, usually placed at the top of a page, which shows the path you’ve taken to get to a certain section of the site.

Let’s say you’re on the Testimonial video page of a website. To get on that page you first visited the home page. Then you navigated to their clients page. Once there, you clicked on case studies, and from there you found the testimonial videos page.

A set of history-based breadcrumbs highlighting such a journey would look something like this:

Home > Clients > Case Studies > Testimonial Videos

The breadcrumbs showcase exactly how you journeyed throughout the site. It’s telling you not only where you are but where you’ve been previously. What’s more, each highlighted step of the journey can be clicked on to instantly navigate back to a previous page.

Another interesting factoid about breadcrumbs is that they can be featured in Google search engine results pages.

If you have breadcrumbs in place on your site, when you pop up in a Google search, the page that you’re linking to will have the breadcrumbs listed beneath the title of the page. This helps users understand where the pages they navigate to are on your site in relation to many of the main hubs.

It’s an extra feature that is more attractive to users. Even if the page the search engine is linking to is not what they’re looking for, a breadcrumb for something that they are interested in might catch their eye.

Types of Breadcrumbs

Website breadcrumbs are not so simply defined. In fact, there are multiple different types of breadcrumbs one might use on their website.

Namely, there are three different kinds of breadcrumbs. We will go into each of them in detail, explaining the differences so that you can decide which type you want to use on your site.

Hierarchy Based Breadcrumbs:

Hierarchy based breadcrumbs are the most common form of website breadcrumbs. If you’ve encountered breadcrumbs in the past, it’s very likely they were of this type.

Hierarchy based breadcrumbs tell you where you are in relation to major hubs that are found on the site. A prime example would be the home page.

Rather than tell you where you’ve been already, it navigates a path back to that main page. Meaning that no matter where you enter the site, you’ll always be able to find a way back home.

Attribute-Based Breadcrumbs:

Attribute-based breadcrumbs are usually found on an ecommerce website. They create a breadcrumb trail featuring the attributes of the product or products you’re searching for.

If someone was looking for a black men’s leather jacket in size XL, their attribute based breadcrumb trail would read something like this:

Home > Jackets > Men’s Jackets > XL > Black > Leather

This makes the product search far easier, allowing shoppers to backtrack when needed and refine their search parameters.

History Based Breadcrumbs:

As one could guess from the name, history-based breadcrumbs deal directly with your browsing history and how you’ve personally navigated the site. Breadcrumb links at the top of the page feature the various pages you’ve already visited, listed in the order that they were navigated to.

It shows the history of your journey throughout the site, negating the need to ever use a “back” button. If you entered the site via a contact us page, then navigated around to check out some features and read up on the staff before looking at the homepage, your history-based breadcrumb trail would look like this:

Contact Us > Features > Staff > Home

Benefits of Breadcrumbs for Users

The main benefit that breadcrumbs have for users is that it enhances the user experience.

It cannot be overstated how vital user experience is for the success or failure of your site. You could have the greatest SEO on the planet, featured at number one for every keyword. But, if your user experience is lacking, you will find that those clicks are not converting.

You have to have a website that is both optimized for SEO and optimized for the user. Breadcrumbs help with both.

For users, they help people avoid becoming lost on your site. This reduces friction and frustration as users navigate through your pages.

Breadcrumbs are particularly helpful for users because they are a fairly common interface that your website visitors will be more familiar with.

By using breadcrumbs, it is far easier for the user to navigate back to an earlier page. They could even navigate back several pages with one click, something that a back button is unable to provide.

This should substantially lower the bounce rate of your site.

Another important function of breadcrumbs from the standpoint of website usability is that it gives users a path back to your homepage, even if they’ve never visited it previously. Remember: people don’t typically find websites through the homepage. Usually, it is through a product page or service function that was linked in a search engine results page.

That’s why every page of the site needs to be treated as an entry point, with breadcrumbs serving as a path for users entering through a product link.

Breadcrumbs serve an important function, offering an alternative navigational option so that users won’t leave. When a user enters your site through a product page, they need a clear path to the other sections of your site, particularly those where they can gain information and convert.

If they hit the back button from one of those entry points, they will just go back to the search engine results page, where they can find all of your competitors.

Make no mistake about it, the user experience matters.

Regardless of how good your SEO might be, a poorly optimized site could lead to a huge bounce rate. That’s because 79% of users will abandon a non-optimized site. This is even more true for mobile users, who are five times more likely to bounce from your site if it is not set up accordingly.

Benefits of Breadcrumbs for SEO

We’ve covered why breadcrumbs are essential to the user experience, but how can they help those same users find you on Google?

Breadcrumbs are a huge bonus for Google and can have a great impact on your SEO score. They allow Google to more easily determine the structure of your site, which pays off in the long run.

As we have mentioned before, breadcrumbs can become a feature on your Google search engine result, which is very appealing to users and is more likely to get clicks. This could be a great way to ensure that, even if you’re not the first option on the results page, you’re still getting a good amount of traffic.

When applying breadcrumbs to your site, it’s important to utilize structured data on the back end to make it more appealing to Google and easier to integrate into a search engine results page.

Structured data communicates directly with the search engine and takes the guesswork out of the equation.

By adding breadcrumbs to your site and backing them up with structured data, you’re allowing Google to highlight your site more easily.
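
To make the structured-data step concrete, here is a minimal sketch (in Python, with hypothetical page names and example.com URLs) of what schema.org BreadcrumbList markup can look like when generated and embedded as JSON-LD. The trail and URLs are purely illustrative; plugins generally emit markup of this same shape for you.

```python
import json

# Hypothetical breadcrumb trail for an example product page.
trail = [
    ("Home", "https://www.example.com/"),
    ("Jackets", "https://www.example.com/jackets/"),
    ("Men's Jackets", "https://www.example.com/jackets/mens/"),
]

# Build schema.org BreadcrumbList structured data (JSON-LD).
breadcrumb_ld = {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
        {"@type": "ListItem", "position": i, "name": name, "item": url}
        for i, (name, url) in enumerate(trail, start=1)
    ],
}

# Embed the result in the page head as a JSON-LD script tag.
print('<script type="application/ld+json">')
print(json.dumps(breadcrumb_ld, indent=2))
print("</script>")
```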

How to Add Breadcrumbs to Your Site

Now that we know all the benefits of adding breadcrumbs to your site, both for users and SEO, how can we go about actually doing it?

Luckily, the process has become highly simplified over the years, so you don’t need a web design degree and decades of experience to incorporate them. (Though those do help.)

The easiest way to incorporate breadcrumbs into your site would be to use a plugin system. There are many breadcrumb plugins that are available for a lot of the top website builders out there, including WordPress.

Using these applications, it becomes a simple matter to highlight what breadcrumbs you want and apply them, usually with nothing more than the click of a button. Obviously, this is the simplest way to incorporate breadcrumbs. Anyone can easily do this and it takes no website expertise whatsoever.

There are, of course, more involved ways to apply breadcrumbs to your site. These powerful navigational tools can also be added to the code manually by a web designer.

Remember, to be truly effective and catch the eye of Google’s search algorithm, breadcrumbs that are entered into a site need to be accompanied by structured data. This allows the breadcrumbs to be more easily integrated into a search engine results page.

That is why, despite the ease of WordPress or Wix plugins, it is usually best to entrust the application of website breadcrumbs to a professional web designer.

Whether this is someone working in-house or as a part of an outside firm (or freelance hire) does not matter, but if you’re going to be installing them manually with structured data, you might need some extra help.

Website code can be tricky, with one mistake leading to a slew of issues both to the design and functionality of your site. It is not something that should be left in the hands of amateurs.

When coding your breadcrumbs, remember that the breadcrumb code is often found within the single.php and page.php template files. It should also be placed above the page title, so as not to cause a design issue.

Make sure that you’re not adding the code to the functions.php file. Doing so will lead to otherwise preventable issues.

If this sounds like complicated technobabble to you, then it’s time to invest in a professional website designer to make these structured changes.

In Conclusion

If you’re presenting a website without breadcrumbs to Google and your users, you’re doing a disservice to yourself.

When you consider how effective breadcrumbs are at creating a more informative search engine result and improving your overall customer experience, it becomes a no-brainer decision.

Incorporate website breadcrumbs into your site and help your SEO score while also making your site more accessible to users.

Breadcrumbs & SEO: What Are They and How They’re Important for SEO was originally posted by Video And Blog Marketing

10 Crucial SEO Mistakes You Might Be Doing

Last updated on

Cover Photo - 10 Crucial SEO Mistakes You Might Be Doing

With hundreds of ranking factors being used by Google to identify websites that deserve to be on the first page of the search results, we SEOs can go crazy with the number of things to think about and tasks to do.

SEO is competitive. Once you reach the top page, you can never afford a mistake that could gravely affect your website. There is no perfect SEO campaign, but these SEO mistakes are something you should take a look at.

Unoptimized Page Titles and Meta Descriptions

Page titles remain one of the most important on-page SEO factors. They matter to both search engines and users. I’ve seen a lot of websites use only their brand name as the page title of their homepage, and that alone is causing them to miss a lot of ranking opportunities.

Page titles alone can increase organic rankings by a huge margin. This is one of the small things you can A/B test to see what works for you.

On the other hand, there were a lot of violent reactions to Google’s SEO Mythbusters episode where Google Webmaster Trends Analyst Martin Splitt said that meta descriptions are one of the most important things for SEO. While there has been a lot of criticism of this statement, I am actually with him.

Yes, meta descriptions are no longer a direct ranking factor, but just like page titles, they are the first thing users see before entering your website.

Even if you produce high-quality content regularly, crappy titles and descriptions mean searchers will click on your website less, and those clicks will go to your competitors.

Not Doing a Regular Audit of Indexed Pages

An SEO audit is something all SEOs should do regularly. While there are a lot of SEO audit checklists that can be downloaded from around the internet, the most important audit starts with your Google Search Console data.

The Coverage report will show you all pages being indexed by Google. If you’re seeing an absurd number of indexed pages, you might need to de-index some of them to save crawl budget. At the same time, there might be important pages that aren’t being indexed because of a crawl error.

Fixing the errors detected in Search Console is also a must. These could be indexing problems that negatively affect your rankings. A rough way to cross-check your coverage is sketched below.
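
As a simple illustration of that cross-check, the sketch below (Python, standard library only) compares the URLs listed in a downloaded copy of your sitemap against a CSV export of pages reported as indexed. The file names and the "URL" column are assumptions about your own export, not anything Search Console mandates.

```python
import csv
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(path):
    """Collect <loc> URLs from a downloaded XML sitemap file."""
    tree = ET.parse(path)
    return {loc.text.strip() for loc in tree.iter(f"{SITEMAP_NS}loc")}

def exported_urls(path, column="URL"):
    """Read URLs from a CSV export (the column name is an assumption)."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column].strip() for row in csv.DictReader(f)}

important = sitemap_urls("sitemap.xml")         # pages you want indexed
indexed = exported_urls("coverage_valid.csv")   # pages reported as indexed

print("Important pages missing from the index:")
for url in sorted(important - indexed):
    print("  ", url)

print("Indexed pages not in the sitemap (possible crawl-budget waste):")
for url in sorted(indexed - important):
    print("  ", url)
```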

Not Scouting Your Competitors

Competitor research should be a part of any SEO campaign. If you’re not doing this, you’re missing out on a lot.

Scouting your competitors will give you an idea of what type of content they are publishing and what keywords they are targeting. Use this data to gather keyword suggestions, create better content, and dominate them in the search results.

You also miss out on link building opportunities if you’re not spying on your competitors. Checking their backlinks and reaching out to those websites is a great way of gathering links for yourself.

Not Optimizing for Long-Tail Keywords

Long-tail keywords are often overlooked because they bring in less traffic, but they are great opportunities. Long-tail keywords usually have low difficulty, so they’re easier to rank for and can increase a website’s SEO value. At the same time, long-tail keywords have a higher chance of converting because they cater to specific searchers.

Not Matching Content with Search Intent

Yes, content is king. We’ve heard it time and time again. But that doesn’t mean you just put out any content that you like.

Search intent, in my opinion, is one of the most important things SEOs should think about when producing content. Google is always for the user and the most important thing to them is serving the right information to searchers.

There are 4 types of search intent:

  • Informational
  • Navigational
  • Transactional
  • Commercial

Put yourself in the shoes of your audience and look at the list of keywords you have gathered. Think about what they are thinking when they search these terms. Or better yet, go ahead and search them on Google and see what results you’ll be served.

Building Links from Irrelevant Websites

You could build hundreds or thousands of links, but if they are totally irrelevant to your website’s niche, Google will ignore them and your link building campaign will be a failure.

Sure, it is tempting to get links from a high authority website but link relevance is just as important as link authority.

Not Optimizing for Mobile

With the recent announcement that Google will now crawl websites using Mobile-First Indexing by default, there is no reason not to optimize your website for mobile. With more and more users surfing the web on mobile, Google will always favor websites that load faster and perform better on mobile than those that don’t.

To test how your website performs on mobile, you could use Google’s Mobile-Friendly Test or PageSpeed Insights; a quick way to query the latter programmatically is sketched below. There are also other free testing tools like GTmetrix. These tools will give you valuable information that you can use to optimize your website’s speed.

One of the easiest things you can do if you’re using WordPress is install a free plugin to create AMP pages for your website. Once Google indexes your AMP pages, users will most likely land on them and see a lightweight version of your website.
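
As a rough illustration, this sketch queries the public PageSpeed Insights API for a mobile run. The exact response fields are an assumption based on the Lighthouse-style report the tool returns, so treat it as a starting point rather than a definitive integration; the example.com URL is a placeholder.

```python
import json
import urllib.parse
import urllib.request

def mobile_performance_score(page_url):
    """Query the PageSpeed Insights API (v5) for a mobile performance score."""
    endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
    params = urllib.parse.urlencode({"url": page_url, "strategy": "mobile"})
    with urllib.request.urlopen(f"{endpoint}?{params}") as resp:
        data = json.load(resp)
    # Field path is an assumption based on the Lighthouse-style response.
    score = data["lighthouseResult"]["categories"]["performance"]["score"]
    return score * 100  # Lighthouse reports scores on a 0-1 scale

if __name__ == "__main__":
    print(mobile_performance_score("https://www.example.com/"))
```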

Not Optimizing Anchor Texts

Anchor texts are important for both inbound and outbound links. Using tools like Ahrefs, you can see the top anchors for your website. Having 5,000 links means nothing if your top anchor texts include terms that are totally irrelevant to your website.

The same goes for internal links. Do not link to other pages of your website just for the sake of linking and inserting your keywords. Make it look natural and don’t force it.

Keyword Cannibalization

Keyword cannibalization happens when multiple pages of a website are ranking for a single keyword. Not only is this a waste of effort, but it can also be detrimental to your rankings.

This problem often goes unnoticed and has caused headaches in the SEO community. If search engines see two blog posts eligible to rank for a single keyword, it confuses them, resulting in both articles ranking low.

Keyword cannibalization has a lot of grey areas, and you should be careful before you decide to scrap an article or combine articles together.

To spot keyword cannibalization on your website using Google Search Console, go to the Performance report, add the keyword you are targeting as a Query filter, and it will show you all pages that have impressions for that keyword. You could also use Ahrefs to solve keyword cannibalization. A rough script-based check is sketched below.
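
If you prefer to check in bulk, here is a minimal sketch that scans a performance export for queries where more than one page is collecting impressions. The file name and column names are assumptions about your own export format, not fixed by Search Console.

```python
import csv
from collections import defaultdict

# Column names below are assumptions about your own export format.
QUERY_COL, PAGE_COL, IMPRESSIONS_COL = "query", "page", "impressions"

pages_per_query = defaultdict(dict)

with open("search_console_performance.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        query = row[QUERY_COL].lower()
        pages_per_query[query][row[PAGE_COL]] = int(row[IMPRESSIONS_COL])

print("Possible keyword cannibalization:")
for query, pages in pages_per_query.items():
    if len(pages) > 1:  # more than one page gets impressions for this query
        print(query)
        for page, impressions in sorted(pages.items(), key=lambda p: -p[1]):
            print(f"  {page} ({impressions} impressions)")
```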

Not Having an SSL Certificate

Have you ever gone to a website and seen a message that your connection is not private? Or a website that is labeled “not secure” in the URL bar? Ugly, right?

A few years ago, Google announced that HTTPS is a ranking factor. This is one of the basics of setting up a website, and yet I still see a lot of websites running on HTTP.

This is something not only Google is concerned about, but users as well. Users will most likely not continue to your website, or will click away, when they see this warning. There are a lot of websites selling SSL certificates, but there are also services that offer them for free.
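
A quick way to confirm that a site serves valid HTTPS, using only Python’s standard library, is to open a TLS connection and read the certificate’s expiry date; an invalid or expired certificate raises an error instead. The domain below is a placeholder.

```python
import socket
import ssl

def certificate_expiry(host, port=443):
    """Open a TLS connection and return the certificate's notAfter date."""
    context = ssl.create_default_context()  # verifies the certificate chain
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return cert["notAfter"]  # e.g. 'Jun  1 12:00:00 2020 GMT'

# Placeholder domain; an invalid or expired certificate raises ssl.SSLError.
print(certificate_expiry("www.example.com"))
```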

Key Takeaway

SEO mistakes happen. With hundreds of ranking factors and many details, it’s almost impossible to run a perfect SEO campaign.

But one of the things I love most about SEO is that no campaign is the same as another. It’s a mix and match. Try to find what works for you and create goals to guide you every step of the way so you make as few mistakes as possible.

Have you had an SEO mistake that affected your website badly? Share it in the comments down below!

10 Crucial SEO Mistakes You Might Be Doing was originally posted by Video And Blog Marketing

Location Extensions Augmented Advertisements

When a Google patent uses the word “Content,” it often means advertisements rather than just the content on a website. That was true when I wrote about a Google patent on combining advertisements and organic results in the post Google to Offer Combined Content (Paid and Organic) Search Results.

I don’t often write about paid search here, but sometimes see something interesting enough to write about. We’ve been seeing some of the features from organic search appearing in paid search results, such as sitelink extensions, and Structured Snippets extensions. Google has written up extensions, which are ways of adding additional information to advertisements “to maximize the performance of text ads.”

One specific type of extension is the location extension. A location extension can add information to an advertisement that you bid upon, displaying more detail with your ad, such as:

Google Ads location extensions allow you to show your business address, phone number and a map marker along with your ad text.


That information isn’t shown to everyone but may be shown to people within a threshold distance of an advertiser’s location. The location extensions page doesn’t provide much in the way of detail about when location extensions might be triggered, which is why I thought it would be helpful to write about this patent application, which appears to cover location extensions.

A Google patent application about location-augmented advertisements was published this week. The patent tells us when a location extension that could be shown with an ad might be triggered:

The method includes receiving a request for content from a user device. The method further includes identifying, by one or more processors, a content item for delivery to the user device responsive to the request. The method further includes determining a location of the user device. The method further includes determining a threshold distance that a user is likely willing to travel when visiting a physical location associated with the content item or content sponsor. The method further includes identifying a bounding region associated with the location of the user device. The method further includes identifying one or more location extensions that are associated with the content item. The method further includes determining, by one or more processors, when one of the one or more location extensions is included in the bounding region and when a distance between the location extension and a current location of the user is less than the determined threshold distance. The method further includes augmenting, based on the determining when the distance is less than the determined threshold, the content item with the one location extension.

A Think with Google article on location extensions provides more information about ways to use location extensions.

The location extensions patent application provides more details on how location extensions work. It points out the following features (a rough sketch of the matching logic follows the list):

  1. The request for content can be associated with a search query, a map request or page request.
  2. The user device can be a mobile device, and location information for the user can be provided as part of the request.
  3. Determining the threshold distance can include evaluating requests from plural users and determining the threshold distance as a mathematical function derived from the evaluating.
  4. Evaluating can include evaluating driving direction requests received from users that terminate at a location associated with the one location extension.
  5. The mathematical function can be a numeric average and the threshold distance can represent an average distance a user would drive to visit the location.
  6. The threshold distance can be determined based on a characterization associated with a sponsor of the content item.
  7. The characterization can be based on a type of product or service offered by the sponsor.
  8. Identifying one or more location extensions can include identifying plural location extensions that are included in the bounding region and selecting one of the plural regions.
  9. The selecting can be a random selection.
  10. Augmenting can include providing the one location extension for presentation in proximity to the content item when displayed on the user device.
  11. Identifying a bounding region can include: identifying a first bounding region; determining that no location extensions for the content item are included in the first bounding region; identifying, based on determining that no location extensions for the content item are included in the first bounding region, a second larger bounding region; determining when one of the one or more location extensions is included in the second larger bounding region; and augmenting the content item with the one location extension.
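
To make that matching logic more concrete, here is a rough sketch of how a bounding-region plus threshold-distance check could work. This is my own interpretation for illustration only, not code from the patent; the coordinates, store names, and 15 km threshold are all made up.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def pick_location_extension(user, extensions, bounding_region, threshold_km):
    """Return a business location to attach to the ad, or None.

    `extensions` is a list of (name, lat, lon); `bounding_region` is a
    (min_lat, min_lon, max_lat, max_lon) box around the user; `threshold_km`
    stands in for the distance a user is likely willing to travel.
    """
    min_lat, min_lon, max_lat, max_lon = bounding_region
    candidates = [
        (name, lat, lon)
        for name, lat, lon in extensions
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon      # in region
        and haversine_km(user[0], user[1], lat, lon) <= threshold_km    # close enough
    ]
    return candidates[0] if candidates else None  # the patent allows random selection

# Hypothetical data: a user in Carlsbad and two store locations.
user_location = (33.158, -117.350)
stores = [("Downtown store", 33.165, -117.345), ("Far store", 34.05, -118.24)]
print(pick_location_extension(user_location, stores, (32.9, -117.6, 33.4, -117.1), 15))
```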

The underlying purpose of this patent is to show location information to searchers who are within a certain distance of an advertiser, based upon how far they are likely to travel and what the advertiser is offering. The patent application is:

Determining Relevant Business Locations Based on Travel Distances
Inventors: Derek Coatney, Eric L. Lorenzo, Yi Zhu, Amin Charaniya and Gaurav Ravindra Bhaya
Assignee: Google LLC
US Patent Application: 20190180326
Published: June 13, 2019
Filed: February 19, 2019

Abstract

Methods, systems, and apparatus include computer programs encoded on a computer-readable storage medium, including a method for providing content. A request for content is received from a user device. A content item is identified for delivery to the user device responsive to the request. A location of the user device is determined. A threshold distance is determined that a user is likely willing to travel when visiting a physical location associated with the content item or content sponsor. A bounding region associated with the location of the user device is identified. Location extensions are identified that are associated with the content item. A determination is made when one of the location extensions is included in the bounding region and when a distance between the location extension and a current location of the user is less than the determined threshold distance. The content item is augmented with the one location extension.

Takeaways

A query that triggers a location extension on an advertisement can be a query that includes a location, such as [italian restaurants in Carlsbad, CA], or a query that doesn’t include a location, such as [French food].

It appears that to have working location extensions as an advertiser, you need to register your location with Google My Business (and link that account with Google Ads), and you have to set up location extensions in Google Ads. You can have multiple locations displayed as well if you have them.

Location extensions look like they could be helpful in attracting the attention of local consumers who may be interested in what you offer on your site.



Location Extensions Augmented Advertisements was originally posted by Video And Blog Marketing

Google – Yoast Team Proposes New API for WordPress Sitemap

Last updated on

Cover Photo - Google - Yoast Team Up for Proposal on XML Sitemap WordPress Core Feature

Devs from Google and Yoast have collaborated on a new project proposal that would have WordPress automatically generate XML sitemaps by default. The proposal to integrate XML sitemaps into WordPress Core has been getting mixed reactions across the SEO community, and I have been observing each one of them.

Now, here’s my take on this new feature proposal. Since most of my clients use WordPress, it is beneficial for me to stay in the loop, and I think you should be updated as well. Here’s what you need to know about the XML Sitemap WordPress Core feature proposal.

What are Sitemaps?

Before I get ahead of myself, let’s backtrack and have a brief section about sitemaps. A sitemap is a list of the web pages on your site that are accessible to crawlers and users. Your sitemap can help you filter out the good quality pages and separate them from the pages that are not worthy of indexation. Webmasters can use the sitemap to ping Google and tell it that the pages included in this list are more important than the others. Sitemaps are also very valuable in keeping your site fresh because they can tell search engines how frequently you update your website.

You can see your sitemap index by appending /sitemap.xml to your homepage URL:

seo hacker sitemap

There is a common misconception that simply having a sitemap automatically counts as a win for a ranking factor. Google does not give out favors to those who have sitemaps just because they were diligent enough to create one and submit it to the search engine; it does not affect the search rankings. However, a sitemap makes it easier for search engines to find the most important content on your site because it allows them to crawl your website better. A minimal sketch of reading a sitemap the way a crawler would is shown below.
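
To see exactly what a crawler gets from that file, here is a minimal sketch that fetches a sitemap and prints the URLs it lists, recursing into child sitemaps if the file turns out to be a sitemap index. The example.com URL is a placeholder for your own.

```python
import urllib.request
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def list_sitemap(url):
    """Print every <loc> in a sitemap; recurse if it is a sitemap index."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    if root.tag.endswith("sitemapindex"):
        for loc in root.findall("sm:sitemap/sm:loc", NS):
            list_sitemap(loc.text.strip())          # child sitemaps
    else:
        for loc in root.findall("sm:url/sm:loc", NS):
            print(loc.text.strip())                 # individual page URLs

# Placeholder URL; swap in your own homepage plus /sitemap.xml.
list_sitemap("https://www.example.com/sitemap.xml")
```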

The integration of XML Sitemaps into WordPress Core as a feature project

WordPress has long established itself as a great foundation for SEO because of its handy features. For one, you can customize metadata pretty easily on the platform. Now, the proposal to integrate sitemaps into its core would further strengthen the platform for SEO efforts.

Sitemaps greatly supplement crawling because they improve site discoverability and accessibility. Search engines would have a better idea of which URLs are relevant to your site and what their purpose is, thanks to the associated metadata.

WordPress does not generate XML sitemaps by default, which is why a team of developers from Google and Yoast proposes that WordPress Core include its own implementation of XML sitemaps. According to them and those who echo an affirmation of this initiative, there is a universal “need for this feature and a great potential to join forces.”

The developers came up with the proposed solution of integrating basic XML sitemaps into WordPress Core by introducing an XML Sitemaps API that enables sitemaps by default.

Enabling XML sitemaps by default will make the following content types indexable:

    • Homepage
    • Posts Page
    • Core Post Types (Pages and Posts)
    • Core Taxonomies (Tags and Categories)
    • Custom Post Types
    • Custom Taxonomies
    • Users (Authors)

You can digest this information more clearly through this diagram from wordpress.org:

XML Sitemap Proposal WordPress Core

It is also important to note that the robots.txt file generated by WordPress will reference the sitemap index. The Sitemaps API aims to extend its use further, and according to the developers, these are the ways the XML sitemaps can be maximized via the proposed API:

    • Add extra sitemaps and sitemap entries
    • Add extra attributes to sitemap entries
    • Provide a custom XML Stylesheet
    • Exclude specific post types from the sitemap
    • Exclude a specific post from the sitemap
    • Exclude a specific taxonomy from the sitemap
    • Exclude a specific term from the sitemap
    • Exclude a specific author from the sitemap
    • Exclude a specific author with a specific role from the sitemap

Once the project proposal has been rolled out fully, it will cover most WordPress content types and help webmasters fulfill the minimum requirements to be indexed by search engines. However, there are some items that the developers are not prioritizing for the initial integration:

  • Image sitemaps
  • Video sitemaps
  • News sitemaps
  • User-facing changes like UI controls in order to exclude individual pages from the sitemap
  • XML Sitemaps caching mechanisms
  • I18n

How can the API help you maximize your sitemap?

Sitemaps are an important factor, especially if you want to stay ahead of your competition in the SERPs. You should pay special attention and care to them because they can make the difference between your site performing successfully and its growth stagnating.

In addition, the XML sitemap proposal highlighted that the API could leverage the standard internationalization functionality provided by WordPress Core, which would help sites stay competitive with localized content. The sitemap feature is a bold promotion of web development best practices for SEO, and knowing Yoast, this could have a real reach into the way people do SEO, since the API can greatly affect how sitemaps are optimized.

Although a sitemap is not a direct ranking factor, keep in mind that you might not rank for your most important content without one. This is why integrating an XML sitemap into WordPress by default can be a step above the usual SEO practices. Additionally, the feature will be at home in WordPress Core because sitemaps can benefit from core-level caching, which in turn can improve the speed and performance of your site.

Key Takeaway

The team is still in the middle of crafting this API, as evidenced by the parting words of Thierry Muller, Developer Relations Program Manager at Google and former Engineering Program Director at WordPress:

Your thoughts on this proposal would be greatly valued. Please share your feedback, questions or interest in collaboration by commenting on this post.

There is a wide audience of webmasters interested in working on the project, as evidenced by the comment section of the make.wordpress.org post and the buzz in the Twitter community, but we cannot properly evaluate this proposal until it is fully integrated into WordPress.

It would be nifty to enable sitemaps by default, but you also have to think about the cons, one of which is that this API might clash with a plugin you have already installed to generate a sitemap. The question is: would this API disregard those plugin features once it is fully implemented in WordPress?

Since this is still in the works, let’s just stay tuned for more updates.

Google – Yoast Team Proposes New API for WordPress Sitemap was originally posted by Video And Blog Marketing

A Week Into Google’s June 2019 Broad Core Update

Last updated on

A Week Into Google’s June 2019 Broad Core Update

It has been a week since Google rolled out their pre-announced June 2019 broad core update. Leading SEO news websites such as Search Engine Journal and many others have already published articles regarding some wins and losses. This is one of the rare instances where Google has let us know ahead of time that they’re going to be updating their algorithm instead of just releasing it and letting us SEOs find out the hard way.

They officially rolled out the update last June 4, and it has been exactly a week since. I wanted to share some findings and how you can improve your understanding of sudden algorithm changes that happen throughout the year.

Effects of the June 2019 Broad Core Update

I started checking the effects of the broad core update yesterday, since it’s safe to assume that the complete update had fully rolled out a couple of days after its official release. Here’s what I found:

Analytics Client Data

 

Analytics Client Data graph

 

Analytics big Client Data

The images above contain the traffic graphs for three of our biggest and oldest clients, and as you can see, there are no notable changes to their traffic. It’s consistent and steady, and the trend hasn’t changed. Although one of them experienced an extremely minimal drop in traffic on the day the broad core algorithm was rolled out, it can mostly be disregarded since traffic is steadily climbing back up.

This is different from what Danny Sullivan, Google’s resident Search Liaison, said in his tweet:

Danny Sullivan Tweet Screenshot

He explicitly stated that core updates are definitely noticeable. But maybe we’re one of the lucky ones that did not get affected by the broad core algorithm.

However, I can’t say the same for others. One of the leading cryptocoin news websites, CCN, was massively affected by the broad core update – leading to them shutting down since their loss of revenue was too great to support their team. This is what they said:

“Google’s June 2019 Core Update rolled out on June 3rd 2019 and CCN’s traffic from Google searches dropped more than 71% on mobile overnight.

Our daily revenue is down by more than 90%.”

This is the most extreme result we can experience: having to close down a profitable website all because of a single algorithm update by Google. You can attribute the drop to a variety of factors, but only Google can truly answer what caused the loss of traffic.

SEO Tools are not Perfect

Although three of our biggest clients were not majorly affected, one of our oldest partners showed an alarming drop when we checked it. Here’s a screenshot:

The date our traffic dropped coincides with the date the broad core update was released, so it was alarming for us. We ran diagnostics on the website and checked whether there was an attack or something that violated Google’s guidelines. So far, we haven’t found anything that might have caused the drop. I decided to check other tools that could confirm whether the drop was real. Here are some screenshots:

Google Search Console Traffic Data

The red arrow points to when the update was rolled out, and as you can see, a drop didn’t happen. I needed to be sure, so I checked another tool:

Ahrefs Traffic Data

After finding this out, I was sure that there wasn’t anything wrong with our client’s traffic. We just needed to fix something in the implementation of our Google Analytics tracking code.

Never rely on just one tool. It’s always best for us SEOs to have a variety of tools at our disposal because no single tool is perfectly accurate. Having a variety of tools to cross-check your site’s data is the safest way to conduct site checks. If you rely heavily on a single tool, you might be looking at inaccurate or downright wrong data – which you’ll then show to your client in your regular reports.

Key Takeaway

We SEOs always have to be prepared to adapt and change our strategies since Google updates its algorithm regularly. To be able to do that, we need accurate data and effective tools to help with our campaigns. Google’s June Broad Core Update is just one among the many algorithm updates they’ve released, and I’m sure it’s not the last one.

Lastly, if you know that what you’re doing is in line with Google’s guidelines and all your strategies are white-hat, you don’t have to worry about Google’s regular updates. Although we can never predict what Google will do next, it’s better to be safe than sorry. So far, the June Broad Core Update has had minimal effects on us and our clientele. What about you? Tell me about it in the comments below!

A Week Into Google’s June 2019 Broad Core Update was originally posted by Video And Blog Marketing