E-commerce SEO: a guide on when to create and optimise new category pages

Setting aside the homepage, category pages generate most of an e-commerce site’s organic traffic – does that surprise you? If it does, I have bad news: you might need to reconsider your information architecture. If you have done your job right, you have nothing to worry about.

Curious about how much organic traffic category pages actually account for, I decided to dig into the Google Search Console data of a Distilled client that has run a very successful e-commerce site for several years. These were my findings for the past six months (at the time of writing, November 2018).

Bear in mind this is just an example that shows a fictitious URL structure for an average e-commerce site – the level of category and subcategory pages often differs between sites.

Type of page | Proportion of clicks | Example URL
Category pages | 5.0% | example.co.uk/mens-jeans
1st level subcategory pages | 25.0% | example.co.uk/mens-jeans/skinny-jeans
2nd level subcategory pages | 16.5% | example.co.uk/mens-jeans/skinny-jeans/ripped
Homepage | 40.0% | example.co.uk
Non-transactional pages | 5.0% | example.co.uk/about-us
Product pages | 8.5% | example.co.uk/black-ripped-skinny-jeans-1

This simple exercise very much confirms my thesis: category and subcategory pages on e-commerce sites do account for the biggest chunk of organic traffic outside the homepage – in our case, 46.5% of the total.
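If you want to run the same exercise on your own site, the breakdown above can be approximated with a short script over a Search Console performance export. The sketch below is illustrative only: it assumes a list of (URL, clicks) pairs and uses the fictitious URL structure from the table (path depth for category levels, a trailing numeric ID for product pages, and a hypothetical list of non-transactional paths) – real sites will need their own classification rules.

```python
import re
from urllib.parse import urlparse

# Hypothetical non-transactional paths; adjust for your own site.
NON_TRANSACTIONAL = {"about-us", "contact", "blog"}

def classify(url: str) -> str:
    """Bucket a URL by its path, mirroring the table above (illustrative rules)."""
    path = urlparse(url).path.strip("/")
    if not path:
        return "Homepage"
    segments = path.split("/")
    if segments[0] in NON_TRANSACTIONAL:
        return "Non-transactional pages"
    # Assume product URLs end in a numeric ID, e.g. /black-ripped-skinny-jeans-1
    if re.search(r"-\d+$", segments[-1]):
        return "Product pages"
    if len(segments) == 1:
        return "Category pages"
    if len(segments) == 2:
        return "1st level subcategory pages"
    return "2nd level subcategory pages"

def click_share(rows):
    """rows: iterable of (url, clicks) pairs from a GSC performance export."""
    totals = {}
    for url, clicks in rows:
        bucket = classify(url)
        totals[bucket] = totals.get(bucket, 0) + clicks
    grand = sum(totals.values()) or 1
    return {bucket: round(100.0 * n / grand, 1) for bucket, n in totals.items()}
```

Summing the category and subcategory buckets then tells you what share of clicks your categorisation is responsible for.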

So, we have now shown an example of how important these pages are from an organic standpoint, without answering the question of why. Let’s take a step back and look at the bigger picture.

Why are category pages so important for SEO?

Put simply, users are more likely to search for generic, category-like keywords with strong intent than for very specific product names. If I want to buy a new jumper, chances are I will start with broad search queries, such as “men’s jumpers” or “men’s jumpers 2018”, potentially adding a brand or retailer to my query, rather than a very precise, long-tail search. Who really searches for “Tommy Jeans icon colorblock cable knit jumper in navy multi” unless they are reading from the label, right? For such specific searches, it is your product pages’ job to ‘capture’ that opportunity and get the user as close to a conversion as possible.

Having optimised category and subcategory pages that match users’ searches makes their life much easier and helps search engines better crawl and ‘understand’ your site’s structure.

Image source: Moz

Sitting close to the top of the hierarchy, category pages also benefit from more internal link equity than deeper or isolated pages (more on this later). And let’s not forget about backlink equity: in most instances, category and subcategory pages receive the largest number of external links. As of 2018, links remain one of the most important off-site ranking factors, according to reputable industry sources such as SEMrush.

By now three main elements should be clear: for e-commerce sites, category pages are key from a traffic, information architecture and linking point of view. Pretty important, right? At this stage, the next question is simple: how do we go about creating new pages then?

Creating new category pages: when and why

Before starting with the process, ask yourself the following questions:

  1. What is my main objective when opening a new page? What am I trying to achieve?

If your intent is to capitalise on new keywords that show an opportunity from a search volume standpoint, and/or to improve the hierarchical structure of your site so users can find products more easily, then well done – move on to the next question.

  2. Do I have enough products to support new categories? Are my current category pages thin already or do they contain enough items?

In order for category pages to be relevant and carry enough SEO weight, they should contain a reasonable number of products. Having thin or empty category pages makes no sense, and Google will see it the same way: the poor SEO and UX signals associated with them will drag the page’s rankings down. There is no magical minimum number I would recommend – just use your logic and think about the users first (though please show more than two products per category page, at least).

  3. When should I think of opening new category pages? In which instances is this recommended?

Generally speaking, you should always keep an eye on your categorisation – it is not a one-time task. It is vital that you regularly monitor your SEO performance and spot new opportunities that can help you progress.

As for specific instances, any of the following situations might be a good time to evaluate your category pages. Marketing or branding are pushing for new products? Definitely a good time to think about new category pages. A new trend or term has gone viral? Think about it. 2019 is approaching and you are launching a new collection? Surely a good idea. A site migration is another great chance to re-evaluate your category (and subcategory) pages: whatever form of migration you are going through (URL restructuring, domain change, platform change, etc.), it is vital to have a plan for your category pages, and re-assessing your full list is a good idea.

Always have a clear purpose when you create a new page; don’t do it for the sake of it, or because of internal pressure that might encourage you to do so. Refer to points 1 and 2 and prove the value of SEO when making this decision – without a clear idea in mind, you might soon end up with more risks than benefits.

How to identify the opportunity to open new categories

Having touched on some key considerations before opening new category pages, let’s now go through the process of identifying the opportunities.

Keyword research, what else?

Everything starts with keyword research, the backbone of any content and SEO strategy.

When you approach this task, try and keep an open mind. Use different tools and methodologies, don’t be afraid to experiment.

Here at Distilled, we love Ahrefs so check out a post on how to use it for keyword research.

Here is my personal list of things I use when I want to expand my keyword research a bit further:

  • Keyword Planner (if you have an AdWords account with decent spend; otherwise the broad data ranges are a downer)
  • Ahrefs: see the post to know why it is so cool
  • SEMrush: particularly interesting for competitive keyword research (more on that later)
  • Keyword Tool: particularly useful for additional suggestions, and it also provides data for many platforms other than Google
  • Answer The Public: a great tool for finding long-tail keywords, especially questions (useful for featured snippets), prepositions and comparisons (note that some of its data is obscured unless you pay for the pro version)

If you find valuable terms with significant search volume, then bingo! That is enough to support the logic of opening new category pages. If people are searching for a term, why not have a dedicated page about it?

Look at your current rankings

Whatever method or tool you are using to track your existing organic visibility, dig a bit deeper and try to find the following instances:

  • Are my category pages ranking for any unintentional terms? For example, if my current /mens-jumpers page is somehow ranking (maybe not well) for the keyword “cardigans”, this is clearly an opportunity, don’t you think? Monitor the number of clicks those unintentional keywords are bringing and check their search volume before making a decision.
  • Is my category page ranking for different variations of the same product? Say your /mens-jumpers page is also ranking (maybe not well) for “roll neck jumpers”; this might be an opportunity to create a subcategory page and capitalise on that specific product type.
  • Are my product pages ranking for category-level terms? This is clearly an indication I might need a category page! Not only will I be able to capitalise on the search volume of that particular category-level keyword, but I would be able to provide a better experience for the user who will surely expect to see a proper category page with multiple products.
  • Last but not least: are my category pages being outranked by my competitors’ subcategory pages for certain keywords? For instance, you dig into GSC or your tracking platform of choice and see that, for a set of keywords, your /mens-jeans page is outranked not by your competitors’ equivalent category pages, but by more refined subcategory pages such as /slim-fit-jeans or /black-jeans. Chances are your competitors have done their research and targeted clear sets of keywords by opening dedicated subcategory pages while you have not – keep reading to learn how to quickly capitalise on competitors’ opportunities.

Check your competition

Most of the time, your competitors have already spotted these opportunities, so what are you waiting for? Auditing your competition is a necessary step to find categories you are not covering.

Here is my advice when it comes to analysing competitors’ category and subcategory pages:

  1. Start by checking their sites manually. Their top navigation combined with the faceted navigation menu will give you a good idea of their site structure and keyword coverage.

  2. Use tools like Ahrefs or SEMrush to audit your competitors. If interested in how to use SEMrush for this purpose, check out this post from another Distiller.

  3. Do a crawl of competitor sites to gather information in bulk about their category URLs and meta data: page titles, meta descriptions and headings. Their most important keywords will be stored there, so it is a great place to investigate for new opportunities. My favourite tool in this regard is Screaming Frog.
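If you prefer scripting over a desktop crawler, the same meta data can be extracted with a few lines of standard-library Python. The sketch below only parses HTML you have already fetched (with any HTTP client, respecting robots.txt); the markup in the usage example is hypothetical:

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collects <title>, meta description and <h1> text from one page."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.data = {"title": "", "description": "", "h1": ""}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.data["description"] = attrs.get("content", "")
        elif tag in ("title", "h1"):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current:
            self.data[self._current] += data

def extract_meta(html: str) -> dict:
    """Return the title, meta description and first-level heading text of a page."""
    parser = MetaExtractor()
    parser.feed(html)
    return {k: v.strip() for k, v in parser.data.items()}
```

Running extract_meta over each competitor category page gives you a spreadsheet-ready list of titles, descriptions and H1s to mine for keyword targets – which is essentially what Screaming Frog does at scale.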

Content on category pages

Different SEOs have different views on this as it is quite a controversial topic.

Just take a look at an internal Slack conversation on the topic – we surely like to debate SEO at Distilled!

Some say that descriptions on category pages are written purely for SEO purposes and have very little value for the user. As with many other things, in my opinion it depends on how it is done. Many e-commerce sites tend to have a poorly written 150-to-250-character description of the category, often stuffed with keywords and placed at either the top or the bottom of the page.

Look at the example below from a fashion retailer: the copy is placed at the bottom of a handbags and purses page, so the user needs to scroll all the way down just to see it. Most importantly, though, it does not add any value, as it is extremely generic and too keyword-heavy.

My preference is the following:

  • short but unique description which can be expanded/collapsed by the user (especially convenient on mobile where space is precious)
  • keyword-informed description written in a way that is useful to the user and provides additional information compared to the meta data (yes, make that extra effort)
  • placement at the top of the page and not at the bottom so it gets noticed by the user and the search engine’s bot

By using the description as a useful addition to our page’s meta data, we are helping Google understand what the page is about – especially for new pages that the search engine is crawling for the first time.

Also, let’s not forget about internal linking opportunities such content may be able to offer, especially for weaker/newer pages we may have on the website (more on this later).

Looking closely at Next’s Men’s Knitwear page, we can see how the copy is used for internal linking purposes.

Have you considered your Quality Score?

One more very important note: a description hugely helps an underrated element of our digital marketing – our PPC Quality Score, an aggregated estimate of the quality of your ads. Since category pages tend to be the main destination for PPC ads, we should do whatever is in our power to improve their quality and efficiency.

Landing page experience is Google Ads’ measure of “how well your website gives people what they’re looking for when they click your ad”, and it is one of the most important elements of a keyword’s Quality Score. By using the category page’s content to cover some of the keywords we are targeting in our ad copy, we are heavily impacting our overall Quality Score, which directly impacts our CPC and hence the efficiency of the whole account!

What about the risks of creating new pages?

Creating a new category page is a delicate decision that should be thought through carefully, as it does have its risks.

Be aware of keyword cannibalisation

You are at the stage where you have decided to create a new category page and are about to write great meta data and a description for it, off the back of the keyword research and other tips provided above – great! Before rushing into copywriting, take a minute to evaluate the potential risk of keyword cannibalisation: make sure your new page’s meta data does not cannibalise your existing pages. This often-forgotten check will save you a lot of time further down the line.

The risk of cannibalisation is very real: having pages which are too closely related from an SEO standpoint, especially when it comes to on-page elements (title tags, headings in particular) and content on the page, can cause some serious setbacks. As a result, the pages suffering from this problem will not live up to their full organic potential and will compete for the same keywords. Search engines will struggle to identify which page to favour for a certain keyword / group of keywords and you will end up being worse off.
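One way to spot this early is to look for queries where several of your pages are earning clicks. A minimal sketch, assuming you have exported Search Console data broken down by query and page into (query, page, clicks) tuples:

```python
from collections import defaultdict

def find_cannibalised_queries(rows, min_pages=2):
    """rows: iterable of (query, landing_page, clicks) tuples,
    e.g. a GSC performance export broken down by query + page.
    Returns {query: [(page, clicks), ...]} for queries where several
    pages compete, sorted so the 'winning' page comes first."""
    by_query = defaultdict(lambda: defaultdict(int))
    for query, page, clicks in rows:
        by_query[query][page] += clicks
    return {
        q: sorted(pages.items(), key=lambda kv: kv[1], reverse=True)
        for q, pages in by_query.items()
        if len(pages) >= min_pages
    }
```

Any query this returns with two or more competing landing pages is a candidate for de-optimising one of them or consolidating the pair.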

An example of minor keyword cannibalisation can be seen on this website: https://www.mandmdirect.com/

Their Men’s Knitwear page title is the following:

Mens Jumpers, Cardigans & Knitwear, Cheap V-Neck Cable Knit Jumpers, UK Sale | MandM Direct

Not only is it overly optimised and too long, but it also clashes with its subcategory pages, which are optimised for most of the terms already included in the parent page’s title.

Their Men’s V Neck Jumpers page title is the following:

Cheap Mens V-Neck Jumpers | MandM Direct

When opening a subcategory page such as Men’s V-Neck Jumpers, I personally would have tried to de-optimise the parent page’s title in order to allow the subcategory page to live up to its full potential for its key terms:

De-optimised Men’s Knitwear page title:

Mens Jumpers, Cardigans & Knitwear | UK Sale | MandM Direct

How do you prevent this from happening? Do your research, monitor your keywords and landing pages and make sure to write unique meta data & on-page content. Also, don’t be afraid to re-optimise and experiment with your existing meta data when opening new categories. Sometimes it will take you more than one attempt to get things right.

Crawl budget concerns

Google defines crawl budget as “the number of web pages or URLs Googlebot can or wants to crawl from your website”.

One of the arguments against opening new category pages is crawl budget consumption. For large e-commerce sites with millions of pages, opening many new category pages carries the risk that some parts of your site are no longer crawled, or are crawled less often.

In my opinion, this is a concern only for (very) large e-commerce sites which are not necessarily well-maintained from an SEO point of view. Gary Illyes from Google seems to be on my side:

Source: https://www.searchenginejournal.com/gary-illyes-whats-new-in-google-search-pubcon-keynote-recap/274273/?ver=274273X3

In particular, a well-structured and optimised faceted navigation is vital to avoid running into crawl budget issues, so I recommend reading this Moz post written by another Distiller.

By following overall SEO guidelines and regularly checking Google Search Console and server logs, it is possible to determine if your site has a crawl budget issue.

If interested, learn more about server logs.
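As a starting point, a crawl-distribution check can be scripted directly against your access logs. The sketch below assumes combined-log-format lines and matches Googlebot by user agent only; for a rigorous analysis you should also verify the requesting IP via reverse DNS, since user agents can be spoofed:

```python
import re
from collections import Counter

# Matches the request and user-agent fields of a combined-log-format line.
LOG_RE = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def googlebot_hits_by_section(log_lines):
    """Count Googlebot requests per top-level URL section."""
    counts = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m or "Googlebot" not in m.group("agent"):
            continue
        path = m.group("path").split("?")[0].strip("/")
        section = "/" + (path.split("/")[0] if path else "")
        counts[section] += 1
    return counts
```

If an important section barely shows up in the counts while parameter URLs dominate, that is exactly the kind of crawl budget signal worth investigating.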

Internal linking equity

This is more of a real problem than crawl budget, in my modest opinion, and here is why: creating additional pages means that the internal link equity across your site gets redistributed. If not closely monitored, you might end up diluting it without a clear process in mind or, worse, wasting it across the wrong pages.

My colleague Shannon wrote a great piece on how to optimise website internal linking structure.

When creating new pages, make sure to consider how your internal link equity is impacted: needless to say, opening 10 pages is very different from opening 1,000! Focus on creating more internal links to important pages by exploring options such as navigation (main and side navigation) and on-page content (remember the paragraph above?).
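To keep an eye on that distribution, you can count unique linking pages per URL from any crawler’s internal-link export. A minimal sketch, assuming the export is a list of (source URL, target URL) pairs:

```python
from collections import Counter

def inlink_counts(edges):
    """edges: iterable of (source_url, target_url) internal links from a crawl
    export. Duplicate source->target pairs are collapsed, so a sitewide
    navigation link from one page counts once."""
    return Counter(target for _, target in set(edges))

def weakly_linked(edges, threshold=5):
    """Pages with fewer than `threshold` unique linking pages.
    Note: true orphans never appear in the edge list at all, so they must
    be found by comparing against your full URL inventory."""
    return sorted(p for p, n in inlink_counts(edges).items() if n < threshold)
```

Running this after adding a batch of new category pages quickly shows whether they actually received the navigation and on-page links you planned for them.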

The rule of thumb here is simple: when approaching new category pages, don’t forget to think about your internal linking strategy.


Category pages are the backbone of e-commerce sites, hence they should be closely monitored by SEOs and webmasters. They are vital from an information architecture and internal (and external) linking point of view, and they attract the largest share of organic traffic beyond the homepage. By following the above tips, it becomes easier to identify opportunities where new category pages can be ‘opened’ in order to capitalise on additional organic traffic.

I am curious to hear other people’s opinions on the topic, so please get in touch with me or Distilled by using the comments below or my email: samuel.mangialavori@distilled.net.

E-commerce SEO: guide on when to create and optimise new categories pages was originally posted by Video And Blog Marketing

WordPress 5.0 Out: New Features You Need to Know Before Updating

After a few months of waiting, WordPress has finally released its most important update of the year. WordPress 5.0 aims to make the popular CMS more accessible to new and old users alike, and to provide a more streamlined editor with a wide variety of options for the different kinds of content you can create on your website.

The update is centered on the newest WordPress editor, called Gutenberg. This new editor introduces features that enable more flexibility and versatility in creating content. Ahead of the official launch, Gutenberg was released as a preview so that regular WordPress users could try the new editor early; this early release enabled users to express their concerns and to see the benefits of a more flexible and accessible editor.

With these new updates in mind, here are things that you need to know to make sure that your latest WordPress update would work smoothly.

Gutenberg Editor

As mentioned, the highlight of the WordPress 5.0 update is the new Gutenberg editor. It features a system called Blocks, which lets users add content much more easily: you add a block for images, text or other kinds of media, and simply drag and drop your files.

As discussed in one of our earlier posts about WordPress 5.0, there are at least 15 different kinds of blocks to accommodate different kinds of files and content types, and users no longer need coding knowledge to create content in WordPress, which makes it attractive for newcomers wanting to set up their own website.

Another handy feature, which makes integrating old and new content easier, is the ability to switch to the “Classic” WordPress editor when modifying old content. This ensures that the transition between the old and new editing systems proceeds smoothly.

With CMSs like Wix and Squarespace promoting their easy-to-use systems to a host of new users, the new Gutenberg editor allows WordPress to compete by offering a simpler system for creating websites that can generate revenue and traffic.

Plugin Compatibility

As with any major release, one of the main user concerns is the compatibility of WordPress plugins. Some plugins might not be immediately compatible with the new version, which can lead to issues with website functionality. This can leave some of your standard website functions not working from the start, which can affect traffic and potential revenue.

The best way to resolve this is to wait for the developers of these plugins to address the issue and release an updated version. While the issue is being remedied, it is best to temporarily disable the plugin to prevent any functionality problems that may affect other elements of your website.

Impact on Content Marketing

A more user-friendly content creation system makes content marketing on WordPress an even better experience overall. It allows for more creative and dynamic content, such as blog posts that contain more interactive forms of media, letting visually impressive work be published. Quality content has been shown to drive more traffic and is an indicator of a reputable, trustworthy site that can rank in SERPs.

Creating dynamic content also challenges content creators to craft more diverse articles, making content marketing more competitive. Along with more visually impressive content, developers can now build a better user experience thanks to the new editor: the simplified system helps users make navigation within their website more efficient, which is a definite plus.

Key Takeaway

WordPress 5.0 is finally here, and while the wait was longer than expected, the overall package looks promising, as it makes WordPress more accessible than ever. With new CMS software emerging and gaining popularity, this new version ensures WordPress can attract new users while optimizing existing features to improve the creative experience.

If you have questions and inquiries about WordPress and SEO in general, leave a comment below.

Interviewing Google’s John Mueller at SearchLove: domain authority metrics, sub-domains vs. sub-folders and more

I was fortunate enough to interview Google’s John Mueller at SearchLove and quiz him about domain authority metrics, sub-domains vs. sub-folders, and how bad rank tracking really is.

I have previously written and spoken about how to interpret Google’s official statements, technical documentation, engineers’ writing, patent applications, acquisitions, and more (see: From the Horse’s Mouth and the associated video as well as posts like “what do dolphins eat?”). When I got the chance to interview John Mueller from Google at our SearchLove London 2018 conference, I knew that there would be many things that he couldn’t divulge, but there were a couple of key areas in which I thought we had seen unnecessary confusion, and where I thought that I might be able to get John to shed some light. [DistilledU subscribers can check out the videos of the rest of the talks here – we’re still working on permission to share the interview with John].

Mueller is Webmaster Trends Analyst at Google, and these days he is one of the most visible spokespeople for Google. He is a primary source of technical search information in particular, and is one of the few figures at Google who will answer questions about (some!) ranking factors, algorithm updates and crawling / indexing processes.


New information and official confirmations

In the post below, I have illustrated a number of the exchanges John and I had that I think revealed either new and interesting information, or confirmed things we had previously suspected, but had never seen confirmed before on the record.

I thought I’d start, though, by outlining what I think were the most substantial details:

Confirmed: Google has domain-level authority metrics

We had previously seen numerous occasions where Google spokespeople had talked about how metrics like Moz’s Domain Authority (DA) were proprietary external metrics that Google did not use as ranking factors (this, in response to many blog posts and other articles that conflated Moz’s DA metric with the general concept of measuring some kind of authority for a domain). I felt that there was an opportunity to gain some clarity.

“We’ve seen a few times when people have asked Google: “Do you use domain authority?” And this is an easy question. You can simply say: “No, that’s a proprietary Moz metric. We don’t use Domain Authority.” But, do you have a concept that’s LIKE domain authority?”

We had a bit of a back-and-forth, and ended up with Mueller confirming the following (see the relevant parts of the transcript below):

  1. Google does have domain level metrics that “map into similar things”
  2. New content added to an existing domain will initially inherit certain metrics from that domain
  3. It is not a purely link-based metric but rather attempts to capture a general sense of trust


Confirmed: Google does sometimes treat sub-domains differently

I expect that practically everyone around the industry has seen at least some of the long-running back-and-forth between webmasters and Googlers on the question of sub-domains vs sub-folders (see for example this YouTube video from Google and this discussion of it). I really wanted to get to the bottom of this, because to me it represented a relatively clear-cut example of Google saying something that was different to what real-world experiments were showing.

I decided to set it up by coming from this angle: by acknowledging that we can totally believe that there isn’t an algorithmic “switch” at Google that classifies things as sub-domains and ranks them deliberately lower, but that we do regularly see real-world case studies showing uplifts from moving, and so asking John to think about why we might see that happen. He said [emphasis mine]:

in general, we … kind of where we think about a notion of a site, we try to figure out what belongs to this website, and sometimes that can include sub-domains, sometimes that doesn’t include sub-domains.

Sometimes that includes sub-directories, sometimes that doesn’t include specific sub-directories. So, that’s probably where that is coming from where in that specific situation we say, “Well, for this site, it doesn’t include that sub-domain, because it looks like that sub-domain is actually something separate. So, if you fold those together then it might be a different model in the end, whereas for lots of other sites, we might say, “Well, there are lots of sub-domains here, so therefore all of these sub-domains are part of the main website and maybe we should treat them all as the same thing.”

And in that case, if you move things around within that site, essentially from a sub-domain to a sub-directory, you’re not gonna see a lot of changes. So, that’s probably where a lot of these differences are coming from. And in the long run, if you have a sub-domain that we see as a part of your website, then that’s kind of the same thing as a sub-directory.

To paraphrase that, the official line from Google is:

  1. Google has a concept of a “site” (see the discussion above about domain-level metrics)
  2. Sub-domains (or even sub-folders) can be viewed as not a part of the rest of the site under certain circumstances
  3. If we are looking at a sub-domain that Google views as not a part of the rest of the site, then webmasters may see an uplift in performance by moving the content to a sub-folder (that is viewed as part of the site)

Unfortunately, I couldn’t draw John out on the question of how one might tell in advance whether your sub-domain is being treated as part of the main site. As a result, my advice remains the same as it used to be:

In general, create new content areas in sub-folders rather than sub-domains, and consider moving sub-domains into sub-folders with appropriate redirects etc.

The thing that’s changed is that I think that I can now say this is in line with Google’s official view, whereas it used to be at odds with their official line.

Learning more about the structure of webmaster relations

Another area that I was personally curious about going into our conversation was about how John’s role fits into the broader Google teams, how he works with his colleagues, and what is happening behind the scenes when we learn new things directly from them. Although I don’t feel like we got major revelations out of this line of questioning, it was nonetheless interesting:

I assumed that after a year, it [would be] like okay, we have answered all of your questions. It’s like we’re done. But there are always new things that come up, and for a lot of that we go to the engineering teams to kind of discuss these issues. Sometimes we talk through with them with the press team as well if there are any sensitivities around there, how to frame it, what kind of things to talk about there.

For example, I was curious to know whether, when we ask a question to which John doesn’t already know the answer he reviews the source code himself, turns to an engineer etc. Turns out:

  1. He does not generally attend search quality meetings (timezones!) and does not review the source code directly
  2. He does turn to engineers from around the team to find specialists who can answer his questions, but does not have engineers dedicated to webmaster relations

For understandable reasons, there is a general reluctance among engineers to put their heads above the parapet and be publicly visible talking about how things work in their world. We did dive into one particularly confusing area that turned out to be illuminating – I made the point to John that we would love to get more direct access to engineers to answer these kinds of edge cases:

Concrete example: the case of noindex pages becoming nofollow

At the back end of last year, John surprised us with a statement that pages that are noindex will, in the long run, eventually be treated as nofollow as well.

Although it’s a minor detail in many ways, many of us felt that this exposed gaps in our mental model. I certainly felt that the existence of the “noindex, follow” directive meant that there was a way for pages to be excluded from the index, but have their links included in the link graph.

What I found more interesting than the revelation itself was what it exposed about the thought process within Google. What it boiled down to was that the folk who knew how this worked – the engineers who’d built it – had a curse of knowledge. They knew that there was no way a page that was dropped permanently from the index could continue to have its links in the link graph, but they’d never thought to tell John (or the outside world) because it had never occurred to them that those on the outside hadn’t realised it worked this way [emphasis mine]:

it’s been like this for a really long time, and it’s something where, I don’t know, in the last year or two we basically went to the team and was like, “This doesn’t really make sense. When people say noindex, we drop it out of our index eventually, and then if it’s dropped out of our index, there’s canonical, so the links are kind of gone. Have we been recommending something that doesn’t make any sense for a while?” And they’re like, “Yeah, of course.”

More interesting quotes from the discussion

Our conversation covered quite a wide range of topics, and so I’ve included some of the other interesting snippets here:

Algorithm changes don’t map easily to actions you can take

Googlers don’t necessarily know what you need to do differently in order to perform better, and especially in the case of algorithm updates, their thinking that “search results are better now than they were before” doesn’t translate easily into “sites that have lost visibility in this update can do XYZ to improve from here”. My reading of this situation is that there is ongoing value in the work SEOs do to interpret algorithm changes and the longer-running directional themes in Google’s changes, to guide webmasters’ roadmaps:

[We don’t necessarily think about it as] “the webmaster is doing this wrong and they should be doing it this way”, but more in the sense “well, overall things don’t look that awesome for this search result, so we have to make some changes.” And then kind of taking that, well, we improved these search results, and saying, “This is what you as a webmaster should be focusing on”, that’s really hard.

Do Googlers understand the Machine Learning ranking factors?

I’ve speculated that there is a long-run trend towards less explainability of search rankings, and that this will impact search engineers as well as those of us on the outside. We did at least get clarity from John that at the moment, they primarily use ML to create ranking factors that feed into more traditional ranking algorithms, and that they can debug and separate the parts (rather than a ML-generated SERP which would be much less inspectable):

[It’s] not the case that we have just one machine learning model where it’s like oh, there’s this Google bot that crawls everything and out comes a bunch of search results and nobody knows what happens in between. It’s like all of these small steps are taking place, and some of them use machine learning.

And yes, they do have secret internal debugging tools, obviously, which John described as:

Kind of like search console but better

Why are result counts soooo wrong?

We had a bit of back-and-forth on result counts. I get that Google has told us that they aren’t meant to be exact, and are just approximations. Sure, but when you get a site: query that claims 13m results, then click to page 2 and find that there are actually only 11 – not 11m, just 11 – you say to yourself that this isn’t a particularly helpful approximation. We didn’t really get any further on this than the official line we’ve heard before, but if you need that confirmed again:

We have various counters within our systems to try to figure out how many results we approximately have for some of these things, and when things like duplicate content show up, or we crawl a site and it has a ton of duplicate content, those counters might go up really high.

But actually, in indexing and later stage, I’m gonna say, “Well, actually most of these URLs are kinda the same as we already know, so we can drop them anyway.”

So, there’s a lot of filtering happening in the search results as well for [site: queries], where you’ll see you can see more. That helps a little bit, but it’s something where you don’t really have an exact count there. You can still, I think, use it as a rough kind of gauge. It’s like is there a lot, is it a big site? Does it end up running into lots of URLs that are essentially all eliminated in the end? And you can kinda see little bit there. But you don’t really have a way of getting the exact count of number of URLs.

More detail on the domain authority question

On the domain authority question that I mentioned above (not Moz’s proprietary Domain Authority metric, but the general concept of a domain-level authority metric), here’s the rest of what John said:

I don’t know if I’d call it authority like that, but we do have some metrics that are more on a site level, some metrics that are more on a page level, and some of those site wide level metrics might kind of map into similar things.

the main one that I see regularly is you put a completely new page on a website. If it’s an unknown website or a website that we know tends to be lower quality, then we probably won’t pick it up as quickly, whereas if it’s a really well-known website where we’ll kind of be able to trust the content there, we might pick that up fairly quickly, and also rank that a little bit better.

it’s not so much that it’s based on links, per se, but kind of just this general idea that we know this website is generally pretty good, therefore if we find something unknown on this website, then we can kind of give that a little bit of value as well.

At least until we know a little bit better that this new piece of thing actually has these specific attributes that we can focus on more specifically.

Maybe put your nsfw and sfw content on different sub-domains

I talked above about the new clarity we got on the sub-domain vs. sub-folder question and John explained some of the “is this all one site or not” thinking with reference to safe search. If you run a site with not safe for work / adult content that might be filtered out of safe search and have other content you want to have appear in regular search results, you could consider splitting that apart – presumably onto a different sub-domain – and Google can think about treating them as separate sites:

the clearer we can separate the different parts of a website and treat them in different ways, I think that really helps us. So, a really common situation is also anything around safe search, adult content type situation where you have maybe you start off with a website that has a mix of different kinds of content, and for us, from a safe search point of view, we might say, “Well, this whole website should be filtered by safe search.”

Whereas if you split that off, and you make a clearer section that this is actually the adult content, and this is kind of the general content, then that’s a lot easier for our algorithms to say, “Okay, we’ll focus on this part for safe search, and the rest is just a general web search.”

John can “kinda see where [rank tracking] makes sense”

I wanted to see if I could draw John into acknowledging why marketers and webmasters might want or need rank tracking – my argument being that it’s the only way of getting certain kinds of competitive insight (since you only get Search Console for your own domains) and also that it’s the only way of understanding the impact of algorithm updates on your own site and on your competitive landscape.

I struggled to get past the kind of line that says that Google doesn’t want you to do it, it’s against their terms, and some people do bad things to hide their activity from Google. I have a little section on this below, but John did say:

from a competitive analysis point of view, I kinda see where it makes sense

But the ToS thing causes him problems when it comes to recommending tools:

how can we make sure that the tools that we recommend don’t suddenly start breaking our terms of service? It’s like how can we promote any tool out there when we don’t know what they’re gonna do next.

We’ve come a long way

It was nice to end with a shout out to everyone working hard around the industry, as well as a little plug for our conference [emphasis mine, obviously]:

I think in general, I feel the SEO industry has come a really long way over the last, I don’t know, five, ten years, in that there’s more and more focus on actual technical issues, there’s a lot of understanding out there of how websites work, how search works, and I think that’s an awesome direction to go. So, kind of the voodoo magic that I mentioned before, that’s something that I think has dropped significantly over time.

And I think that’s partially to all of these conferences that are running, like here. Partially also just because there are lots of really awesome SEOs doing awesome stuff out there.

Personal lessons from conducting an interview on stage

Everything above is about things we learned or confirmed about search, or about how Google works. I also learned some things about what it’s like to conduct an interview, and in particular what it’s like to do so on stage in front of lots of people.

I mean, firstly, I learned that I enjoy it, so I do hope to do more of this kind of thing in the future. In particular, I found it a lot more fun than chairing a panel. In my personal experience, chairing a panel (which I’ve done more of in the past) requires a ton of mental energy on making sure that people are speaking for the right amount of time, that you’re moving them onto the next topic at the right moment, that everyone is getting to say their piece, that you’re getting actually interesting content etc. In a 1:1 interview, it’s simple: you want the subject talking as much as possible, and you can focus on one person’s words and whether they are interesting enough to your audience.

In my preparation, I thought hard about how to make sure my questions were short but open, and that they were self-contained enough to be comprehensible to John and the audience, and allow John to answer them well. I think I did a reasonable job but can definitely continue practicing to get my questions shorter. Looking at the transcript, I did too much of the talking. Having said that, my preparation was valuable. It was worth it to have understood John’s background and history first, to have gathered my thoughts, and to have given him enough information about my main lines of questioning to enable him to have gone looking for information he might not have had at his fingertips. I think I got that balance roughly right; enabling him to prep a reasonable amount while keeping a couple of specific questions for on the day.

I also need to get more agile and ask more follow-ups and continuation questions – this is hard because you have to think on your feet – but I think I did it reasonably well in areas where I’d deliberately prepped to do it. This was mainly in the more controversial areas where I knew what John’s initial line might be, but also knew what I ultimately wanted to get out of it or dive deeper into. I found it harder when I unexpectedly hadn’t quite got 100% of what I was looking for. It’s surprisingly hard to parse everything that’s just been said and figure out on the fly whether it’s interesting, new, and complete.

And that’s all from the comfort of the interrogator’s chair. It’s harder to be the questioned than the questioner, so thank you to John for agreeing to come, for his work in the prep, and for being a good sport as I poked and prodded at what he’s allowed to talk about.

I also got to see one of his 3D-printed Googlebot-in-a-skirt characters – a nice counterbalance to the gender assumptions that are too common in technical areas.

Things John didn’t say

There are a handful of areas where I wish I’d thought quicker on my feet or where I couldn’t get deeper than the PR line:

“Kind of like Search Console”

I don’t know if I’d have been able to get more out of him even if I’d pushed, but looking back at the conversation, I think I gave up too quickly, and gave John too much of an “out” when I was asking about their internal toolset. He said it was “kind of like Search Console” and I put words in his mouth by saying “but better”. I should have dug deeper and asked for some specific information they can see about our sites that we can’t see in Search Console.

John can “kinda see where [rank tracking] makes sense”

I promised above to get a bit deeper into our rank tracking discussion. I made the point that “there are situations where this is valuable to us, we feel. So, yes we get Search Console data for our own websites, but we don’t get it for competitors, and it’s different. It doesn’t give us the full breadth of what’s going on in a SERP, that you might get from some other tools.”

We get questions from clients like, “We feel like we’ve been impacted by update X, and if we weren’t rank tracking, it’s very hard for us to go back and debug that.” And so I asked John “What would your recommendation be to consultants or webmasters in those situations?”

I think that’s kinda tricky. I think if it’s your website, then obviously I would focus on Search Console data, because that’s really the data that’s actually used when we showed it to people who are searching. So, I think that’s one aspect where using external ranking tracking for your own website can lead to misleading answers. Where you’re seeing well, I’m seeing a big drop in my visibility across all of these keywords, and then you look in Search Console and it’s like, well nobody’s searching for these keywords, who cares if I’m ranking for them or not?

From our point of view, the really tricky part with all of these external tools is they scrape our search results, so it’s against our terms of service, and one thing that I notice kind of digging into that a little bit more is a lot of these tools do that in really sneaky ways.

(Yes, I did point out at this point that we’d happily consume an API).

They do things like they use proxies on mobile phones. It’s like you download an app, it’s a free app for your phone, and in the background it’s running Google queries, and sending the results back to them. So, all of these kind of sneaky things where in my point of view, it’s almost like borderline malware, where they’re trying to take users’ computers and run queries on them.

It feels like something that’s like, I really have trouble supporting that. So, that’s something, those two aspects, is something where we’re like, okay, from a competitive analysis point of view, I kinda see where it makes sense, but it’s like where this data is coming from is really questionable.

Ultimately, John acknowledged that “maybe there are ways that [Google] can give you more information on what we think is happening”, but I felt I could have done a better job of pushing for the need for this kind of data on competitive activity, and on the market as a whole (especially when there is a Google update). It’s perhaps unsurprising that I couldn’t dig deeper than the official line here, nor could I have expected a new product announcement about a whole new kind of competitive insight data, but I remain a bit unsatisfied with Google’s perspective. Tools that aggregate the shifts in the SERPs when Google changes their algorithm, and tools that let us understand the SERPs where our sites are appearing, are both valuable – and Google is fixated on the ToS without acknowledging the ways this data is needed.

Are there really strong advocates for publishers inside Google?

John acknowledged being the voice of the webmaster in many conversations about search quality inside Google, but he also claimed that the engineering teams understand and care about publishers too:

the engineering teams, [are] not blindly focused on just Google users who are doing searches. They understand that there’s always this interaction with the community. People are making content, putting it online with the hope that Google sees it as relevant and sends people there. This kind of cycle needs to be in place and you can’t just say “we’re improving search results here and we don’t really care about the people who are creating the content”. That doesn’t work. That’s something that the engineering teams really care about.

I would have liked to have pushed a little harder on the changing “deal” for webmasters as I do think that some of the innovations that result in fewer clicks through to websites are fundamentally changing that. In the early days, there was an implicit deal that Google could copy and cache webmasters’ copyrighted content in return for driving traffic to them, and that this was a socially good deal. It even got tested in court [Wikipedia is the best link I’ve found for that].

When the copying extends so far as to remove the need for the searcher to click through, that deal is changed. John managed to answer this cleverly by talking about buying direct from the SERPs:

We try to think through from the searcher side what the ultimate goal is. If you’re an ecommerce site and someone could, for example, buy something directly from the search results, they’re buying from your site. You don’t need that click actually on your pages for them to actually convert. It’s something where when we think that products are relevant to show in the search results and maybe we have a way of making it more such that people can make an informed choice on which one they would click on, then I think that’s an overall win also for the whole ecosystem.

I should have pushed harder on the publisher examples – I’m reminded of this fantastic tweet from 2014. At least I know I still have plenty more to do.

Thank you to Mark Hakansson for the photos [close-up and crowd shot].

So. Thank you John for coming to SearchLove, and for being as open with us as you were, and thank you to everyone behind the scenes who made all this possible.

Finally: to you, the reader – what do you still want to hear from Google? What should I dig deeper into and try to get answers for you about next time? Drop a comment below or drop me a line on Twitter.

Interviewing Google’s John Mueller at SearchLove: domain authority metrics, sub-domains vs. sub-folders and more was originally posted by Video And Blog Marketing

Google Allo Shutting Down, Duo and Messages Support on the way



2018 has been quite a year for Google, with their development of AI technology, a push for mobile-friendly search, and some of their services shutting down. On that last front, Google has been closing a good number of services in an attempt to fold their features into more comprehensive apps with enhanced functionality that aim to become the best option for users.

The latest Google service shutting down is Google Allo, a messaging app launched in 2016 as Google’s attempt to compete with the likes of WhatsApp, LINE, Viber, and Telegram. The app offered basic messaging, image and video sharing, and even introduced Google Assistant to many users. Despite introducing Google’s popular AI assistant, Google Allo never reached the heights of its competition, which is why it will be shutting down in March 2019.

Along with Google Allo, the original Google Hangouts will also be shut down, with Google aiming to migrate that user base to Hangouts Meet and Hangouts Chat. Google Hangouts is one of Google’s most well-known apps and has even featured in popular media such as films. It was an essential tool that allowed users to set up meetings much more efficiently and made it easier to communicate without having to move to other apps.

With the shutting down of these two messaging services, Google will focus their efforts on Google Duo and Android Messages to create an improved messaging platform for Android users. Here are our thoughts on the impact of Google Allo, and how Google aims to optimize their messaging and communication.

Smoother Messaging on Google

Google has always been about optimizing their services to ensure that they provide the best user experience. With Allo and the original Hangouts slowly moving out of the picture, Google’s future plans for a more comprehensive messaging system on Android will now be able to progress.

Google Duo

The future set-up would consist of Android Messages, Google Duo, Hangouts Meet, and Hangouts Chat. Because these systems are integrated seamlessly, users would be able to communicate much more effectively. Messages would be the primary platform for text messaging, while Google Duo handles video chat. Hangouts Meet and Chat are platforms for office communication, which work well for corporate meetings and internal communications. Google Duo in particular has already seen success, with a solid user base and functionality that works well.

Overall, the future looks good for messaging on Google. But the amount of messaging apps available may prove to be their biggest challenge, as most of these platforms have an established following. This will surely be a big challenge for Google in the upcoming years, but should they be successful, this will surely become another important Google staple.

Shutting Down Services

With Google Allo and the original Hangouts shutting down, this marks another round of Google services closing. Google Inbox and Google+ are also set to shut down in 2019, with the former’s features being integrated into Gmail. The number of services being shut down by Google is quite concerning, as platforms like Allo and Inbox introduced new Google features that have become important today.

One of the main reasons these services have been shut down is their low number of users. With the abundance of similar services, it is always a challenge to attract users to your platform, as most users will not leave services that they have grown accustomed to. Building up a user base takes a good amount of time, which means it might take a while until Google’s new messaging services attain a solid following.

Key Takeaway

Google has had a good amount of challenges over the past few months, and the shutting down of these services is a step toward continuing to improve their products for their users. I’m looking forward to seeing what the new messaging systems offer, and how they integrate into Android to create a more Google-centric ecosystem where all tools and services work together seamlessly.

If you have questions and inquiries about Google services and SEO in general, leave a comment below and let’s talk.

Google Allo Shutting Down, Duo and Messages Support on the way was originally posted by Video And Blog Marketing

Understanding Google Rich Snippets


Rich Snippets have been all the rage since their announcement back in 2009.

Since then, Rich Snippets have evolved into an essential component of search engine results.

Rich results can be seen on almost any SERP (search engine results page), displaying content like recipe information, star ratings for products and services, and much more.

Rich snippets – the results generated from structured data markup – are immensely popular in SEO.

And for good reason: they are eye-catching results that can help you improve your click-through rate by displaying more key information.

But, what exactly are they? What types of rich snippets are there? Do they really help with SEO performance? How do you add them to your site?

We’re going to cover all of that in this post today.

Feel free to jump to a section:

What Are Rich Snippets In Google?

Rich snippets are search engine results displayed by Google that contain supplementary information beyond the normal search engine results page.

Rich snippets are shown by Google, but the information is pulled from websites using structured data markup.

To put this in plain English:

Websites write structured data in their HTML that Google can access for more information beyond a meta description and title tag.
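To make that concrete, here is a minimal sketch of what such structured data can look like. This example uses the JSON-LD format, one of the formats Google accepts (the microdata format is covered later in this post), and every recipe detail in it is made up purely for illustration:

```html
<!-- Hypothetical recipe page: the visible HTML stays unchanged; the
     structured data sits in a script tag that search engines can read. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Pumpkin Pie",
  "author": { "@type": "Person", "name": "Jane Baker" },
  "aggregateRating": { "@type": "AggregateRating", "ratingValue": "4.8", "reviewCount": "231" },
  "cookTime": "PT55M",
  "nutrition": { "@type": "NutritionInformation", "calories": "320 calories" }
}
</script>
```

It is exactly this kind of rating, calorie, and cook-time data that Google can surface in the result itself.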

Let me give you an example directly in search engine results.

Here is a normal search result for a keyword search on pumpkin pie recipes:

And here are a few results of the same search query but they are utilizing rich snippets:

Which one are you going to click on? Probably the one with the images, star ratings, recipe information, calories, and cook time all directly shown on the search results already.

These give you a clear-cut picture of what to expect before clicking.

These rich snippets can show for many types of search results…

Get Your Free Rich Snippet Worksheet

Enter your email and instantly get a Rich Snippet Worksheet to learn how to add it to your website today.


What Are The Different Types Of Rich Snippets?

Rich snippets and structured data can be established for almost any type of website now.

Through Schema.org, you can view their entire list of structured data markup here.

According to Schema.org, the most commonly used structured data markups are:

  • Creative works: Creative Work, Book, Movie, Music Recording, Recipe, TV Series …
  • Embedded non-text objects: AudioObject, ImageObject, VideoObject
  • Event
  • Health and medical types
  • Organization
  • Person
  • Place
  • Local Business
  • Restaurant
  • Product, Offer, AggregateOffer
  • Review, AggregateRating

With a simple Google search, you can spot rich snippets for almost anything.

Restaurant info

For instance, for restaurant information:

(Image Source)

Using structured data markup for their restaurant, this business can showcase:

  • Their menu in detail with all categories (appetizers, entrees, etc), pricing per dish, and detailed info on each individual menu item.
  • Business information, like address, hours, and phone number

Pretty awesome, right?


Events

Another great example of rich snippets can be seen for events, like this card on the right-hand side of the search results for the San Francisco International Auto Show:

Beyond these, Google offers a bunch of ways to customize for rich snippets.

Top stories

For instance, utilizing rich snippets for Google AMP to showcase your blog and news content on the “Top stories” carousel in a given SERP:

(Image Source)


Books

Do you have any books published? Use structured data for book actions to showcase all of your book data, from price to availability and more:

(Image Source)

Breadcrumb navigation

Another amazing feature of rich snippets is breadcrumb information:

(Image Source)

Breadcrumb information shows a potential visitor to your website where the page they are viewing fits in the site’s information hierarchy. This provides more context and can reduce bounce rates by meeting expectations before someone clicks.
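As a rough sketch, breadcrumb markup describes each level of the hierarchy and its position – the URLs and names below are placeholders, not from a real site:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Men's Jeans",
      "item": "https://example.co.uk/mens-jeans" },
    { "@type": "ListItem", "position": 2, "name": "Skinny Jeans",
      "item": "https://example.co.uk/mens-jeans/skinny-jeans" }
  ]
}
</script>
```

Google can then display that trail in place of the raw URL in the search result.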

Carousels for recipes

Carousels for recipes are another very popular feature of structured data markup that are critical to any recipes you want to share:

(Image Source)

With key information like ratings, nutrition, and prep time shown, you can entice users to click better than ever before.

Business information

Business information like phone numbers, founder info, and general “about” information can be displayed via the Knowledge Graph panel, a popular rich result type.

(Image Source)

Online courses

Online courses are extremely popular right now. If you are trying to promote yours, rich snippets can be a great way to offer more insight into the course:

(Image Source)

Sitelink information

This featured rich snippet type will display further information on your website in the form of additional relevant sitelinks:

For a full list of potential rich snippet types, check out Google’s full list here.

So, are these really worth it? Do they actually help SEO?

Let’s find out.


Do Rich Snippets Help SEO?

Do rich snippets help my search engine optimization?

Yes. Absolutely.

But remember: rich snippets are not a ranking signal.

Rich snippets are shown by Google when your website provides structured data that the search engine can then read and display on search engine results pages.

So while providing Google with structured data won’t give you a direct boost in rankings like backlinks and content would, rich snippets impact other signals that Google accounts for:

  1. Organic click through rates: because rich snippets provide so much more information like pictures, reviews, star ratings, etc, you can vastly improve CTR, especially in SERPs that have very few rich snippets already. CTR improves your ranking as a user behavior signal.
  2. Reduced bounce rates: rich snippets help to provide more information and context, giving searchers a preview of your site before they click, rather than short meta descriptions. This often results in reduced bounce rates because expectations are clear. Bounce rates, while not a direct ranking signal, are a user behavior signal.

Rich snippets aren’t direct ranking factors. But they sure help SEO by impacting user behavior signals that Google takes into account.

Let’s get started with adding rich snippets and structured data to your site today.

How Do You Add Rich Snippets To Your Website?

Now that you have the basics of structured data and rich snippets under your belt, how do you add it to your website?

A few common ways are:

  • Use a tool, like a WordPress plugin or third party service
  • Manually add structured data markup to your existing HTML
  • Hire a developer to produce the structured data markup
  • Use Google’s structured data tool

If you already have a developer that can produce schema markup for your website (or are planning on hiring one), that’s great.

But if you don’t, you have two main options:

  1. Use a tool / plugin
  2. Build it via Google’s own tool and use the new code you generate.

If you want to use a plugin or tool, skip ahead to the tools section below. Otherwise, we recommend starting with Google’s Structured Data Markup Helper.

But before you dive head-first into structured data, it’s important to understand a few key concepts about what it is you are actually editing and tagging here.

That way, when you create your finalized HTML code, you will understand what the new elements are and how to spot potential errors.

The three main factors of schema.org markup for rich snippets are:

1. Itemscope:

The itemscope attribute is added to an HTML element to signify the start and end of your structured data markup:

    <div itemscope>
      Insert code here
    </div>

This lets Google know where to look for structured markup to feature in rich snippets.

2. Itemtype:

The itemtype attribute is placed directly after itemscope, and points to the schema.org type you are marking up – for example:

    <div itemscope itemtype="https://schema.org/Movie">
      Insert code here
    </div>

This specifies that your code is a specific type of structured data markup.

3. Itemprop:

Itemprop, or item properties, gives additional information to search engines on the structured data you are looking to provide.

For instance, more movie information like the director’s name.

Here is a sample code straight from Schema.org on how this would look in practice:


    <div itemscope itemtype="https://schema.org/Movie">
      <h1 itemprop="name">Avatar</h1>
      <span>Director: <span itemprop="director">James Cameron</span> (born August 16, 1954)</span>
      <span itemprop="genre">Science fiction</span>
    </div>

Notice how this complete code features all three attributes: itemscope, itemtype, and itemprop.

Now, I’ll show you a quick example of building out structured data and how to put these three pieces to work.

Start Google’s Markup Helper tool and choose which type of snippet you want to create:

In this case, let’s choose the “Articles” type and use the Higher Visibility blog as an example.

Select your type, and then choose whether to build the markup from a live URL of your site or from the HTML of a specific page.

If you are brand new to structured data, stick with the URL option.


In this case, we input our latest blog post as the URL. The tool will load our page and give us options to start adding the necessary tags:

Depending on which structured data type you choose (we chose Articles, in this case), you will get different tag options that you must complete for a full rich snippet.

For example, choosing the Articles type, you will have to tag your page with the following items:

  • Name
  • Author
  • Date published
  • Image
  • Article section
  • Article body
  • URL
  • Publisher
  • Aggregate rating (if applicable)

Highlight different sections of your page to tag them with the proper information, like dates, publisher, author, etc.

Once you have fully tagged the page you want to add a rich snippet for, click “Create HTML”:

Next, copy and paste the structured data HTML over your existing page HTML on your website.

That’s it!

Now you can repeat this process for multiple types of structured data for your website.
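If you would rather write the markup by hand, Google also accepts the same data as JSON-LD. Here is a minimal sketch of an Article snippet in Python, with hypothetical values for the fields listed above (the author, image, URL and publisher are all made up):

```python
import json

# Hypothetical values for the Article fields Google's Markup Helper asks for.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Understanding Google Rich Snippets",   # Name
    "author": {"@type": "Person", "name": "Jane Doe"},  # hypothetical author
    "datePublished": "2018-11-01",
    "image": "https://example.com/cover.jpg",           # hypothetical image URL
    "articleSection": "SEO",
    "url": "https://example.com/rich-snippets",
    "publisher": {"@type": "Organization", "name": "Example Blog"},
}

# Wrap the data in the script tag that goes into the page's HTML.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article, indent=2)
    + "\n</script>"
)
print(snippet)
```

Paste the printed script block into the page’s HTML, then validate it with Google’s testing tool before publishing.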

Tools And Plugins That Help In Creating, Adding, And Testing Rich Snippets

Creating rich snippets from structured data on your own can be tiresome if you have never done it before.

Writing the code snippet requires a bit of work in itself, as well as some testing, editing, and reworking to get it right.

That’s where tools come into play: they can help smooth the process out, even helping you create rich snippets from product or website information without writing code.

Here are a few great tools for both creating rich snippets and testing them for quality and accuracy.

Tools for creating and adding rich snippets

Writing snippets of code isn’t for everyone. And if you don’t have a developer that can do it for you, it can be very time-consuming.

Here are some easy ways to build structured data markup for your website without learning to write the code itself.

WordPress: WP SEO Structured Data Schema

If your website’s CMS is WordPress, WP SEO Structured Data Schema is a plugin that you will want to explore:

Because it installs directly as a plugin, you don’t need to worry about copying and pasting any new code into your website.

Making changes within the plugin will alter your code for you, saving you the time and legwork of writing and testing.

With built-in data fields, you simply enter the information for your business and the plugin does the structured markup for you:


If you are in a time-crunch and can’t do the code writing on your own, this is a great alternative.

Google’s Structured Data Markup Helper

Google has its own structured data tool, the Markup Helper:

This tool works directly on your website to help you build structured data. Best of all, it’s free and offers a wide variety of schema types. Just select your structured data type and use the tool to select each piece of data.

Tools for testing your rich snippets

Testing your rich snippets is a key step in the process of developing them. If you have errors, poor markup, or wrong code, you won’t get featured as a rich snippet on Google search results.

Here are a few great tools that anyone can use to test their snippets.

Google’s Structured Data Testing Tool

Google’s free Structured Data Testing Tool is a fantastic way to make sure your code is up to standards.

Within the tool, you can test both a live web page and a code snippet:

For instance, you can start by inputting your code snippet and testing it for errors. If it comes up clean, input that code on your website.

Then, head back to the testing tool and type in your live URL for another test.

Once both have passed, you can be sure that your structured data is on point!


Rich snippets, powered by structured data, are a great way to boost your SEO performance indirectly.

Rich snippets can be confusing or technical at first glance, but using the tools and plugins we listed, you can start creating, adding, and testing your structured data in just minutes.

Rich snippets are a proven way to increase your organic click-through rates and reduce bounce rates, both of which are user behavior signals that Google monitors.

Before you go, don’t forget to download our Rich Snippet Worksheet!


Understanding Google Rich Snippets was originally posted by Video And Blog Marketing

SE Ranking Website Audit Review


Running a website audit is an extremely important task. It is a lengthy process that requires diligence and often involves attentively scanning through dozens, if not hundreds, of web pages to find weaknesses and areas for improvement. We’d all be spending hours and hours manually checking each web page if it weren’t for website audit tools. Luckily, there are plenty of great ones out there.

We are always looking out for awesome tools that can facilitate our work in terms of SEO, and we have previously worked with such website auditors as Woorank, Google Lighthouse, and SEO Powersuite. However, I want to note that there aren’t any SEO tools that can give you a 100% guarantee that your online performance will improve. It’s up to you to add the final human touch of creativity to stand out.

Another such quality website audit tool comes as part of SE Ranking’s tool package, which we have previously reviewed. Not only does this website audit tool quickly find and identify website errors, but it also creates a list of technical issues for web developers, content writers, and web designers. Before we start the review, we invite you to try the tool out for yourself by signing up for a free trial.

Website Audit


Once you finish adding your website as a project by following the steps outlined here under the Starting up section, the system will automatically start running an audit of the added website. Depending on how many pages the website has, it may take some time for the tool to complete the audit.

When running an SEO audit, you can monitor the crawling process as it happens, view the completion percentage, and check the queue position if there is more than one website in your account. On top of that, you can speed up the audit by configuring the maximum number of pages, the scanning depth, and the number of requests per second.

The Report Section


Once the audit has been completed, you can look at the results by accessing the Report section. Once you perform more than one audit of the same website, a graph that shows the dynamics of changes will appear, letting you know if you’re on the right track. 

As you scroll down the Report page, you can see information about the website’s domain characteristics that shows the location of the site’s server and the site’s expiration date, SEO Metrics that display the site’s Moz DA and number of backlinks, and Index status that demonstrates how many web pages are indexed in Google, Bing, and Yahoo.

Further down on the Report page, you can find detailed information about such analyzed parameters as the website’s Health check, Page, Meta, Content, Link and Image analysis, Optimization tips, Usability, and Technology.


The Health check analyzes the site’s main tech features, like site mirror, HTTP to HTTPS redirects, robots.txt, XML sitemap, frames, flash, duplicated pages, etc.


The Page analysis provides information regarding on-page errors, such as URL length, pages blocked by robots.txt, page size, pages with noindex meta tags, rel=”canonical”, rel=”alternate”, page redirect, to name a few.


The Meta analysis summarizes page titles and descriptions, checks their uniqueness on the site, looks at whether they comply with restrictions on the number of characters, and highlights duplicates.
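The kind of check the Meta analysis performs can be sketched in a few lines of Python. The 60- and 160-character limits below are common rules of thumb, not SE Ranking’s exact thresholds, and the page data is hypothetical:

```python
# Hypothetical page metadata to check, as URL -> (title, description).
pages = {
    "/": ("Example Shop - Buy Jeans Online", "Shop the latest jeans."),
    "/mens-jeans": ("Men's Jeans | Example Shop", "x" * 200),          # description too long
    "/about": ("Example Shop - Buy Jeans Online", "About our store."), # duplicate title
}

TITLE_LIMIT, DESCRIPTION_LIMIT = 60, 160

issues = {}
titles_seen = {}
for url, (title, description) in pages.items():
    page_issues = []
    if len(title) > TITLE_LIMIT:
        page_issues.append("title too long")
    if len(description) > DESCRIPTION_LIMIT:
        page_issues.append("description too long")
    if title in titles_seen:
        page_issues.append(f"duplicate title (also on {titles_seen[title]})")
    else:
        titles_seen[title] = url
    issues[url] = page_issues

print(issues)
```

A real audit tool runs the same comparisons across every crawled page, which is why duplicates surface so quickly.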


Under Content analysis, you can easily see if you have any duplicate content, if you have pages that are too big or too small, plus look through the status of the HTML headings.


The Link analysis enables you to monitor each page’s inbound and outbound links. Here you can also get recommendations about the use of nofollow tags in links, how you can improve anchor texts for key queries, etc.


The Image analysis points out images that don’t have Alt texts as well as images with 4xx and 5xx statuses.


By clicking on Optimization, you will see general optimization tips for mobile and desktop versions of your website. In addition, you will see errors regarding the use of JavaScript and CSS in above-the-fold content, tips on compression and minifying HTML, CSS, and JS, and more.


Last but not least, the Usability and Technology analysis checks for the presence of branded favicons, correct markup, a custom 404 error page, in addition to providing an analysis of the loading speed and website security.

Generating an XML Sitemap


Through the XML sitemap feature, you can enable search engine crawlers to find the list of pages that you want to index. As you are generating a sitemap, you are free to choose the type of pages you want to include, as well as specify the page change frequency, and various crawling depth priorities.
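For reference, the generated file follows the standard sitemap protocol. Here is a small Python sketch of what such a sitemap contains; the URLs, change frequencies and priorities are hypothetical:

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

# Hypothetical pages with the change frequency and priority you can set per page.
pages = [
    ("https://example.com/", "daily", "1.0"),
    ("https://example.com/blog/", "weekly", "0.8"),
    ("https://example.com/about/", "monthly", "0.5"),
]

urlset = ET.Element("urlset", xmlns=NS)
for loc, changefreq, priority in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "changefreq").text = changefreq
    ET.SubElement(url, "priority").text = priority

sitemap_xml = ET.tostring(urlset, encoding="unicode")
print(sitemap_xml)
```

The resulting file is what search engine crawlers read to find the pages you want indexed.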

The Crawled Pages Section


The Crawled Pages section shows you all of the site pages, external links, and images that were crawled. Moreover, you can find their analysis against key SEO parameters. The analysis of each separate page guarantees that you don’t miss any warnings. Plus, the platform highlights the parameter that needs to be fixed, if an error is found.


The Crawled pages, External links and Crawled images subsections are equipped with filters that enable you to conveniently work with a selection of necessary data. For instance, you can easily sort pages by error type and work with those pages only. Moreover, you can create filters for one or several parameters, and export the results in the .xls file format.

External Links


Here you can see a list of website links that lead to external resources, as well as the results of their analysis against such parameters as the server response, presence of the nofollow tag, anchor text, crawl depth, a web page that links out to an external resource, etc.

Crawled Images


In this subsection, you can find all the images located on your website. You can also see the results of their analysis against key image-related parameters, like server response, Alt text, size, and so on.

The Compare Crawls Section


Once two or more audits of the same website have been completed, you can select the audit dates and compare their results to learn what improved and what got worse.

The Page changes monitoring Section


Under the Page changes monitoring subsection, you can select pages to track and get notified of any change to their important elements. This feature gives you instant notice when an alteration takes place.

The Settings Section


The Settings subsection gives you the liberty to customize your audits by allowing you to set convenient crawling conditions, specify the audit frequency, limits and restrictions, and upload your own lists of pages to be audited.



Here you can create a schedule that will tell the platform when you want to run audits on your website. In addition to being able to set the audit date and time, you can set the frequency settings to weekly, monthly or manually (run the audit when it is convenient for you).

Source of pages for audit


This subsection of the Settings section gives you the opportunity to choose the pages that you want the system to crawl. You are free to choose all website pages and mark them for Google or Yandex bots. You can also include or exclude subdomains, or crawl only those pages that are in the XML sitemap. Of course, you are free to upload your own list of pages for manual crawling.

Rules for scanning pages


Here you can select specific rules for crawling your web pages or create them on your own. For example, you can specify whether to take robots.txt directives into account or to ignore some URL parameters. Additionally, you can exclude all link variable values or conveniently set the exclusion parameters that you want.
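To get a feel for how robots.txt directives are evaluated during a crawl, here is a quick sketch using Python’s standard-library robot parser with made-up rules:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules of the kind a crawler must honour
# (or ignore, if you tick that option in the audit settings).
robots_txt = """
User-agent: *
Disallow: /checkout/
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Paths not matched by a Disallow rule are crawlable by default.
print(rp.can_fetch("*", "https://example.com/blog/post"))       # allowed
print(rp.can_fetch("*", "https://example.com/checkout/step1"))  # disallowed
```

An audit tool applies the same logic to every URL it discovers, which is why toggling robots.txt compliance can change the crawl results dramatically.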

Parser settings


Choose a crawling bot or provide access to pages that are blocked for web crawlers in the Parser settings subsection.

Limits and restrictions


Set the maximum crawling depth, the number of queries per second, as well as the number of web pages to be crawled according to your data plan under the Limits and restrictions subsection. If you have more than one site under your account, you can set different limits for each site.

Report setup


When running an audit of website parameters, the platform relies on current search engine recommendations.

In the Report setup subsection, you can change the parameters that are taken into account by the platform when crawling sites and compiling reports (e.g. the length of the title meta tag or the maximum number of redirects).

On-page SEO Audit


As a bonus feature on top of the Website Audit, SE Ranking offers audits of specific web pages to provide deeper insights.

The on-page SEO audit highlights errors that are negatively impacting the page’s ranking in search engines, grading them from minor to critical. SE Ranking checks how SEO-friendly a particular page is in relation to a given search query.


SE Ranking is one of the best website audit tools that you can use to analyze your website from an SEO perspective. It empowers you with accurate data, detailed analytical reporting, and extra tools, such as the on-page SEO audit, that help you perform more tasks with greater efficiency. SE Ranking’s comprehensiveness, dependability, and versatility make it a must-have tool for any SEO professional’s toolbox.

Key Takeaway

Website and on-page SEO audits can make a world of difference in determining the success of your website in terms of SEO. SE Ranking is hands-down one of the best and most versatile tools out there. With the large pack of powerful tools and features it offers, we would definitely recommend employing it in your work.

Don’t hesitate to get in touch with me if you have any SEO-related questions or feedback.

Marya Kazakova is a marketing specialist at SE Ranking. She likes sharing her experience in outreach marketing, link building, content marketing, and SEO with readers. You can contact her on LinkedIn.

SE Ranking Website Audit Review was originally posted by Video And Blog Marketing

Low-Hanging Fruit for the Holiday Season: Four simple marketing changes with significant impact

Testing and optimization can be difficult — from the challenges of deciding what to test amidst a dizzying array of priorities and ideas, to the time-intensive manual labor of implementing sophisticated back-end changes. With the year coming to a close and big marketing plans in the making, it’s important before making big changes and commitments to be sure that you have the right foundation to maximize the return from your grander strategy. That’s why we created this list of simple changes that can produce significant results and set your marketing strategy for the next stage on the right foundation.

More than 20 years and over 20,000 sales paths have taught us that one of the foundational principles in which all marketing should be grounded is this: marketers must always be at war with the temptation to prioritize company logic over customer logic. Over time, we grow so familiar with our product, our process, our brand and our own objectives that we risk severe and expensive assumptions about what the customer wants and needs to know to make a purchase decision.

The goal of optimization is not to make changes to a page — but to make changes in the mind of the customer. Changing even a few words can alter the conclusion formed by the customer depending upon their levels of motivation and expectation. This means that even minor changes to our message can produce radical lifts in performance, as we have seen in thousands of experiments time and time again. So here are some simple, easy-to-implement ways you can shift from communicating company logic to customer logic and optimize the thought-sequence of your offer:

  1. Headlines — From hype to conversation

“Headlines are first impressions, pick-up lines. Use buzz/power-words, use numbers, make it value-first, make a promise, etc.,” are all ideas espoused often by some successful marketers. While some of these might be good ideas of how you could write a headline, they often leave us asking, “Why should I use this tactic over another?” When and how we deploy our tactics is determined by our objective, and all communication should be grounded in an understanding of your audience and a rationale for why.

Any idea might be considered a good or bad one until you have a purpose against which to evaluate it. Years of incrementally testing and refining research questions have demonstrated that a headline has at least two fundamental, primary purposes: 1) to capture attention, and then 2) to convert it to interest. There are dozens of ways an effective headline could be crafted, but ultimately, it should be measured by how much attention it captures (from the right people) and how effectively that converts to a committed interest.

  2. Copy — From marketer-value to customer-value

While variables like long copy versus short copy, hero imagery, or ideal eye-path structure and prioritization of value are questions that can only be truly answered through testing, one universal mistake often made is failing to translate generalized claims and specific features about our product or service into clear benefits to the consumer. The customer is never simply choosing which product to buy, but also which product from whom, how and when. It is critical to understand that your offer and the consequent micro-decisions required of the customer are always perceived in the context of their competing alternatives.

A simple but fun and effective question that MECLABS founder Flint McGlaughlin says should be applied to every marketing claim is, “So what?” That is to say, the customer is always asking, and we must always be answering the question: Why should I do what you want me to do rather than anything else right now?

“So what that you’re an industry leader?”

“So what that your product has these specifications?”

“So what that you offer a personalized solution, customer service or integrated functionality?”

On any given website, customers often expect to find words like “most,” “best,” “fastest,” “trusted,” “leader,” “all” and “customer-first.” Qualifying claims like this carry no measurable weight and, ironically, set a precedent of distrust unless somehow validated. Customers want to believe you, so you must give them reasons by clarifying your qualifying claims with measurable evidence. Quantify and specify wherever possible and appropriate so that your customers have no need to question the credibility of your claims, and they will trust you when you make others.

  3. Images — From irrelevant art to relevant messaging

Images are not only highly valuable real-estate but one of the marketer’s most effective tools for guiding the eye path. Yet so often, images are chosen based on personal opinion, the design department’s decision, how it looks and feels on the page, or its color scheme, cleverness, or worse, simply because it’s supposed to be there. Images, like each and every element in your marketing funnel, are part of and should contribute to the overall value proposition of your organization/solution.

When used properly, images are not merely decorative accents to liven a webpage’s personality; they should illustrate or support the core marketing message, and therefore be measured primarily by relevance and clarity. Ultimately, your core message (your value proposition) should be supported 1) Continuously, and 2) Congruently.

Continuity – The Continuity principle posits that your value proposition should be stated or supported continuously throughout each step of your sales process.

Congruence – The Congruence Principle posits that each element of your page or collateral should either state or support your value proposition (this is particularly relevant for imagery).

  4. Objectives — From multiple options to the primary focus

“What is the goal of this page or email?” It is a question we’ve asked countless times when working with marketers and organizations, and we’ve found surprisingly often that either the goal of the page is unclear or the page is attempting to fulfill numerous goals other than its primary purpose. Ideally, each element of your page should move the target customer toward the “macro-yes” of conversion. Each distraction we place in the customer’s path risks leading them into tangential and unsupervised thinking rather than a controlled thought-sequence toward the objective.

The objective of the page is the benchmark against which we measure the relevance and efficacy of all the supporting elements. Avoiding things like evenly weighted calls-to-action, distracting images, competing ads and irrelevant page elements streamlines the customers’ path toward the objective. Clearly defining the action you want the customer to take and stripping away unnecessary elements to organize around the objective can be powerfully impactful in the psychology of the consumer.

Together, each one of these subtle shifts in communication can produce outstanding lifts when executed well and set your messaging on the right foundation. We hope that you’ll find the same amazing results from becoming more customer-oriented that we have seen from testing these core principles time and time again.

In the meantime, Happy Holidays from MarketingExperiments!

Related Resources

Design Hypotheses that Win: A 4-Step Framework for Gaining Customer Wisdom and Generating Significant Results (register for the free A/B Testing Summit online conference and hear Flint McGlaughlin’s keynote session)

Ecommerce: 6 takeaways from a 42-day analysis of major retailers’ holiday marketing

Email Marketing: Last-minute holiday deals preview wins with customer-centric approach

Increase Mobile Conversion Rates (free micro course from MECLABS Institute)

Low-Hanging Fruit for the Holiday Season: Four simple marketing changes with significant impact was originally posted by Video And Blog Marketing

Optimizing your Videos for SEO in 2019


Video has become perhaps the most popular form of content on the internet, with social media and video streaming websites boasting high traffic numbers. With increasingly fast internet speed, and mobile becoming the platform of choice for browsing, video content has become more accessible to users worldwide.

This is evident in streaming sites like YouTube and Twitch, where videos about people playing games and vlog updates that chronicle everyday life and events have become a popular form of entertainment. In fact, it can be argued that online video has become more widespread than television nowadays, as streaming numbers tend to reach billions on a regular basis.

This makes doing video SEO more important than ever, as you want to ensure that your videos remain searchable, bringing in more views and traffic. With the abundance of choice growing ever wider, it is best to make sure your videos are the ones that show up more often. It can be a challenge to stand out among millions of videos, but these tips can give your videos the much-needed boost that drives more views.

Update older videos

A good first step in optimizing your videos for SEO is to take a look at your older video content. This would help you see what kinds of videos have been successful, along with videos that might be able to gain more traffic despite them being older. This means updating the title and description to include more searchable terms and keywords and even adding relevant links and tags that you might have missed out on before.

On YouTube, updating your video content is pretty simple to do as the editing system makes the process smooth and hassle-free. This is similar to updating older blogs to keep up with current content. While updating the videos themselves might not be an option, adding annotations and links to an update video would not only bring in new viewers but also lead them into newer content.

Optimize quality

One of the main reasons people view content is that it is well made. On YouTube, the videos that get the most views come from channels that commit to quality production. This means having good audio and video quality, along with solid graphics and images. Creating quality videos may not happen on the very first video, but investing your time and effort will bring in better-looking videos that audiences will enjoy.


While some people might think that you need professional help to create quality videos, it can be done for much less, especially with much of what you need to know freely available on the internet. Content delivery is one of the most crucial elements of driving more views and traffic, and refining it is an ingredient for success.

Social Media Sharing and Live Video

Social media websites have become the best place to share content on the internet. From Twitter to Facebook, sharing content opens up more opportunities to expand your audience and drive more traffic into a website. Over the past few years, Facebook has been optimizing their website to become a more viable video platform, and that saw a rise in views and led to more users. Other than Facebook, Twitter and Instagram have also made video content more accessible in their platforms, with the latter introducing the new IGTV, which is a video platform somewhat similar to YouTube.


Live video has also become more popular, with mobile users having the ability to quickly set up videos in just a few minutes. This makes creating viral social media campaigns using video content a much easier task, allowing for more views and traffic overall. A lot of major brands have been making use of video content and taking a look at these would help you create quality videos, and also have an idea of what your audience would like to see in their videos.

Integrate them into the rest of your content

Blog content has always been one of the best ways to drive organic traffic, and integrating video content can help both of your platforms gain more traffic. Adding videos adds a new dimension to your content, allowing users to learn more about a certain topic, or even bridging different topics together. Blogs and videos let more users discover new websites, bringing in more traffic and building potential connections that can prove to be beneficial.


A lot of users want more interactivity in their content, and adding videos gives them another dimension while reading. For example, a video game review article can link to a review video, allowing readers to analyze the topic from a different perspective. Along with optimizing the user experience, you would also bring more views to your videos and more traffic to your articles.

Key Takeaway

Video content is going to become much bigger in 2019, and it is best to ensure that you would be ready to take advantage. With these steps, you would be able to create quality content that would drive more views and traffic to your websites and channels.

If you have questions and inquiries about video content or SEO, leave a comment below and let’s talk.

Optimizing your Videos for SEO in 2019 was originally posted by Video And Blog Marketing

An introduction to HTTP/2 for SEOs

In the mid 90s there was a famous incident where an email administrator at a US University fielded a phone call from a professor who was complaining his department could only send emails 500 miles. The professor explained that whenever they tried to email anyone farther away their emails failed — it sounded like nonsense, but it turned out to actually be happening. To understand why, you need to realise that the speed of light actually has more impact on how the internet works than you may think. In the email case, the timeout for connections was set to about 6 milliseconds – if you do the maths that is about the time it takes for light to travel 500 miles.
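The arithmetic roughly works out once you count the round trip. A quick sanity check:

```python
# Sanity-check the 500-mile story. Light in fibre is somewhat slower than in
# vacuum, but even at vacuum speed the *round trip* for 500 miles eats most
# of a ~6 ms connection timeout.
SPEED_OF_LIGHT_MILES_PER_MS = 186.282  # ~186,282 miles per second

one_way_ms = 500 / SPEED_OF_LIGHT_MILES_PER_MS
round_trip_ms = 2 * one_way_ms

print(f"one way:    {one_way_ms:.2f} ms")    # ~2.68 ms
print(f"round trip: {round_trip_ms:.2f} ms") # ~5.37 ms
```

Any server farther away than roughly 500 miles could not get a reply back inside the timeout, so emails to it failed.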

We’ll be talking about trucks a lot in this blog post!

The time that it takes for a network connection to open across a distance is called latency, and it turns out that latency has a lot to answer for. Latency is one of the main issues that affects the speed of the web, and was one of the primary drivers for why Google started inventing HTTP/2 (it was originally called SPDY when they were working on it, before it became a web standard).

HTTP/2 is now an established standard and is seeing a lot of use across the web, but it is still not as widespread as it could be across most sites. It is an easy opportunity to improve the speed of your website, but it can be fairly intimidating to try to understand it.

In this post I hope to provide an accessible top-level introduction to HTTP/2, specifically targeted towards SEOs. I do brush over some parts of the technical details and don’t cover all the features of HTTP/2, but my aim here isn’t to give you an exhaustive understanding, but instead to help you understand the important parts in the most accessible way possible.

HTTP 1.1 – The Current Norm

Currently, when you request a web page or other resource (such as images, scripts, CSS files, etc.), your browser speaks HTTP to a server in order to communicate. The current version is HTTP/1.1, which has been the standard for the last 20 years, with no changes.

Anatomy of a Request

We are not going to drown in the deep technical details of HTTP too much in this post, but we are going to quickly touch on what a request looks like. There are a few bits to a request:

The top line here says what sort of request this is (GET is the normal sort of request; POST is the other main one people know of), what URL the request is for (in this case /anchorman/), and finally which version of HTTP we are using.

The second line is the mandatory ‘host’ header, which is a part of all HTTP/1.1 requests and covers the situation where a single webserver may be hosting multiple websites and needs to know which one you are looking for.

Finally, there will be a variety of other headers, which we are not going to get into. In this case I’ve shown the User-Agent header, which indicates what sort of device and software (browser) you are using to connect to the website.
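Putting those pieces together, the raw text an HTTP/1.1 client actually sends looks like this (sketched in Python so each part is labelled; the Host and User-Agent values are illustrative):

```python
# The request pieces described above, assembled into raw HTTP/1.1 text.
request = (
    "GET /anchorman/ HTTP/1.1\r\n"  # method, URL, protocol version
    "Host: www.example.com\r\n"     # mandatory Host header (illustrative value)
    "User-Agent: Mozilla/5.0\r\n"   # device/browser identification (illustrative)
    "\r\n"                          # blank line ends the headers
)

method, path, version = request.splitlines()[0].split(" ")
print(method, path, version)
```

The server parses exactly this text to decide which site and page to serve back.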

HTTP = Trucks!

In order to help explain and understand HTTP and some of the issues, I’m going to draw an analogy between HTTP and … trucks! We are going to imagine that an HTTP request being sent from your browser is a truck that has to drive from your browser over to the server:

A truck represents an HTTP request/response to a server

In this analogy, we can imagine that the road itself is the network connection (TCP/IP, if you want) from your computer to the server:

The road is a network connection – the transport layer for our HTTP Trucks

Then a request is represented by a truck, that is carrying a request in it:

HTTP Trucks carry a request from the browser to the server

The response is the truck coming back with a response, which in this case is our HTML:

HTTP Trucks carry a response back from the server to the browser

“So what is the problem?! This all sounds great, Tom!” – I can hear you all saying. The problem is that in this model, anyone can stare down into the truck trailers and see what they are hauling. Should an HTTP request contain credit card details, personal emails, or anything else sensitive, anybody can see your information.

HTTP Trucks aren’t secure – people can peek at them and see what they are carrying


HTTPS was designed to combat the issue of people being able to peek into our trucks and see what they are carrying.

Importantly, HTTPS is essentially identical to HTTP – the trucks and the requests/responses they transport are the same as they were. The response codes and headers are all the same.

The difference all happens at the transport (network) layer: we can imagine it as a tunnel over our road:

In HTTPS, requests & responses are the same as HTTP. The road is secured.

In the rest of the article, I’ll imagine we have a tunnel over our road, but won’t show it – it would be boring if we couldn’t see our trucks!

Impact of Latency

So the main problem with this model is related to the top speed of our trucks. In the 500-mile email introductory story we saw that the speed of light can have a very real impact on the workings of the internet.

HTTP Trucks cannot go faster than the speed of light.

HTTP requests and many HTTP responses tend to be quite small. However, our trucks can only travel at the speed of light, and so even these small requests can take time to go back and forth from the user to the website. It is tempting to think this won’t have a noticeable impact on website performance, but it is actually a real problem…

HTTP Trucks travel at a constant speed, so longer roads mean slower responses.

The longer the network connection between a user’s browser and the web server (the length of our ‘road’), the farther the request and response have to travel, and the longer they take.
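As a rough back-of-envelope sketch (illustrative numbers, not measurements), we can put a hard lower bound on a round trip just from the speed of light:

```python
# Back-of-envelope: a lower bound on round-trip time, because our trucks
# (requests) cannot travel faster than light. Numbers are illustrative.

SPEED_OF_LIGHT_KM_S = 300_000  # approximate speed of light, km/s

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round trip: there and back again at light speed."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S * 1000

# London to a US west-coast server is roughly 8,700 km in a straight line:
print(f"{min_round_trip_ms(8700):.0f} ms minimum")  # 58 ms minimum
```

Real round trips are considerably slower: cables don’t follow straight lines, light travels more slowly in fibre than in a vacuum, and routers along the way add their own delays.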

Now consider that a typical website is not a single request and response, but is instead a sequence of many requests and responses. Often a response will mean more requests are required – for example, an HTML file probably references images, CSS files and JavaScript files:

Some of these files may then have further dependencies, and so on. Typically a web page may require 50-100 separate requests:

Web pages nowadays often require 50-100 separate HTTP requests.

Let’s look at how that may look for our trucks…

Send a request for a web page:

We send a request to the web server for a page.

Request travels to server:

The truck (request) may take 50ms to drive to the server.

Response travels back to browser:

And then 50ms to drive back with the response (ignoring time to compile the response!).

The browser parses the HTML response and realises there are a number of other files that are needed from the server:

After parsing the HTML, the browser identifies more assets to fetch. More requests to send!

Limit of HTTP/1.1

The problem we now encounter is that there are several more files we need to fetch, but with an HTTP/1.1 connection each road can only handle a single truck at a time. Every HTTP request needs its own TCP (networking) connection, and each truck can only carry one request at a time.

Each truck (request) needs its own road (network connection).

Furthermore, building a new road – opening a new networking connection – also requires a round trip. In our world of trucks we can liken this to needing a steamroller to first lay the road and then add our road markings. This is another whole round trip, which adds more latency:

New roads (network connections) require work to open them.

This means another whole round trip to open new connections.

Typically browsers open around 6 simultaneous connections:

Browsers usually open 6 roads (network connections).

However, if we are looking at 50-100 files needed for a webpage we still end up in the situation where trucks (requests) have to wait their turn. This is called ‘head of line blocking’:

Often trucks (requests) have to wait for a free road (network connection).
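Under some simplifying assumptions (every asset costs one full round trip, and we ignore connection setup and server time), we can sketch roughly how batching over 6 connections plays out:

```python
import math

# A sketch of head-of-line blocking under HTTP/1.1: with only 6 roads
# (network connections), trucks (requests) travel in batches of 6 and
# everything else waits its turn. Numbers are illustrative.

def http1_fetch_time_ms(num_requests: int, rtt_ms: float,
                        connections: int = 6) -> float:
    """Rough total time: each batch of requests costs one full round trip."""
    batches = math.ceil(num_requests / connections)
    return batches * rtt_ms

# 60 assets at a 100 ms round trip:
print(http1_fetch_time_ms(60, 100.0))  # 10 batches -> 1000.0
```

Ten round trips just to fetch the assets – and that is before we count the steamroller trips needed to open each of the 6 connections in the first place.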

If we look at the waterfall diagram for a simple page (in this example, this HTTP/2 site) that has a CSS file and a lot of images, you can see this in action:

Waterfall diagrams highlight the impact of round trips and latency.

In the diagram above, the orange and purple segments can be thought of as our steamrollers, where new connections are made. You can see initially there is just one connection open (line 1), with another connection being opened. Line 2 then re-uses the first connection, and line 3 is the first request over the second connection. When those complete, lines 4 & 5 are the next two images.

At this point the browser realises it will need more connections, so four more are opened. We can then see requests going in batches of 6 at a time, corresponding to the 6 roads or network connections that are open.

Latency vs Bandwidth

In the waterfall diagram above, each image may be small, but each requires a truck to come and fetch it. This means lots of round trips, and given we can only run 6 simultaneously, a lot of time is spent with requests waiting.

It is sometimes difficult to understand the difference between bandwidth and latency. Bandwidth can be thought of as the load capacity of our trucks – a bigger truck can carry more in one trip. This often doesn’t help with page load times, though, given that each request and response cannot share a truck with another. This is why increasing bandwidth has been shown to have a limited impact on the load time of pages, as demonstrated in research conducted by Mike Belshe at Google, which is discussed in this article from Googler Ilya Grigorik:
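To see why, here is a toy model (illustrative numbers only) where load time is a few latency-bound round trips plus a bandwidth-bound transfer:

```python
# A toy model (illustrative numbers only): total load time is some number
# of latency-bound round trips plus a bandwidth-bound transfer time.

def load_time_ms(round_trips: int, rtt_ms: float,
                 page_kb: float, bandwidth_kb_per_ms: float) -> float:
    return round_trips * rtt_ms + page_kb / bandwidth_kb_per_ms

base     = load_time_ms(20, 50.0, 500, 1.0)  # 20 round trips, 500 KB page
fat_pipe = load_time_ms(20, 50.0, 500, 2.0)  # double the bandwidth
low_lat  = load_time_ms(20, 25.0, 500, 1.0)  # halve the latency

print(base, fat_pipe, low_lat)  # 1500.0 1250.0 1000.0
```

In this sketch, doubling bandwidth only shaves off 250 ms, while halving latency saves 500 ms – the same pattern Belshe’s research found once connections exceed a few Mbps.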

The reality was clear: in order to improve the performance of the web, the issue of latency would need to be addressed. This research was what led Google to develop the SPDY protocol, which later turned into HTTP/2.

Improving the impact of latency

In order to improve the impact that latency has on website load times, there are various strategies that have been employed. One of these is ‘sprite maps’ which take lots of small images and jam them together into single files:

Sprite maps are a trick used to reduce the number of trucks (requests) needed.

The advantage of sprite maps is that the whole set can go into one truck (request/response), as it is just a single file. Clever use of CSS can then display just the portion of the image that corresponds to the desired graphic. One file means only a single request and response are required to fetch them all, which reduces the number of round trips required.

Another thing that helps to reduce latency is using a CDN platform, such as CloudFlare or Fastly, to host your static assets (images, CSS files etc. – things that are not dynamic and the same for every visitor) on servers all around the world. This means that the round trips for users can be along a much shorter road (network connection) because there will be a nearby server that can provide them with what they need.

CDNs have servers all around the world, which can make the required roads (network connections) shorter.

CDNs also provide a variety of other benefits, but latency reduction is a headline feature.

HTTP/2 – The New World

So hopefully, you have now realised that HTTP/2 can help reduce latency and dramatically improve the performance of pages. How does it go about it?

Introducing Multiplexing – More trucks to the rescue!

With HTTP/2 we are allowed multiplexing, which essentially means we are allowed to have more than one truck on each road:

With HTTP/2 a road (network connection) can handle many trucks (requests/responses).

We can immediately see the change in behaviour on a waterfall diagram – compare this with the one above (note the change in the scale too – this is a lot faster):

We now only need one road (connection) then all our trucks (requests) can share it!

The exact speed benefits you may get depend on a lot of other factors, but by removing the problem of head of line blocking (trucks having to wait) we can immediately get a lot of benefits, for almost no cost to us.
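Using the same toy model as before (illustrative only; it ignores connection setup and server time), we can sketch the difference multiplexing makes:

```python
import math

# Same toy model as before (illustrative; ignores connection setup and
# server think-time): HTTP/1.1 fetches N assets in batches over 6 roads,
# while HTTP/2 multiplexes them all over one road.

def http1_time_ms(n: int, rtt_ms: float, connections: int = 6) -> float:
    return math.ceil(n / connections) * rtt_ms

def http2_time_ms(n: int, rtt_ms: float) -> float:
    # All trucks share one road, so the batch costs roughly one round trip.
    return rtt_ms

print(http1_time_ms(60, 100.0), http2_time_ms(60, 100.0))  # 1000.0 100.0
```

The model is deliberately crude – real servers still take time to produce each response – but it shows why removing head-of-line blocking matters so much.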

Same old trucks

With HTTP/2 our trucks and their contents stay essentially the same as they always were; we can just imagine we have a new traffic management system.

Requests look as they did before:

The same response codes exist and mean the same things:

Because the content of the trucks doesn’t change, this is great news for implementing HTTP/2 – your web platform or CMS does not need to be changed and your developers don’t need to write any code! We’ll discuss this below.

Server Push

A much anticipated feature of HTTP/2 is ‘Server Push’, which allows a server to respond to a single request with multiple responses. Imagine a browser requests an HTML file, and the server knows that the browser will then need a specific CSS file and a specific JS file as well. The server can just send those straight back, without needing them to be requested:

Server Push: A single truck (request) is sent…

Server Push: … but multiple trucks (responses) are sent back.

The benefit is obvious – it removes another whole round trip for each resource that the server can ‘anticipate’ the client will need.

The downside is that, at the moment, this is often implemented badly, and it can mean the server sends trucks the client doesn’t need (because it has already cached the response from earlier) – so you can actually make things worse.

For now, unless you are very sure you know what you are doing you should avoid server push.

Implementing HTTP/2

Ok – this sounds great, right? Now you should be wondering how you can turn it on!

The most important thing is to understand that because the requests and responses are the same as they always were, you do not need to update the code on your site at all. You need to update your server to speak HTTP/2 – and then it will do the new ‘traffic management’ for you.

If that seems hard (or if you already have one), you can instead use a CDN to help you deploy HTTP/2 to your users. Something like CloudFlare, or Fastly (my favourite CDN – it requires more advanced knowledge to set up but is super flexible), would sit in front of your webserver and speak HTTP/2 to your users:

A CDN can speak HTTP/2 for you whilst your server speaks HTTP/1.1.

Because the CDN will cache your static assets – images, CSS files, JavaScript files and fonts – you still get the benefits of HTTP/2 even though your server is still in a single-truck world.

HTTP/2 is not another migration! 

It is important to realise that to get HTTP/2 you will need to already have HTTPS, as all the major browsers will only speak HTTP/2 when using a secure connection:

HTTP/2 requires HTTPS

However, setting up HTTP/2 does not require a migration in the same way HTTPS did. With HTTPS your URLs changed from http://example.com to https://example.com, and you needed 301 redirects, a new Google Search Console account, and a week-long meditation retreat to recover from the stress.

With HTTP/2 your URLs will not change, and you will not require redirects or anything like that. Browsers and devices that can speak HTTP/2 will do so (it is actually the guy in the steamroller who negotiates that part – but that is a-whole-nother story..!), and other devices will fall back to speaking HTTP/1.1, which is just fine.

We also know that Googlebot does not speak HTTP/2 and will still use HTTP/1.1:


However, don’t despair – Google will still notice that you have made things better for users, as we know they are now using usage data from Chrome users to measure site speed in a distributed way:


This means that Google will notice the benefit you have provided to users with HTTP/2, and that information will make it back into Google’s evaluation of your site.

Detecting HTTP/2

If you are interested in whether a specific site is using HTTP/2 there are a few ways you can go about it.

My preferred approach is to turn on the ‘Protocol’ column in the Chrome developer tools. Open up the dev tools, go to the ‘Network’ tab and if you don’t see the column then right click to add it from the dropdown:

Alternatively, you can install this little Chrome Extension which will indicate if a site is using it (but won’t give you the breakdown for every connection you’ll get from doing the above):
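A third option is to ask the server directly. Browsers and servers agree on HTTP/2 during the TLS handshake via ALPN (‘h2’ is the ALPN token for HTTP/2), and Python’s standard library can perform the same negotiation. A small sketch – the hostname in the example is just a placeholder:

```python
import socket
import ssl

# Ask a server, via TLS ALPN negotiation, which protocol it will speak.
# 'h2' is the ALPN token for HTTP/2; this mirrors what browsers do.

def negotiated_protocol(host: str, port: int = 443,
                        timeout: float = 5.0) -> str:
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2, allow fallback
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol() or "http/1.1"

# Example (needs network; hostname is a placeholder):
# print(negotiated_protocol("www.example.com"))  # 'h2' means HTTP/2
```

This is also a neat illustration of the earlier point that HTTP/2 requires HTTPS: the protocol choice happens inside the TLS handshake itself.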


Slide Deck

If you would prefer to consume this as a slide deck, you can find it on Slideshare. Feel free to re-use the deck in part or in its entirety, provided you give attribution (@TomAnthonySEO):

Wrap Up

Hopefully, you found this useful. I’ve found the truck analogy makes something that can seem hard to understand somewhat more accessible. I haven’t covered many of the intricate details of HTTP/2 or some of its other functionality, but this should help you understand things a little better.

I have, in discussions, extended the analogy in various ways, and would love to hear if you do too! Please jump into the comments below for that, or to ask a question, or just hit me up on Twitter.

An introduction to HTTP/2 for SEOs was originally posted by Video And Blog Marketing

Help us find the next search industry rising star for SearchLove San Diego 2019

After the success of running community speaker sessions at SearchLove London, we are delighted to be bringing them to our American conferences, starting with SearchLove San Diego in March 2019. Our community speaker sessions are 20 minutes long and presented by relatively new speakers, whom we support and coach and then give the chance to stand on stage in front of 200+ digital marketers from around the world.

If this sounds like something you would love to experience, then we are on the lookout for speakers who:

  • Are San Diego locals (no further than 2-2.5 hours drive from the venue). We want to support the community where our conference runs and help our speakers raise their local profile.
  • Have some speaking experience, preferably a couple of speaking gigs, and are looking to break onto an even bigger stage.
  • Are available to join us at SearchLove San Diego on March 4 & 5, 2019.

When you’ve had a read of everything that follows, swing by our form and apply for a space. You’ve got just over 2 weeks to apply (deadline: December 19, 2018).

apply now

What’s in it for us and our audience?

Every time we put on a SearchLove show, we (led by our head of events, Lynsey) scour the industry for the best speakers we can find. We often invite back people who blow us away and wow our audience, but we also want to find speakers no-one has seen before. Sometimes we find great speakers who have deep experience in related fields who are underexposed to the search industry, but sometimes we want to be the ones who help people break out for the first time.

In addition to the long game of building partnerships with the great speakers of tomorrow, we believe that these shorter sessions with a little less pressure could end up bringing perspectives and viewpoints we can’t get from our more experienced speakers. Speaking experience often comes with general experience, and that often accompanies moves to management or the growth of the speakers’ own companies. One of the things we also want to see is hands-on advanced and actionable advice from practitioners who are doing the work every day.

We’re also hopeful that we can access a more diverse pool of speakers with different backgrounds and experiences. There are unfortunate barriers in place to getting some of the experience that we typically require and while we put a lot of effort into broadening our intake, we hope that this initiative can play a key role in building the pipeline. (You can tell we’re serious about building a safe, inclusive and welcoming environment for our speakers and delegates from the way we bake our code of conduct into our events, and our recent progress: in an industry with too many manels, SearchLove Boston 2018 was our first conference with >50% women speakers and had women appearing as the top-rated speaker and 3 of the top 5 speakers by overall rating).

We know that we will get a bunch of overconfident white men applying (yes, I see the irony in my writing that) but if that doesn’t describe you, I’d especially encourage you to throw your hat in the ring.

What’s in it for you?

This should be the shortest path from knowing what you’re talking about to getting full speaking opportunities on the industry’s biggest stages. We’ve already seen our SearchLove London community speakers getting accepted to speak at bigger events. We have also seen first-hand how quickly you can move from presenting at a meet-up to local conferences to SearchLove, MozCon or Inbound. But up until now, most of the non-Distilled speakers who made it to the SearchLove stage did so after proving themselves at another major conference.

Here’s the full package you’ll receive if you are successful (along with your 20 minutes on stage!):

  1. Introduction call with the Distilled team
  2. Multiple video calls to run through your presentation with the Distilled team and get feedback
  3. Deck review and content call to bounce around your session ideas
  4. 1 to 1 ongoing support from a Distilled team member
  5. Final in-person review with myself and the Distilled team in San Diego before the conference
  6. VIP ticket to attend SearchLove San Diego including attending the VIP dinner with all the other speakers the night before the conference
  7. A nice bunch of Distilled and SearchLove swag

We are extremely excited to be rolling this scheme out to our US events after the success of London (all 3 community speakers ranked in the top 8 speakers, and one broke into the top 3!). We want to give this opportunity to local folks, so we’ll only be accepting pitches from applicants living within the San Diego area (2-2.5 hours drive from the venue) for this particular conference.

A note on the video requirement

You will note that the application form asks for a video. We debated whether to include this as a requirement and ultimately came down on the side of including it, because by far the biggest limitation with less experienced speakers’ pitches is our inability to judge how they’ll perform on stage when they don’t have a ton of speaking experience or professional on-stage footage. We hope that a short self-shot video is the most inclusive way of achieving this: it avoids the too-high hurdle of requiring professional footage, and it is something everyone can put together.

We are not expecting you to put together a professionally-lit and shot promo video. We want to see your enthusiasm, public speaking capability, and maybe a bit of your depth of knowledge. A selfie video shot on a mobile phone can totally do the job, but think about how you are going to stand out from the crowd and show us what we need to see. Once you have recorded the video upload it to a hosting platform such as Google Drive, Wistia, YouTube or Vimeo and share the URL in your application form.

In order to avoid asking you to do something I wouldn’t be prepared to do myself, I’ve recorded a short pitch myself. You can see that it’s shot on a phone, and didn’t use any editing:

A personal note

I’ve seen in my own career how powerful it has been to get better at public speaking and also the benefits of appearing on bigger stages. Having run a successful Community Speaker program at our London conference, I know that we can help more people on this journey.

I’ve been lucky enough to get enough of a start at our own events to bootstrap my way to bigger opportunities but I remember the 20 or so people who paid less than 20 bucks each to come to our first meet-up. We have also now built up enough of a support and coaching capability within Distilled that we have helped members of our team go from their first speaking opportunity to highly-rated SearchLove sessions in a matter of months. I want to bring those opportunities to more people. That means YOU.

I would strongly encourage you to think about the actual requirements. Don’t fall prey to imposter syndrome: are there things you are passionate about, where you have deep hands-on knowledge, and where you can teach even an experienced audience new things? If so, don’t sweat your speaking experience – let us be the judge of potential and get your application in.

How to apply

You’ll need to tell us:

  • Why you’d like to speak at SearchLove San Diego
  • Where you are based
  • What your speaking experience looks like so far
  • What topic you’d like to talk about – the more specific and actionable a topic you can describe, the better
  • Remember, the closing date for applications is December 19, 2018

And you’ll need to send us a short video as I described above!

apply now
