Video And Blog Marketing

How a nonprofit leveraged a Value Proposition Workshop to see a 136% increase in conversions

Experiment: Background

Background: Willow Creek Association is a nonprofit organization committed to transforming leaders in their respective communities. Each year, it hosts a two-day event (the Global Leadership Summit) to provide those leaders with training from speakers at successful companies.

Goal:  To increase order conversion

Primary Research Question: How can we create more value to increase conversions?

When The Global Leadership Summit team saw a significant decline in conversions from 2015 to 2016, they established a testing culture to understand why the change happened. One hypothesis was that removing the incentive had caused the decline, but testing proved it was not the incentive; it was the value proposition. Their next step was to analyze their current page for the gaps in perceived value for the prospect. The GLS team held a Value Proposition Workshop and applied their new learnings to the 2017 homepage for the summit – the results are worth sharing.

Experiment: Control

To begin, let’s focus on the value delivery of the control. At first glance, the GLS team noticed that the page held very little perceived value in the headlines and the copy. The GLS team concluded that the headline “About the GLS” did not give enough value. To a new prospect who has never heard of The Global Leadership Summit, the abbreviation “GLS” might be a big jump. To assume that the prospect would understand this (or even need this information) is dangerous because it does not meet the prospect where they are in the thought sequence. As marketers, we need to ask ourselves: what is the sequential process in their minds as they enter this page? The prospect will probably ask questions more like: How does this summit benefit me? What do I get out of it? Where is it located? Where do I have to travel? Who will be there? How can this improve my current career path? If marketers fail to ask the correct questions with the prospect in mind, we fail to find the correct answers.

As we journey down the page, we finally come across some useful information for the prospect. There is value in the “Location and Dates” section because it answers these crucial questions the prospect might have: Where is it located? Where do I have to travel? Can this product help me? Answering these questions is great. However, its location on the page is not. What is it doing in the middle of the page? If the page fails to answer these critical questions in the first 4 inches, in combination with the prospect’s impatience, the conversion could be lost. The GLS team discovered this was a problem that needed to be addressed.

And finally, after analyzing the entire page, there is absolutely no mention of the speakers in attendance. The GLS team observed that they were neglecting the other crucial questions, mentioned above, that prospects might have when entering this page.

Experiment: Treatment

Here is the new Global Leadership Summit page. Only after attending the Value Proposition Workshop did the GLS team extract the real value of the summit and transfer it to the homepage. Let’s see how the GLS team addressed the value perception gap.

The GLS team added quantifiable claims in the headline … in the first 4 inches of the page. We can already see a stark difference between the 2016 and 2017 headlines. The larger headline reads “Two days of World Class Leadership Training,” while the smaller text above it reads: “You have Influence. Join 400,000 of your peers to learn how to maximize it with …” The smaller text quantifies the number of people in attendance and the popularity of the summit, while the larger text uses numbers to start showing instances of the Primary Value Proposition. This is an effective way to initially capture interest and build Credibility.

This headline not only holds Credibility in the numbers; there is also Specificity in the blue call-out box at the top of the page. The sub-headline under “The Global Leadership Summit” is specific about the location of the event, which erases the concern about travel arrangements (a potential pain point for prospects), thus creating value. We will continue to see more of the same information elaborated further below, which creates congruence.

They also added specific information about the speakers. In the control, there was virtually no information about the speakers. In this version, we can see the speakers listed, and additionally, we see that the GLS team provided vital information that fostered conclusions. The GLS team leveraged speaker headshots, names AND positions at their respective companies; this increased the prospect’s perceived value, answering the question: “What do I get out of this?”

And finally, they added value throughout the page. At MarketingExperiments, we call this Congruence. At the top of the page, there was copy that read “convenient location near you.” Although the “Location near you” section seems far from the top, the GLS team still alluded to this Primary Value Proposition in the main headline. Since this is the expanded section of the main Value Proposition, it creates congruence and reaffirms to the prospect that there is value.

Experiment: Results

So, what does the GLS team get from building credibility and being specific? Not just a forceful Value Proposition, but more than double the conversions.

Without value, you are doing nothing for the prospect

As blunt as that may seem, the truth is the truth. People do not spend time delving into webpages or emails without knowing they are receiving something at the other end. Friends, marketers, do not waste your time replicating other webpages with their nonsense information, designs and vernacular; instead, test and use the prospect’s thought sequence. Ask the right questions to get the right answers. These tools will give you the results that you want for your company.

For more about our value proposition training, click here. To watch The Global Leadership Summit webinar, click here.



Proposing Better Ways to Think about Internal Linking

I’ve long thought that there was an opportunity to improve the way we think about internal links, and to make much more effective recommendations. I feel like, as an industry, we have done a decent job of making the case that internal links are important and that the information architecture of big sites, in particular, makes a massive difference to their performance in search (see: 30-minute IA audit and DistilledU IA module).

And yet we’ve struggled to dig deeper than finding particularly poorly-linked pages, and obviously-bad architectures, leading to recommendations that are hard to implement, with weak business cases.

I’m going to propose a methodology that:

  1. Incorporates external authority metrics into internal PageRank (what I’m calling “local PageRank”) to take pure internal PageRank, which is the best data-driven approach we’ve seen for evaluating internal links, and avoid its issues of focusing attention on the wrong areas

  2. Allows us to specify and evaluate multiple different changes in order to compare alternative approaches, figure out the scale of impact of a proposed change, and make better data-aware recommendations

Current information architecture recommendations are generally poor

Over the years, I’ve seen (and, ahem, made) many recommendations for improvements to internal linking structures and information architecture. In my experience, of all the areas we work in, this is an area of consistently weak recommendations.

I have often seen:

  • Vague recommendations – (“improve your information architecture by linking more to your product pages”) that don’t specify changes carefully enough to be actionable

  • No assessment of alternatives or trade-offs – does anything get worse if we make this change? Which page types might lose? How have we compared approach A and approach B?

  • Lack of a model – very limited assessment of the business value of making proposed changes – if everything goes to plan, what kind of improvement might we see? How do we compare the costs of what we are proposing to the anticipated benefits?

This is compounded in the case of internal linking changes because they are often tricky to specify (and to make at scale), hard to roll back, and very difficult to test (by now you know about our penchant for testing SEO changes – but internal architecture changes are among the trickiest to test because the anticipated uplift comes on pages that are not necessarily those being changed).

In my presentation at SearchLove London this year, I described different courses of action for factors in different areas of this grid:

It’s tough to make recommendations about internal links because while we have a fair amount of data about how links generally affect rankings, we have less information specifically focusing on internal links, and so while we have a high degree of control over them (in theory it’s completely within our control whether page A on our site links to page B) we need better analysis:

The current state of the art is powerful for diagnosis

If you want to get quickly up to speed on the latest thinking in this area, I’d strongly recommend reading these three articles and following their authors:

  1. Calculate internal PageRank by Paul Shapiro

  2. Using PageRank for internal link optimisation by Jan-Willem Bobbink

  3. Easy visualizations of PageRank and page groups by Patrick Stox

A load of smart people have done a ton of thinking on the subject and there are a few key areas where the state of the art is powerful:

There is no doubt that the kind of visualisations generated by techniques like those in the articles above are good for communicating problems you have found, and for convincing stakeholders of the need for action. Many people are highly visual thinkers, and it’s very often easier to explain a complex problem with a diagram. I personally find static visualisations difficult to analyse, however, and for discovering and diagnosing issues, you need data outputs and / or interactive visualisations:

But the state of the art has gaps:

The most obvious limitation is one that Paul calls out in his own article on calculating internal PageRank when he says:

“we see that our top page is our contact page. That doesn’t look right!”

This is a symptom of a wider problem which is that any algorithm looking at authority flow within the site that fails to take into account authority flow into the site from external links will be prone to getting misleading results. Less-relevant pages seem erroneously powerful, and poorly-integrated pages that have tons of external links seem unimportant in the pure internal PR calculation.

In addition, I hinted at this above, but I find visualisations very tricky – on large sites, they get too complex too quickly and have an element of the Rorschach to them:

My general attitude is to agree with O’Reilly that “Everything looks like a graph but almost nothing should ever be drawn as one”:

Not all of the visualisations I’ve seen are full link-graph visualisations, though – you will very often see crawl-depth charts, which are in my opinion even harder to read and which obscure even more information than regular link graphs. It’s not only the sampling but the inherent bias of showing links only in the order discovered from a single starting page – typically the homepage – which is useful only if that’s the only page on your site with any external links. This Sitebulb article talks about some of the challenges of drawing good crawl maps:

But by far the biggest gap I see is the almost total lack of any way of comparing current link structures to proposed ones, or for comparing multiple proposed solutions to see a) if they fix the problem, and b) which is better. The common focus on visualisations doesn’t scale well to comparisons – both because it’s hard to make a visualisation of a proposed change and because even if you can, the graphs will just look totally different because the layout is really sensitive to even fairly small tweaks in the underlying structure.

Our intuition is really bad when it comes to iterative algorithms

All of this wouldn’t be so much of a problem if our intuition was good. If we could just hold the key assumptions in our heads and make sensible recommendations from our many years of experience evaluating different sites.

Unfortunately, the same complexity that made PageRank such a breakthrough for Google in the early days makes for spectacularly hard problems for humans to evaluate. Even more unfortunately, not only are we clearly bad at calculating these things exactly, we’re surprisingly bad even at figuring them out directionally. [Long-time readers will no doubt see many parallels to the work I’ve done evaluating how bad (spoiler: really bad) SEOs are at understanding ranking factors generally].

I think that most people in the SEO field have a high-level understanding of at least the random surfer model of PR (and its extensions like reasonable surfer). Unfortunately, most of us are less good at having a mental model for the underlying eigenvector / eigenvalue problem and the infinite iteration / convergence of surfer models is troublesome to our intuition, to say the least.

I explored this intuition problem recently with a really simplified example and an unscientific poll:

The results were unsurprising – over 1 in 5 people got even a simple question wrong (the right answer is that a lot of the benefit of the link to the new page flows on to other pages in the site and it retains significantly less than an Nth of the PR of the homepage):

I followed this up with a trickier example and got a complete lack of consensus:

The right answer is that it loses (a lot) less than the PR of the new page except in some weird edge cases (I think only if the site has a very strange external link profile) where it can gain a tiny bit of PR. There is essentially zero chance that it doesn’t change, and no way for it to lose the entire PR of the new page.

Most of the wrong answers here are based on non-iterative understanding of the algorithm. It’s really hard to wrap your head around it all intuitively (I built a simulation to check my own answers – using the approach below).

All of this means that, since we don’t truly understand what’s going on, we are likely making very bad recommendations and certainly backing them up and arguing our case badly.

Doing better part 1: local PageRank solves the problems of internal PR

In order to be able to compare different proposed approaches, we need a way of re-running a data-driven calculation for different link graphs. Internal PageRank is one such re-runnable algorithm, but it suffers from the issues I highlighted above from having no concept of which pages it’s especially important to integrate well into the architecture because they have loads of external links, and it can mistakenly categorise pages as much stronger than they should be simply because they have links from many weak pages on your site.

In theory, you get a clearer picture of the performance of every page on your site – taking into account both external and internal links – by looking at internet-wide PageRank-style metrics. Unfortunately, we don’t have access to anything Google-scale here and the established link data providers have only sparse data for most websites – with data about only a fraction of all pages.

Even if they had dense data for all pages on your site, it wouldn’t solve the re-runnability problem – we wouldn’t be able to see how the metrics changed with proposed internal architecture changes.

What I’ve called “local” PageRank is an approach designed to attack this problem. It runs an internal PR calculation with what’s called a personalization vector designed to capture external authority weighting. This is not the same as re-running the whole PR calculation on a subgraph – that’s an extremely difficult problem that Google spent considerable resources to solve in their caffeine update. Instead, it’s an approximation, but it’s one that solves the major issues we had with pure internal PR of unimportant pages showing up among the most powerful pages on the site.

Here’s how to calculate it:

The next stage requires data from an external provider – I used raw mozRank – you can choose whichever provider you prefer, but make sure you are working with a raw metric rather than a logarithmically-scaled one, and make sure you are using a PageRank-like metric rather than a raw link count or ML-based metric like Moz’s page authority:

You need to normalise the external authority metric – as it will be calibrated on the entire internet while we need it to be a probability vector over our crawl – in other words to sum to 1 across our site:

We then use the NetworkX PageRank library to calculate our local PageRank – here’s some outline code:
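A minimal sketch of what that outline might look like, assuming a crawl export with ‘Source’ and ‘Destination’ columns and a separate CSV of raw external authority values keyed by URL (the file names and column headings here are illustrative, not prescriptive):

import csv
import networkx as nx

# Build the directed link graph from the crawl export
site = nx.DiGraph()
with open('crawl_edges.csv') as f:
    for edge in csv.DictReader(f):
        site.add_edge(edge['Source'], edge['Destination'])

# Raw external authority (e.g. raw mozRank) for whichever pages we have data on
external = {}
with open('external_authority.csv') as f:
    for row in csv.DictReader(f):
        external[row['URL']] = float(row['Authority'])

# Normalise to a probability vector over the crawled pages: pages with no
# external data get zero, and the values sum to 1 across the site
# (assumes at least one crawled page has external authority data)
total = sum(external.get(page, 0.0) for page in site)
personalization = {page: external.get(page, 0.0) / total for page in site}

# Local PageRank: personalised internal PR with a lower damping factor
local_pr = nx.pagerank(site, alpha=0.5, personalization=personalization)

local_pr is then just a dictionary of URL to local PageRank that you can sort, chart or join back onto your crawl data.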

What’s happening here is that by setting the personalization parameter to be the normalised vector of external authorities, we are saying that every time the random surfer “jumps”, instead of returning to a page on our site with uniform random chance, they return with probabilities proportional to the external authorities of those pages. This is roughly like saying that any time someone leaves your site in the random surfer model, they return via the weighted PageRank of the external links to your site’s pages. It’s fine that your external authority data might be sparse – you can just set values to zero for any pages without external authority data – one feature of this algorithm is that it’ll “fill in” appropriate values for those pages that are missing from the big data providers’ datasets.

In order to make this work, we also need to set the alpha parameter lower than we normally would (this is the damping parameter – normally set to 0.85 in regular PageRank – one minus alpha is the jump probability at each iteration). For much of my analysis, I set it to 0.5 – roughly representing the % of site traffic from external links – approximating the idea of a reasonable surfer.

There are a few things that I need to incorporate into this model to make it more useful – if you end up building any of this before I do, please do let me know:

  • Handle nofollow correctly (see Matt Cutts’ old PageRank sculpting post)

  • Handle redirects and rel canonical sensibly

  • Include top mR pages (or even all pages with mR) – even if they’re not in the crawl that starts at the homepage

    • You could even use each of these as a seed and crawl from these pages

  • Use the weight parameter in NetworkX to weight links by type to get closer to reasonable surfer model

    • The extreme version of this would be to use actual click-data for your own site to calibrate the behaviour to approximate an actual surfer!

Doing better part 2: describing and evaluating proposed changes to internal linking

After my frustration at trying to find a way of accurately evaluating internal link structures, my other major concern has been the challenges of comparing a proposed change to the status quo, or of evaluating multiple different proposed changes. As I said above, I don’t believe that this is easy to do visually as most of the layout algorithms used in the visualisations are very sensitive to the graph structure and just look totally different under even fairly minor changes. You can obviously drill into an interactive visualisation of the proposed change to look for issues, but that’s also fraught with challenges.

So my second proposed change to the methodology is to find ways to compare the local PR distribution we’ve calculated above between different internal linking structures. There are two major components to being able to do this:

  1. Efficiently describing or specifying the proposed change or new link structure; and

  2. Effectively comparing the distributions of local PR – across what is likely tens or hundreds of thousands of pages

How to specify a change to internal linking

I have three proposed ways of specifying changes:

1. Manually adding or removing small numbers of links

Although it doesn’t scale well, if you are just looking at changes to a limited number of pages, one option is simply to manipulate the spreadsheet of crawl data before loading it into your script:

2. Programmatically adding or removing edges as you load the crawl data

Your script will have a function that loads the data from the crawl file and, as it does so, builds the graph structure (a DiGraph in NetworkX terms, which stands for Directed Graph). At this point, if you want to simulate adding a sitewide link to a particular page, for example, you can do that – if this line sat inside the loop loading edges, it would add a link from every page to our London SearchLove page:

# inside the loop that loads each edge from the crawl file:
site.add_edges_from([(edge['Source'],
                      'https://www.distilled.net/events/searchlove-london/')])

You don’t need to worry about adding duplicates (i.e. checking whether a page already links to the target) because a DiGraph has no concept of multiple edges in the same direction between the same nodes, so if it’s already there, adding it will do no harm.

Removing edges programmatically is a little trickier – because if you want to remove a link from global navigation, for example, you need logic that knows which pages have non-navigation links to the target, as you don’t want to remove those as well (you generally don’t want to remove all links to the target page). But in principle, you can make arbitrary changes to the link graph in this way.
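In outline, and assuming you have separately worked out which pages link to the target from their main content rather than just the navigation (the content_linkers set below is purely illustrative), removing a sitewide navigation link might look like this:

target = 'https://www.distilled.net/events/searchlove-london/'

# Pages known to link to the target from body content as well as the nav;
# you would need to identify these yourself, e.g. from link position data
content_linkers = {'https://www.distilled.net/blog/searchlove-recap/'}

# Drop the edge to the target from every other page that currently links to it
for page in list(site.predecessors(target)):
    if page not in content_linkers:
        site.remove_edge(page, target)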

3. Crawl a staging site to capture more complex changes

As the changes get more complex, it can be tough to describe them in sufficient detail. For certain kinds of changes, it feels to me as though the best way to load the changed structure is to crawl a staging site with the new architecture. Of course, in general, this means having the whole thing implemented and ready to go, the effort of doing which negates a large part of the benefit of evaluating the change in advance. We have a secret weapon here which is that the “meta-CMS” nature of our ODN platform allows us to make certain changes incredibly quickly across site sections and create preview environments where we can see changes even for companies that aren’t customers of the platform yet.

For example, it looks like this to add a breadcrumb across a site section on one of our customers’ sites:

There are a few extra tweaks to the process if you’re going to crawl a staging or preview environment to capture internal link changes – because we need to make sure that the set of pages is identical in both crawls so we can’t just start at each homepage and crawl X levels deep. By definition we have changed the linking structure and therefore will discover a different set of pages. Instead, we need to:

  • Crawl both live and preview to X levels deep

  • Combine into a superset of all pages discovered on either crawl (noting that these pages exist on both sites – we haven’t created any new pages in preview)

  • Make lists of pages missing in each crawl and crawl those from lists

Once you have both crawls, and both include the same set of pages, you can re-run the algorithm described above to get the local PageRanks under each scenario and begin comparing them.
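As a rough sketch of that bookkeeping, assuming live_pages and preview_pages are the sets of URLs discovered by each crawl (with preview URLs already mapped onto the live hostname):

# Superset of everything discovered on either crawl
all_pages = live_pages | preview_pages

# URLs we still need to request in each environment so that both
# crawls end up covering exactly the same set of pages
missing_from_live = all_pages - live_pages
missing_from_preview = all_pages - preview_pages

The two “missing” lists are then fed back into the crawler as list-mode crawls before the local PageRank calculation is re-run on each graph.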

How to compare different internal link graphs

Sometimes you will have a specific problem you are looking to address (e.g. only y% of our product pages are indexed) – in which case you will likely want to check whether your change has improved the flow of authority to those target pages, compare their performance under proposed change A and proposed change B etc. Note that it is hard to evaluate losers with this approach – because the normalisation means that the local PR will always sum to 1 across your whole site so there always are losers if there are winners – in contrast to the real world where it is theoretically possible to have a structure that strictly dominates another.
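For example, if local_pr_current and local_pr_proposed are the dictionaries returned by the calculation above for the two link graphs, a quick sanity check on a set of target pages (the URL pattern below is just an illustration) is to compare the share of local PageRank they capture under each scenario:

# Hypothetical labelling rule for the pages we care about
product_pages = {page for page in local_pr_current if '/products/' in page}

current_share = sum(local_pr_current.get(page, 0.0) for page in product_pages)
proposed_share = sum(local_pr_proposed.get(page, 0.0) for page in product_pages)

print(f"Product pages capture {current_share:.1%} of local PR now, "
      f"{proposed_share:.1%} under the proposed structure")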

In general, if you are simply evaluating how to make the internal link architecture “better”, you are less likely to jump to evaluating specific pages. In this case, you probably want to do some evaluation of different kinds of page on your site – identified either by:

  1. Labelling them by URL – e.g. everything in /blog or with ?productId in the URL

  2. Labelling them as you crawl

    1. Either from crawl structure (e.g. all pages 3 levels deep from the homepage, all pages linked from the blog, etc.)

    2. Or based on the crawled HTML (all pages with more than x links on them, with a particular breadcrumb or piece of meta information labelling them)

  3. Using modularity to label them automatically by algorithmically grouping pages in similar “places” in the link structure

I’d like to be able to also come up with some overall “health” score for an internal linking structure – and have been playing around with scoring it based on some kind of equality metric, under the thesis that if you’ve chosen your indexable page set well, you want to distribute external authority as evenly as possible throughout that set. This thesis seems most likely to hold true for large long-tail-oriented sites that get links to pages which aren’t generally the ones looking to rank (e.g. e-commerce sites). It also builds on some of Tom Capper’s thinking (video, slides, blog post) about links being increasingly important for getting into Google’s consideration set for high-volume keywords, which is then reordered by usage metrics and ML proxies for quality.

I have more work to do here, but I hope to develop an effective metric – it’d be great if it could build on established equality metrics like the Gini Coefficient. If you’ve done any thinking about this, or have any bright ideas, I’d love to hear your thoughts in the comments, or on Twitter.
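For what it’s worth, a Gini coefficient over the local PageRank values of your indexable page set is only a few lines of Python – this is just the textbook formula applied to the local_pr dictionary from earlier, not a tried-and-tested health score:

def gini(values):
    # Standard Gini coefficient: 0 = perfectly equal, approaching 1 = maximally unequal
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Restrict to your indexable page set as appropriate
print(f"Gini of local PageRank: {gini(list(local_pr.values())):.3f}")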


How to Get More SEO Value from your Nofollow Links


The presence of nofollow links is important for your website, as they help generate more traffic to your pages. That makes them a crucial element in your link building campaign and a help to your SEO, so it is worth getting the most value out of these links.

With this in mind, here are some things that you need to know about nofollow links, along with some great strategies that you can utilize to effectively improve your overall campaign.

Nofollow Links


A nofollow link is a type of link in a web page that a search engine does not acknowledge. This means that the authority and rank of the linked web page will not be affected by those links. Nofollow links help prevent spam and low-quality web pages from spreading, and they have also helped user-generated content gain more traffic. You can use a plugin in your browser that detects nofollow links on a page and highlights them in a dotted box. Here’s what it looks like:

nofollow extension screenshot

Then, you can check the underlying code to see if it’s really a nofollow link.

rel nofollow screenshot
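If you’d rather script this check than eyeball the source, here’s a small sketch using the requests and BeautifulSoup libraries (the URL is a placeholder) that lists which links on a page carry rel="nofollow":

import requests
from bs4 import BeautifulSoup

url = "https://example.com/some-page/"  # page you want to inspect
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

for a in soup.find_all("a", href=True):
    rel = a.get("rel") or []  # BeautifulSoup returns rel as a list of tokens
    if "nofollow" in rel:
        print("nofollow:", a["href"])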

A good number of websites today apply nofollow to most of their external links. These links still hold value, as they allow potential visitors to enter a certain web page, which in turn increases traffic for that specific page. Some instances where nofollow links are usually used are:

  • Social Media Profiles: Any links posted on these accounts are nofollow links, and these include your Twitter, Facebook, Instagram, and LinkedIn accounts.
  • Article Sources: Some articles cite their sources from other websites to provide proof and credibility to their content. It’s important to remember that most of these citations are nofollow links. However, this is effective in increasing the traffic that goes into the linked page or cited source.
  • Forums/Communities: Websites like Quora or Reddit contain a huge amount of content that users openly submit. Some of this content contains links to the user’s website or to other websites they have cited. Forums like these usually nofollow these links.
  • Comments: Some websites allow users to give their insights with regard to the content, which provides solid user input to the site. Any links in the comments section are nofollow links, which makes sure that the website does not follow unwanted links that may affect overall page quality.

Effective Strategies to Boost the Value of Nofollow Links

Nofollow links help with connecting users to different kinds of content on the internet. Here are some strategies or best practices you can use to help give it even more value.

Utilize the Power of Social Media

Most people nowadays access the internet using their mobile devices, such as phones and tablets. This has made content much more accessible and convenient, which makes nofollow links very important. For example, if you have some published content that you would like other users to see, like a review or a blog, you can post the link on a social media website, like Twitter – even if the link is nofollow.

Nofollow Twitter screenshot

You can also utilize Instagram hashtags, which help people associate your links with your content. This works handily for current news and even the latest events. Social media has the power to spread and promote your content to a wider audience, so use it well.

Republish your Content on other Platforms

There are many different platforms where you can publish and distribute content, which makes it essential for you to diversify and expand your existing platforms. Simply put, make use of different platforms in your niche to republish content.

For example, if you published an article on a news site, you can republish it on a different site as well, provided that no one claims exclusivity over that article. While doing so, you can link it all back to the original post, which helps promote your content and, through the effective use of your nofollow links, increase traffic to your page. This is similar to promoting your content across different social media accounts – which will not hinder your SEO.

Make use of Quora Contributions

Quora is a great source for a wide variety of inquiries that range from various topics like entertainment, education, sports, and much more. A good number of these inquiries have seen some helpful and positive responses from users, some of which have external links that help the users. These external links are examples of effective nofollow links, which can help users access different kinds of content around the internet.

Nofollow Quora screenshot

Low-Popularity Blogs

While they may not generate as much traffic as their high-popularity counterparts, comments on low-popularity blogs are great sources of good nofollow links. The reason is that your links are seen as more authentic and reputable, which would add even more value. This can also lead to different users citing your content and looking at you as an important source for their topics.

Key Takeaway

Nofollow links are essential when it comes to link building and SEO, which is why using these strategies would definitely bring about more positive results to your website and content.

Nofollow links do not transfer any link juice, but they still do an adequate job of increasing the traffic a page receives. Also, clicking the link is only the first part of the user’s journey on your page/website; it’s your job to make them stay and turn them into a loyal follower. Giving your nofollow links more value ensures that you get a good amount of traffic to your webpages and helps you reach your SEO goals.

If you have any questions or insights about nofollow links and SEO in general, leave a comment below and let’s talk.


Does Tomorrow Deliver Topical Search Results at Google?

The Oldest Pepper Tree in California

At one point in time, search engines such as Google learned about topics on the Web from sources such as Yahoo! and the Open Directory Project, which provided categories of sites, within directories that people could skim through to find something that they might be interested in.

Those listings of categories included hierarchical topics and subtopics; but they were managed by human beings and both directories have closed down.

In addition to learning about categories and topics from such places, search engines used to use such sources to do focused crawls of the web, to make sure that they were indexing as wide a range of topics as they could.

It’s possible that we are seeing those sites replaced by sources such as Wikipedia and Wikidata and Google’s Knowledge Graph and the Microsoft Concept Graph.

Last year, I wrote a post called, Google Patents Context Vectors to Improve Search. It focused upon a Google patent titled User-context-based search engine.

In that patent we learned that Google was using information from knowledge bases (sources such as Yahoo Finance, IMDB, Wikipedia, and other data-rich and well organized places) to learn about words that may have more than one meaning.

An example from that patent was that the word “horse” has different meanings in different contexts.

To an equestrian, a horse is an animal. To a carpenter, a horse is a work tool when they do carpentry. To a gymnast, a horse is a piece of equipment that they perform maneuvers upon during competitions with other gymnasts.

A context vector takes these different meanings from knowledge bases, and the number of times they are mentioned in those places to catalogue how often they are used in which context.

I thought knowing about context vectors was useful for doing keyword research, but I was excited to see another patent from Google appear where the word “context” played a featured role. When you search for something such as a “horse”, the search results you receive are going to be mixed with horses of different types, depending upon the meaning. As this new patent tells us about such search results:

The ranked list of search results may include search results associated with a topic that the user does not find useful and/or did not intend to be included within the ranked list of search results.

If I was searching for a horse of the animal type, I might include another word in my query that identified the context of my search better. The inventors of this new patent seem to have a similar idea. The patent mentions:

In yet another possible implementation, a system may include one or more server devices to receive a search query and context information associated with a document identified by the client; obtain search results based on the search query, the search results identifying documents relevant to the search query; analyze the context information to identify content; and generate a group of first scores for a hierarchy of topics, each first score, of the group of first scores, corresponding to a respective measure of relevance of each topic, of the hierarchy of topics, to the content.

From the pictures that accompany the patent it looks like this context information is in the form of Headings that appear above each search result that identify Context information that those results fit within. Here’s a drawing from the patent showing off topical search results (showing rock/music and geology/rocks):

Search Results in Context
Different types of ‘rock’ on a search for ‘rock’ at Google

This patent does remind me of the context vector patent, and the two processes in these two patents look like they could work together. This patent is:

Context-based filtering of search results
Inventors: Sarveshwar Duddu, Kuntal Loya, Minh Tue Vo Thanh and Thorsten Brants
Assignee: Google Inc.
US Patent: 9,779,139
Granted: October 3, 2017
Filed: March 15, 2016

Abstract

A server is configured to receive, from a client, a query and context information associated with a document; obtain search results, based on the query, that identify documents relevant to the query; analyze the context information to identify content; generate first scores for a hierarchy of topics, that correspond to measures of relevance of the topics to the content; select a topic that is most relevant to the context information when the topic is associated with a greatest first score; generate second scores for the search results that correspond to measures of relevance, of the search results, to the topic; select one or more of the search results as being most relevant to the topic when the search results are associated with one or more greatest second scores; generate a search result document that includes the selected search results; and send, to a client, the search result document.

It will be exciting to see topical search results start appearing at Google.




The SEO Apprentice’s Toolbox: Gearing Up for Analysis

Being new to SEO is tricky. As a niche market within a niche market, there are many tools and resources unfamiliar to most new professionals. And with so much to learn, it is nearly impossible to start real client work without first dedicating six months exclusively to industry training. Well … that’s how it may seem at first.

While it may be intimidating, investigating real-world problems is the best way to learn SEO. It exposes you to industry terminology, introduces you to valuable resources and gets you asking the right questions.

As a fairly new Analyst at Distilled, I know from experience how difficult it can be to get started. So here’s a list of common SEO analyses and supporting tools that may help you get off on the right foot.

Reviewing on-page elements

Page elements are essential building blocks of any web page. And pages with missing or incorrect elements risk not being eligible for search traffic. So checking these is necessary for identifying optimization opportunities and tracking changes. You can always go to the HTML source code and manually identify these problems yourself, but if you’re interested in saving a bit of time and hassle, Ayima’s Google Chrome extension Page Insights is a great resource.

This neat little tool identifies on-page problems by analyzing 24 common on-page issues for the current URL and comparing them against a set of rules and parameters. It then provides a list of all issues found, grouped into four priority levels: Errors, Warnings, Notices and Page Info. Descending from most to least severe, the first 3 categories (Errors, Warnings & Notices) identify all issues that could impact organic traffic for the page in question. The last category (Page Info) provides exact information about certain elements of the page.

For every page you visit Page Insights will give a warning next to its icon, indicating how many vulnerabilities were found on the page.

Clicking on the icon gives you a drop-down listing the vulnerabilities and page information found.

What makes this tool so useful is that it also provides details about each issue, like how it can cause harm to the page and correction opportunities. In this example, we can see that this web page is missing an H1 tag but, in this case, it could be corrected by adding an H1 tag around the page’s current heading (which is not coded as an H1).

In a practical setting, Page Insights is great for quickly identifying common on-page issues that should be fixed to ensure best SEO practice.
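If you ever want to script a quick version of the manual source-code check mentioned earlier, a minimal sketch with requests and BeautifulSoup might look like this; the URL and the three checks are purely illustrative, and Page Insights covers far more:

import requests
from bs4 import BeautifulSoup

url = "https://example.com/"  # page to audit (placeholder)
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

checks = {
    "title tag": soup.title is not None and bool(soup.title.get_text(strip=True)),
    "meta description": soup.find("meta", attrs={"name": "description"}) is not None,
    "exactly one H1": len(soup.find_all("h1")) == 1,
}
for element, ok in checks.items():
    print(f"{element}: {'OK' if ok else 'missing or incorrect'}")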

Additional tools for reviewing on-page elements:

Supplemental readings:

Analyzing page performance

Measuring the load functionality and speed of a page is an important and common practice since both metrics are correlated with user experience and are highly valued by search engines. There are a handful of tools that are applicable to this task but because of its large quantity of included metrics, I recommend using WebPagetest.org.

Emulating various browsers, this site allows users to measure the performance of a web page from different locations. After sending a real-time page request, WebPagetest provides a sample of three tests containing request details, such as the complete load time, the load time breakdown of all page content, and a final image of the rendered page. There are various configuration settings and report types within this tool, but for most analyses, I have found that running a simple test and focusing on the metrics presented in the Performance Results supply ample information.

There are several metrics presented in this report, but data provided in Load Time and First Byte work great for most checks. Factoring in Google’s suggestion to have desktop load time no greater than 2 seconds and a time to first byte of 200ms or less, we can gauge whether or not a page’s speed is properly optimized.
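If you just want a rough spot check of time to first byte from your own machine (WebPagetest’s distributed test agents are more representative, since network location matters), the requests library can approximate it; response.elapsed measures the time from sending the request to parsing the response headers:

import requests

url = "https://example.com/"  # placeholder
response = requests.get(url)
ttfb_ms = response.elapsed.total_seconds() * 1000

# Google's rough guidance cited above: first byte in 200ms or less
print(f"Approximate time to first byte: {ttfb_ms:.0f}ms")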

Prioritizing page speed performance areas

Knowing if a page needs to improve its performance speed is important, but without knowing what areas need improving you can’t begin to make proper corrections. Using WebPagetest in tandem with Google’s PageSpeed Insights is a great solution for filling in this gap.

Free for use, this tool measures a page’s desktop and mobile performance to evaluate whether it has applied common performance best practices. Scored on a scale of 0-100 a page’s performance can fall into one of three categories: Good, Needs Work or Poor. However, the key feature of this tool, which makes it so useful for page speed performance analysis, is its optimization list.

Located below the review score, this list highlights details related to possible optimization areas and good optimization practices currently in place on the page. By clicking the “Show how to fix” drop down for each suggestion you will see information related to the type of optimization found, why to implement changes and specific elements to correct.

In the image above, for example, compressing two images to reduce the number of bytes that need to be loaded can improve this web page’s speed. By making this change the page could expect a reduction in image byte size of 28%.

Using WebPagetest and PageSpeed Insights together can give you a comprehensive view of a page’s speed performance and assist in identifying and executing on good optimization strategies.

Additional tools for analyzing page performance:

Supplemental readings:

Investigating rendering issues

How Googlebot (or Bingbot or MSNbot) crawls and renders a page can be completely different from what is intended, and this typically occurs as a result of the crawler being blocked by a robots.txt file. If Google sees an incomplete or blank page, it assumes the user is having the same experience, which could affect how that page performs in the SERPs. In these instances, the Webmaster tool Fetch as Google is ideal for identifying how Google renders a page.

Located in Google Search Console, Fetch as Google allows you to test whether Googlebot can access the pages of a site, see how it renders them and determine whether any resources are blocked from the crawler.

When you look up a specific URL (or domain) Fetch as Google gives you two tabs of information: fetching, which displays the HTTP response of the specified URL; and rendering, which runs all resources on the page, provides a visual comparison of what Googlebot sees against what (Google estimates) the user sees and lists all resources Googlebot was not able to acquire.

For an analysis application, the rendering tab is where you need to look. Begin by checking the rendering images to ensure both Google and the user are seeing the same thing. Next, look at the list to see what resources were unreachable by Googlebot and why. If the visual elements are not displaying a complete page and/or important page elements are being blocked from Googlebot, there is an indication that the page is experiencing some rendering issues and may perform poorly in the search engine.

Additional tools for investigating rendering issues:

Supplemental readings:

Checking backlink trends

Quality backlinks are extremely important for making a strong web page, as they indicate to search engines a page’s reliability and trustworthiness. Changes to a backlink profile could easily affect how a page is ranked in the SERPs, so checking this is important for any webpage/website analysis. As a testament to their importance, there are several tools dedicated to backlink analytics. However, I have a preference for the site Ahrefs due to its comprehensive yet simple layout, which makes it great for on-the-spot research.

An SEO tool well known for its backlink reporting capabilities, Ahrefs measures several backlink performance factors and displays them in a series of dashboards and graphs. While there is plenty to review, for most analysis purposes I find the “Backlinks” metric and “New & lost backlinks” graph to be the best places to focus.

Located under the Site Explorer tab, “Backlinks” identifies the total number of backlinks pointing to a target website or URL. It also shows the quantitative changes in these links over the past 7 days with the difference represented by either a red (negative growth) or green (positive growth) subscript. In a practical setting, this information is ideal for providing quick insight into current backlink trend changes.

Under the same tab, the “New & lost backlinks” graph provides details about the total number of backlinks gained and lost by the target URL over a period of time.

The combination of these particular features works very well for common backlink analytics, such as tracking backlinks profile changes and identifying specific periods of link growth or decline.

Additional tools for checking backlink trends:

Supplemental readings:

Creating your toolbox

This is only a sample of tools you can use for your SEO analyses and there are plenty more, with their own unique strengths and capabilities, available to you. So make sure to do your research and play around to find what works.

And if you are to take away only one thing from this post, just remember that as you work to build your own personal toolbox what you choose to include should best work for your needs and the needs of your clients.


Gen X, Gen Y Targeting: How to Target Different Generations on Social Media


Highly compelling copy—the kind that rattles the questions of your audience and shockingly (and instantly) gives them the answer—isn’t easy to do. It’s an even greater challenge when you’re targeting multiple generations at once. In this article, you’ll learn the three components of turning ‘good’ copy into ‘I gotta buy this’ branding, no matter how old or young your viewers may be.

First Up: Know the Difference Between X and Y (Even Z!)

While you may know a millennial when you see one, do you know how to reach them in a sentence or two? Do you know how to turn a Generation X viewer (otherwise known as a ‘latchkey kid’) into a devoted consumer?  Don’t sweat it out. Instead, learn the key psychological components of what makes them ‘click and buy’ – and then sit back, ride the waves of high sales and repeat:

  • Generation X. Generation X are those who were born between 1966-1976. From no technology to highly sophisticated technology, they’ve come a long way. However, they also tend to get overwhelmed by too much digital ‘noise.’ Consider Generation X your most highly educated generation (26% have a bachelor’s degree or higher). Lead them into full-fledged online engagement with education-based products, facts or interview-type posts.
  • Generation Y. Generation Y, otherwise known as the Millennials, dominate on social media. Born between 1977-1994 they are career professionals, many of whom learned in the early 2000’s how to create a website, sell a product and market it to the masses.

Now in their 30’s and 40’s, they are prone to ignoring the typical marketing pitch, and look for compelling content, fine-tuned sales funnels and impeccable images to stay engaged.  They’ve been in the presence of technology since childhood, and crave a variety of digital content, such as ezines, podcasts, and blog posts. Blast them with a variety of marketing content and you’ll keep them engaged and excited for what’s next.

Want to grab hundreds (or thousands) of leads within days and watch your business skyrocket? Give them a free gift (like an ebook or meditation track) in exchange for their email. They want instant gratification now, and they’ll become a loyal customer if you can offer something of high value.

  • Generation Z. There isn’t as much market research for Generation Z, because they’re the ‘babies’ of all generations. Born between 1995-2012, they may grow up to be the most technologically savvy of any generation. Chances are, they will be raised to expect diversity in their classroom (and online learning, such as social media), through interactive learning platforms (live streaming, anyone?). One of the greatest marketing tactics you can use to reach them as they grow up? Interactive webinars, and eCourse platforms.

Three Techniques to Reach Each Generation on Social Media Instantly

Now that you know what each generation is about, it’s time to dive head first into targeting each generation on social media—and meet them where they already are. Here they are: three generations, and three strategies that payoff:

  • Generation X. This generation is most responsive to emails, so use that to your advantage! Create an email marketing campaign (do this easily by using a service like Aweber or MailChimp) and grab their attention with three inspiring or informative blog posts a week—sent right to their inbox.

Being the most financially responsible of all generations, hook their loyalty with online deals, home ownership tips, or ‘freebies’ (for example: offer free customization on a product you sell).

  • Generation Y. Ah, the millennials. The generation that makes up over 70 million people and has the potential to become your target audience all on its own. Sharp, smart and innovative, they are constantly thirsty for more knowledge.

According to The New York Times, over 64% of millennials would rather make less than $45,000 a year doing what they love than make more money, doing something they aren’t inspired by.

Create content that inspires them. Promote a product that will trigger fearlessness in them to do what they love, increase their productivity, or allow them to instantly achieve greater work/life balance. Just make sure it’s 100% ‘you’. Don’t fake your way to building a brand. Speak, write about, or post videos that are in alignment with your vision, and that you’re passionate about. When you’re ‘you’ and your brand is authentic, they’ll beg for more.

  • Generation Z. According to a study done by Forbes.com, Generation Z spends 74% of their time on social media. While they might not be as technologically experienced as Generation Y, they’re on social media constantly—and expect instant contact. Cater to your Generation Z audience by developing (and maintaining) a strong presence on Snapchat, or with Instagram Stories (they respond well to stunning visuals and fun filters!). Kill two birds with one stone by live streaming videos on Facebook, and/or weekly YouTube videos.

Do you know what generation your audience is composed of? Or is it a combination of X, Y and Z? To know your audience is to compel them to fall in love with your brand. Use the above techniques and see what works best!


Three Powerful Strategies to Hack On-Page SEO [Case Study]


As part of the WordLift team and as a blogger, I run several experiments each week to see what kinds of strategies can help hack on-page SEO. These experiments have given me more in-depth information about on-page SEO, along with a better sense of how to do it properly to help my SEO campaigns.

In this article, I want to show you three factors that affect on-page SEO to the point that, if you learn to leverage them, you can hack on-page SEO.

We’ll start with a practical case study. I asked Google “what is a featured snippet,” and this is what I got:

featured snippet definition

How did they know what to show? And why is this crucial for your on-page SEO?

But, before we delve into the case study, let me clarify the main difference between on-page and off-page SEO and what they really mean:

SEO 101: Off-Page vs. On-Page SEO

When you want to make your website rank through search engines, like Google, you learn right away that there are two kinds of strategies to adopt: on-page and off-page. Often the two walk hand in hand. Yet some content marketers choose either the former or the latter.

What is the main difference?

Each time Google ranks a page, it crawls and indexes the page. Therefore, before your pages get in Google’s ranking, you may want to make sure that the previous steps of the process are taken care of, which you can do by simply publishing content or submitting your sitemap to search engines.

What we look at in this article is the final part of the process: ranking. How does a search engine decide whether a page or website deserves to rank?

It looks at authority and relevance.

Simply put, authority is about off-page SEO; while relevance is about on-page SEO.

A good proxy for authority is domain and page authority. Those are two metrics that Moz created to assess how a domain or a page is seen through Google’s lenses.

Domain authority is connected to off-page SEO activity. In short, when other sites link to your website, from your website’s perspective those are called backlinks. The more quality backlinks you get, the more your domain authority will improve over time.

The other factor is relevance. Relevance is about on-page SEO. How is relevance assessed? There are several ways to evaluate how relevant a page can be. The main ones would be: keywords, internal linking, page formatting, user experience, and quality of data.

If I had to put it in a simple diagram, it would be:

Ranking factors diagram

Now that we clarified the distinction between on-page and off-page SEO let’s jump into the three steps to hack your on-page.

Step One: Target the Featured Snippet

A featured snippet is a box you see before any result on Google.

featured snippet screenshot

The objective is clear: give a quick answer to a user’s question.

By targeting the featured snippet, you’re going after Rank Zero. In other words, you’re not anymore targeting the first positions in Google’s SERP (Search Engine Results Page).

You may ask, why do I go for the featured snippet when I don’t even get my page to rank first? That’s the thing. Ranking first on Google isn’t enough anymore. In fact, in a few cases, Google takes content that comes from the other nine results and places it as a featured snippet.

You can see it also in this example:

Ahrefs featured snippet ranking

On the long-tail query “what is a featured snippet”, Google is using the content coming from Ahrefs.com even though it is ranking 9th in the SERP.

Yet this is not the primary reason you want to target the featured snippet. For instance, when I compared Ahrefs and Search Engine Watch’s domain and page authority with Open Site Explorer I was stunned with the results:

open site explorer domain and page authority screenshot

Although the authority of both pages is 68, when it comes to domain authority, Ahrefs loses against Search Engine Watch.

So, why did Ahrefs gain the featured snippet? And how can you get a featured snippet when your domain authority is lower, and you’re not even ranking as number one?

First, regarding this long-tail keyword, Ahrefs won because it targeted Google’s featured snippet. In fact, when you focus on the featured snippet you shift your mindset. Therefore, your overall on-page SEO strategy changes accordingly.

How? You start doing the following things:

  • Optimize for long-tail keywords
  • Look for user intent to come up with a list of questions
  • Structure your content to give accurate answers

In short, by changing your mindset, you automatically apply a whole new set of tactics that make your content less conversational and more transactional. Your overall strategy will change, too: you’ll stop thinking about keywords and start focusing on user intent. That will bring your on-page SEO to a whole new level.

That also leads us to another aspect of hacking your on-page SEO strategy.

Step Two: Use Headings Strategically

The first thing you should do when writing an article (if you want to optimize for the featured snippet) is to list the questions a user might have. How? You have to get creative here. For instance, you could use Google’s related searches.

Or, if you sell a product or service, you could dig into customer support questions and come up with a list of FAQs.

Once you have your list of questions, keep this in mind: your average reader is like a crawler.

What do I mean? Most readers won’t read the whole article. They will skim it. Therefore, headings are a way for them to know where to focus their attention. The same applies to Google’s crawlers. When they crawl a page, headings act as signals that "tell" them where to focus their effort when indexing it.

In other words, headings are some of the most fundamental yet powerful ways to signal to crawlers that a piece of content is relevant.

That is also how the Ahrefs article managed to win that featured snippet. The most effective heading to use for the snippet is the H2:

Use heading 2 tags

The more H2 tags you use within the article to answer specific questions, the more you signal to search engines where to focus their effort. Therefore, your content’s chances of becoming a featured snippet increase.
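
To make this concrete, here is a minimal sketch of how such a section might be structured in your HTML, with a hypothetical question as an H2 heading followed by a short, direct answer:

    <h2>What is a featured snippet?</h2>
    <p>A featured snippet is a box that appears above the organic results,
       giving a quick answer to the user's question.</p>

    <h2>How do you optimize for the featured snippet?</h2>
    <p>Target long-tail, question-style keywords and answer each question
       in a concise paragraph directly below its H2 heading.</p>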

That leads us to the third step.

Step Three: Structured Data

I used a Google Chrome extension called OpenLink Structured Data Sniffer to see what kind of structured data Ahrefs had on-page compared to Search Engine Watch.

This is Ahrefs:

Ahrefs structured data

Compared to Search Engine Watch:

search engine watch structured data

Both those websites use structured data. Yet Ahrefs uses JSON-LD while Search Engine Watch uses Microdata.

However, before we go further in-depth, let Google explain what structured data is:

Google Search works hard to understand the content of a page. You can help us by providing explicit clues about the meaning of a page to Google by including structured data on the page. Structured data is a standardized format for providing information about a page and classifying the page content; for example, on a recipe page, what are the ingredients, the cooking time and temperature, the calories, and so on.

Google uses structured data that it finds on the web to understand the content of the page, as well as to gather information about the web and the world in general.

Source: Google Developers

Therefore, structured data helps Google understand what a page or website is about. But the question is, what format does Google like the most?

JSON-LD recommendation

The answer is JSON-LD. It is a lightweight format that allows the creation of linked data. In short, where you have a piece of content, JSON-LD translates it into data, which is linked. In this way, it contributes to building a smarter and better online landscape.

In short, JSON-LD allows open data to be linked. For instance, to say "John Lennon’s spouse is Cynthia Lennon", JSON-LD expresses it as follows:

JSON-LD example

Source: json-ld.org
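
A minimal sketch of that markup, along the lines of the json-ld.org example, looks roughly like this (on a real page it would sit inside a script tag with type="application/ld+json"):

    {
      "@context": "https://json-ld.org/contexts/person.jsonld",
      "@id": "http://dbpedia.org/resource/John_Lennon",
      "name": "John Lennon",
      "born": "1940-10-09",
      "spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
    }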

With this kind of strategy, you can have your website "talk" to search engines, thereby increasing your chances of being discovered by your target audience. Conveniently, there are tools, like WordLift, that allow you to do this in a few clicks.

Connecting the Dots

At times on-page SEO seems more of an art than a science. At least that is how it seems at first sight. However, three things are crucial in my opinion:

  • Switch your mindset from ranking to the snippet
  • Optimize the headings
  • Quality content has to walk hand in hand with the quality of data

In conclusion, by optimizing for the featured snippet you change your mindset. Suddenly, you look for long-tail keywords that resemble questions. By doing so, you also address the most specific pain points that your users and potential customers might have. Therefore, your content goes from generic to specific; from informational to transactional; and from author-centric to reader-centric.

In addition, by using your headings strategically you signal to both users and search engine crawlers where to focus when skimming your content.

Lastly, by combining quality content with quality data through the JSON-LD format, you make your readers happy while feeding Google’s crawlers what they’re looking for.

That is how you hack your on-page SEO.

Author Bio

Gennaro Cuofano is a Content Marketer and Business Developer at WordLift. After three years in the financial industry in San Diego, California, Gennaro created his blog FourWeekMBA.com where he shares his experiments and findings. He is part of the growth team at WordLift and brings his business insight to spread the value of WordLift and support its community of users.

Three Powerful Strategies to Hack On-Page SEO [Case Study] was originally posted by Video And Blog Marketing

How to Think About Email Capture Forms Like a Customer

What keeps customers from filling out one of your email capture forms? Is it because they don’t believe you will deliver what you say? Is it because it’s too long? Too short?

In this clip from an in-person training session at 2016’s NIO Summit hosted by NextAfter at MECLABS, Austin McCraw talks about the two essential factors that we can influence to produce more leads through our capture forms.

How to Think About Email Capture Forms Like a Customer was originally posted by Video And Blog Marketing

Blurring the Line Between CDN and CMS

Cloudflare recently announced that they’re launching a new feature, called “Cloudflare Workers”. It provides the ability for anybody who’s using Cloudflare as a CDN to write arbitrary JavaScript (based on the standard Service Worker API), which runs on Cloudflare’s edge nodes.

In plain English, you’ll be able to write code which changes the content, headers, look, feel and behaviour of your pages via the Cloudflare CDN. You can do this without making development changes on your servers, and without having to integrate into existing site logic.

If you’re familiar with JavaScript, you can just log into Cloudflare, and start writing logic which runs on top of your server output.

Why is this helpful?

As SEOs, we frequently work with sites which need technical improvements or changes. But development queues are often slow, resources restricted, and website platforms complex to change. It’s hard to get things changed or added.

So many of us have grown comfortable with using workarounds like Google Tag Manager to implement SEO changes – like fixing broken canonical URL tags, or adding robots directives to pages – and hoping that Google respects or understands the conflicting signals we send when we mix on-page and JavaScript-based rules.

But whilst Google professes to be capable of crawling, indexing and understanding JavaScript content and websites, all of the research suggests that they get it wrong as often as they get it right.

Cloudflare’s announcement is significant because, unlike tag management platforms, the alterations are made server-side, before the page is sent to the user – Google only sees the final, altered code and content. There’s no messy JavaScript in the browser, no cloaking, and no conflicting logic.

Service workers on the edge

Cloudflare, like other CDNs, has servers all over the world. When users request a URL on your website, they’re automatically routed to the nearest geographic ‘edge node’, so that users access the site via a fast, local connection. This is pretty standard stuff.

What’s new, however, is that you can now write code which runs at those edge nodes, which allows fine-grained control over how the page is presented to the end user based on their location, or using any logic you care to specify.

With full control over the response from the CDN, it’s possible to write scripts which change title tags, alter canonical URLs, redirect the user, change HTTP headers, or which add completely new functionality; you can adapt, change, delete, build upon or build around anything in the content which is returned from the server.
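
To make that tangible, here’s a rough sketch of what such a Worker might look like, using the standard Service Worker fetch-event pattern. This isn’t Cloudflare’s own sample code, and the page, canonical URL and title below are made-up placeholders:

    // Intercept every request at the edge, rewrite the response body,
    // and return the altered page to the user (and to Googlebot).
    addEventListener('fetch', event => {
      event.respondWith(handleRequest(event.request))
    })

    async function handleRequest(request) {
      // Fetch the original page from the origin server
      const response = await fetch(request)
      let html = await response.text()

      // Hypothetical edits: fix a broken canonical tag and sharpen the title
      html = html
        .replace('<link rel="canonical" href="http://example.com/page">',
                 '<link rel="canonical" href="https://example.com/page">')
        .replace('<title>Home</title>',
                 '<title>Example.com | Widgets and Widget Accessories</title>')

      // Pass the original status and headers through with the new body
      return new Response(html, {
        status: response.status,
        statusText: response.statusText,
        headers: response.headers
      })
    }

A production version would, at a minimum, check the content type before rewriting the body; the point here is simply how little code stands between you and the final HTML that Google sees.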

It’s worth noting that other platforms, like AWS, already launched something like this in July 2017. The concept of making changes at the edge isn’t completely new, but AWS uses a different approach and technology stack.

Specifically, AWS requires users to write functions in Node.js (a common server-side JavaScript runtime), using a proprietary approach to how requests and responses are handled. This comes with some advantages (like being able to use some Node.js libraries), but it locks you into a very specific approach.

Cloudflare’s solution is based on the Service Worker API (as opposed to Node.js), which might look like a more future-proof approach.

Service workers are the current technology of choice for progressive web apps (PWAs), managing structured markup, and playing with new and emerging formats as Google (and the wider web) moves from favouring traditional websites to embracing more app-like experiences. That makes them a good skill set to learn and use, with the potential to recycle existing code and solutions from elsewhere in your ecosystem.

That PWAs look likely to be the next (arguably, the current) big thing means that service workers aren’t going anywhere anytime soon, but Node.js might just be the current flavour of the month.

Getting hands-on

Cloudflare provides a sandbox for you to test and visualise changes on any website, though it’s unclear whether this is part of their launch marketing or something which will be around for the long-term (or a component of the editor/deployment system itself).

That’s a lot of power to play with, and I was keen to explore what it looks like in practice.

It took me just a few minutes to modify one of the scripts on their announcement page to add the word ‘awesome’ (in a pleasing shade of orange) to Distilled’s homepage. You can check out the code here.

Whilst this is hugely powerful, it doesn’t come without risks and drawbacks. For a start, you’ll need to have some sharp JavaScript skills to write any rules, and you’re going to have to do it without any external supporting libraries or frameworks (like jQuery).

Service workers can be complex to work with, too. For example, all of your changes run asynchronously and in parallel. That makes things lightning fast, but it means that complex logic which relies on specific ordering or dependencies might be challenging to write and maintain.

And with all of this, there’s also no nice WYSIWYG interface, and no guides or tutorials (other than general JS or service worker questions on StackOverflow). You’ll be flying by the seat of your pants, spending most of your time trying to work out why your code doesn’t work. And if you need to turn to your developers for help, you’re back to the initial problem – they’re busy, they have other priorities, and you’re fighting for resources.

A meta CMS is not a toy

As we increasingly find ourselves turning to workarounds for long development cycles, issues which “can’t be fixed”, and resolving technical challenges, it’s tempting to see solutions like Google Tag Manager and Cloudflare Workers as viable solutions.

If we can’t get the thing fixed, we can patch over it with a temporary solution which we can deploy ‘higher up the stack’ (a level ‘above’/before the CMS), and perhaps reprioritise and revisit the actual problem at a later date.

You can fix your broken redirects. You can migrate to HTTPS and HTTP/2. You can work through all those minor template errors which the development team will never get to.

But as this way of working becomes a habit, it’s not unusual to find that the solutions we’re using (whether it’s Google Tag Manager, Cloudflare, or our own ODN) take on the characteristics of ‘Meta CMSs’: systems which increasingly override our templates, content and page logic, and which use CMS-like logic to determine what the end user sees.

Over time, we build up more and more rules and replacements, until we find that there’s a blurring of lines between which bits of our website and content we manage in each platform.

This creates a bunch of risks and challenges, such as:

  • What happens when the underlying code changes, or when rules conflict?
    If you’re using a tag manager or CDN to layer changes ‘on top’ of HTML code and pages, what happens when developers make changes to the underlying site logic?

    More often than not, the rules you’ve defined to layer your changes break, with potentially disastrous consequences. And when you have multiple rules with conflicting directives, how do you manage which one wins?

  • How do you know what does what?
    Writing rules in raw JavaScript doesn’t make for easily readable, at-a-glance understanding of what’s being altered.

    When you’ve got lots of rules or particularly complex scripts, you’ll need a logging or documentation process to provide human-friendly overviews of how all of the moving parts work and interact.

  • Who logs what’s where?
    If conflicts arise, or if you want to update or make new changes, you’ll need to edit or build on top of your existing systems. But how do you know which systems – your CMS or your meta CMS – are controlling which bits of the templates, content and pages you want to modify?

    You’ve got rules and logic in multiple places, and it’s a headache keeping track.

    When the CEO asks why the page he’s looking at is broken, how do you begin to work out why, and where, things have gone wrong?

  • How do you do QA and testing?
    Unless your systems provide an easy way to preview changes, and allow you to expose testing URLs for the purposes of QA, browser testing and similar, you’ve got a system with a lot of power and very little quality control. At the moment, it doesn’t look like Cloudflare supports this.

  • How do you manage access and versioning?
    As your rules change, evolve and layer over time, you’ll need a way of managing version control, change logging, and access/permissions. It’s unclear if, or how, Cloudflare will address this at the moment, but the rest of their ecosystem is generally lacking in this regard.

  • How do you prevent accidental exposure/caching/PII etc?
    When you have full access to every piece of data flowing to or from the server, you can very easily do things which you probably shouldn’t – even accidentally. It doesn’t take much to inadvertently store, save, or expose private user information, credit card transaction details, and other sensitive content.

    With great power comes great responsibility, and just writing-some-javascript can have unintended consequences.

In general then, relying overly on your CDN as a meta CMS feels like a risky solution. It’s good for patching over problems, but it’s going to cause operational and organisational headaches.

That’s not to say that it’s not a useful tool, though. If you’re already on Cloudflare, and you have complex challenges which you can resolve as a one-off fix using Cloudflare Workers, then it’s a great way to bypass the issue and get some easy wins.

Alternatively, if you need to execute geographically specific content, caching or redirect logic (at the closest local edge node to the user), then this is a really great tool – there are definitely use cases around geographically/legally restricted content where this is the perfect tool for the job.

Otherwise, it feels like trying to fix the problem is almost always going to be the better solution. Even if your developers are slow, you’re better off addressing the underlying issues at their source than patching on layers of (potentially unstable) fixes over the top.

Sometimes, Cloudflare Workers will be an elegant solution – more often than not, you should try to fix things the old-fashioned way.

ODN as a meta CMS

Except, there may be an exception to the rule.

If you could have all of the advantages of a meta CMS, but with provisions for avoiding all of the pitfalls I’ve identified – access and version control, intuitive interfaces, secure testing processes, and documentation – you could solve all of your technical SEO challenges overnight, and they’d stay solved.

And whilst I want to stress that I’m not a sales guy, we have a solution.

Our ‘Optimisation Delivery Network’ product (Distilled ODN for short) does all of this, with none of the disadvantages we’ve explored.

We built, and market, our platform as an SEO split-testing solution (and it’s a uniquely awesome way to measure the effectiveness of on-page SEO changes at scale), but more interestingly for us, it’s essentially a grown-up meta CMS.

It works by making structured changes to pages, between the request to the server and the point where the page is delivered back to the user. It can do everything that Google Tag Manager or Cloudflare can do to your pages, headers, content and response behaviour.

And it has a friendly user interface. It’s enterprise-grade, scalable and safe, and it answers all of the other challenges we’ve explored.

We have clients who rely on ODN for A/B testing their organic search traffic and pages, but many of these also use the platform to just fix stuff. Their marketing teams can log in, define rules and conditions, and fix issues which it’d typically take months (sometimes years) for development teams to address.

So whilst ODN still isn’t a perfect fix – if you’re in need of a meta CMS then something has already gone wrong upstream – it’s at least a viable, mature and sophisticated way of bypassing clunky development processes and delivering quick, tactical wins.

I expect we’ll see much more movement in the meta CMS market in the next year or so, especially as there are now multiple players in the space (including Amazon!); but how viable their products will be – if they don’t have usable interfaces and account for organisational/operational challenges – is yet to be seen.

In the meantime, you should have a play with Cloudflare’s sandbox, and if you want more firepower and a stronger safety net, get in touch with us for a Distilled ODN demo.

Blurring the Line Between CDN and CMS was originally posted by Video And Blog Marketing

Black Friday SEO Advice to Get More Sales in 2017


Black Friday is just around the corner, and for many business owners (small and large) it’s the most profitable time of year. According to TechRadar, a whopping $3.34 billion was spent over last year’s Black Friday and Cyber Monday. Ready to get your piece of the pie this November? It starts, and ends, with optimizing your website. Learn the most effective SEO tips to strengthen your brand, achieve higher sales and surpass your ecommerce dreams!

First Things First: Cheat Your Way to Sales Success (With a Checklist)

It’s not really cheating—in fact, it’s a strategy to blow your competition out of the water so that you reach the true ecommerce sales potential that you deserve. While Black Friday is the busiest shopping day of the year, it can also be the most stressful. How do you guarantee your website won’t crash under high-volume traffic? How do you best increase the amount each customer spends? With this handy checklist, of course! Cross off each task one by one, and you’ll skyrocket your sales just in time for Black Friday:

  • Set up an autoresponder series for abandoned cart visitors. If you already use an automated email marketing series to keep your customers engaged, make sure to add a ‘Black Friday’ campaign. All you have to do is capture those viewers who added your product to their cart (but then left your site). Send them a reminder email with a special, additional 10% off coupon if they come back by midnight to complete their checkout.
  • Create a pop-up landing page with a discount code. The average viewer takes only three seconds to decide whether to stay on your site. Instantly grab their attention with a compelling pop-up landing page, placed front and center on your homepage. Want to create a greater sense of urgency? Design a landing page that includes a timer counting down the hours, minutes and seconds before your deal expires (see the sketch after this list).
  • Create a banner to display your too-good-to-pass-up Black Friday deal. Banners are easy to make (just head over to canva.com to create a free one). Use a bold heading and font that spell out exactly what your deal is. The more instantly visible your Black Friday deals are, the more likely viewers are to buy.
  • Be absolutely sure your store can deal with the demand. Don’t underestimate the power of your product. When Black Friday comes, consumers (old and new) will want to take advantage of your deals. Check ahead of time with your suppliers, making sure they can handle your surplus of orders.
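
For the countdown idea above, a bare-bones JavaScript sketch might look like this; the deadline and the element ID are placeholders, so adjust them to your own page:

    // Assumes the landing page contains <span id="deal-countdown"></span>
    const dealEnds = new Date('2017-11-24T23:59:59');

    function updateCountdown() {
      const msLeft = dealEnds - new Date();
      if (msLeft <= 0) {
        document.getElementById('deal-countdown').textContent = 'Deal expired';
        return;
      }
      const hours = Math.floor(msLeft / 3600000);
      const minutes = Math.floor((msLeft % 3600000) / 60000);
      const seconds = Math.floor((msLeft % 60000) / 1000);
      document.getElementById('deal-countdown').textContent =
        hours + 'h ' + minutes + 'm ' + seconds + 's left';
      setTimeout(updateCountdown, 1000);   // refresh once per second
    }

    updateCountdown();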

Now that you have a website equipped for high sales and a low drop-off rate, it’s time to optimize your online business to reach the masses in record time. Apply these five tips to your site ASAP, and you’ll become the epitome of #salesgoals.

Create SEO-Focused Gift Guides

Forget struggling to optimize your sales. One highly effective shortcut is suggesting gift guides via pop-ups as shoppers go through checkout. According to Google’s trend report from 2016, over 70 percent of online consumers start shopping without having a particular item in mind that they want to buy! For example, if you sell women’s clothing, put together a few gift guides that cover related interests, such as fashion bracelets, necklaces and a subscription to a women’s magazine.

Utilize YouTube

Make a few ‘Black Friday Gift Idea’ videos on YouTube and reach even more potential customers (a whopping 68 percent of consumers turn to YouTube when they don’t know what they want to buy!). Just make sure you follow YouTube SEO best practices: keep keyword use natural (around 2 percent density or less) and keep your video description concise, since YouTube caps descriptions at 5,000 characters (roughly 700 words).

Add Popular Keywords for Holiday Gifts in Your Marketing Copy

Black Friday is the perfect time to utilize SEO keywords across all of your marketing copy, including emails, landing pages, PPC ads, blog posts and product descriptions. Start off your search with ‘Black Friday’, and make sure to also use ‘related searches’ for more keyword ideas.

Know Your Buzz Words

Once you have your list of Black Friday keywords, make sure that you add holiday specific buzz words to your marketing copy. For example, it’s not enough to simply describe what your product is with ‘Black Friday’ keywords. You have to hook your audience with additional words like ‘Best,’ ‘Incredible,’ ‘Rare,’ or ‘One of a Kind.’

Pique their curiosity with buzz words, so that whether you’re selling champagne-filled chocolates or cashmere sweaters, you’ve hooked them and compelled them to click, buy, and become a customer for years to come.

Force the Masses to Discover Your Sale

While content has been and always will be king, never underestimate the power of a really good image. Many businesses use images in their blog posts or web content, but fail to maximize their exposure by forgetting about alt tags. This holiday season (as well as any other day of the year), remember that your audience is visually inspired. Add stock photos of a picturesque setting, or of a child receiving the perfect gift on Christmas morning. Find images that evoke universal desires we all have: a sense of joy, wonder and even magic. Then, take advantage of alt tags so that visitors can find you, and inevitably, the perfect Black Friday deal they just can’t pass up.
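
As a simple illustration (the filename and product below are made up), a descriptive alt attribute might look like this:

    <img src="cashmere-sweater-gift.jpg"
         alt="Black Friday deal: 40% off a gift-wrapped red cashmere sweater">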

Black Friday SEO Advice to Get More Sales in 2017 was originally posted by Video And Blog Marketing