Follow Distilled | SEO Expert Blog on Feedspot



This blog post is for you if you need a structured way to do competitor analysis that allows you to focus and prioritise tasks without missing out on any important information. It’s also for you if you’ve never done a competitor analysis before or don’t know why you might need to do one.

Some of the reasons you may want to do a competitor analysis are:

  • You used to be the winner in organic search results, but you aren't anymore
  • You're expanding into a new market (geographically or with a category/service/product)
  • Competitors always outrank you
  • You’re ahead of the game but want to discover why certain competitors are growing

I came up with this method because last year one of our clients asked us to do a competitor analysis but with a very specific question in mind: they were interested only in the Japanese competition, and more specifically they wanted to get this done on their Japanese website. I do not speak Japanese! Nor does anyone else at Distilled (yet!). So, I had to focus on data, numbers, and graphs to get the answers I wanted.

I think of competitor analysis as a set of tasks divided into these two phases:

  1. Discover problems
  2. Find Solutions

The part about finding solutions will vary depending on the problems you discover, which is why I can’t include it here. The steps in this method are about discovering problems, and as in every SEO project, there are multiple tasks you may want to include as part of the analysis. I built a list of steps that can be executed regardless of the site’s industry, language, or size; they are listed in a specific order to make sure you don’t get to the end of the puzzle with missing pieces. I’ll go over why and how to do each in detail later on, but here are the things I include in EVERY competitor analysis:

  1. Track keywords targeted on a rank tracker
  2. Analyse ranking results
  3. Backlink analysis (this is nowhere close to a backlink audit)
  4. Topic searches and trends

Let’s dive right in.

Track keywords targeted on a rank tracker

For this step, you will need a list of the keywords targeted. If you are doing this for a site in a language you don’t speak, as was the case for me, you may have to ask your client, or someone on your team may have this list. If you’re expanding into a new market and are not sure which keywords to target, you may want to use a keyword research tool such as Ahrefs, Moz, or Semrush to research the keywords people use to look for the product or service you are offering (assuming you speak the site’s language).

If you’ve never used a rank tracker before, or cannot pay for one, here is a useful post on how to create your own ranking monitor. At Distilled we use Stat and during this task, I discovered that it can track keywords in different languages, including Japanese, which was great news!

Whenever possible, when I track keywords I categorise them in one or more of the following ways:

  • The page/category/site section they are targeting
  • The topic they belong to
  • Whether they are informational or transactional keywords

In my specific case, due to the language barrier, I couldn’t categorise them. A word of warning - if you are in the same position and are tempted to use Google Translate - don’t! It translates some keywords fairly accurately, but it doesn’t have the level of nuance we need for informational keyword targeting.

Even though I couldn’t categorise my keywords, I could exclude the branded ones and still obtain a really clear picture of ranking results for the site as a whole. The next step will cover how to analyse results.
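As an aside, excluding branded keywords is something you can do even in a language you don’t read, as long as someone gives you the brand terms. A minimal sketch (the brand terms and keywords below are invented for illustration):

```python
# Filter branded keywords out of a tracked-keyword list.
# The brand variants below are hypothetical examples.
BRAND_TERMS = {"acme", "acme store"}

def is_branded(keyword: str) -> bool:
    """True if any brand term appears anywhere in the keyword."""
    kw = keyword.lower()
    return any(term in kw for term in BRAND_TERMS)

keywords = ["acme shoes", "buy running shoes", "acme store hours", "best trail shoes"]
non_branded = [kw for kw in keywords if not is_branded(kw)]
# non_branded -> ["buy running shoes", "best trail shoes"]
```

In practice you would read the tracked keywords from your rank tracker's export rather than a hard-coded list.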

Analyse ranking results

With this step I always have in mind two specific goals:

  • Discover all organic ranking competitors: you or your client may have a specific list of sites they think they are or should be ranking against. However, who your client is actually up against on organic results may include sites that were not considered before. Your search competitors are not necessarily your business competitors! This may be true for transactional and informational keywords depending on the content overlap with sites that do not belong to your industry but are still providing relevant information for your customers.
  • Analyse how much traffic competitors are getting: for this I used a simple calculation multiplying search volume by CTR (click-through-rate) based on the position at which each URL is ranking for that keyword.
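That traffic estimate is easy to reproduce. A minimal sketch, using an illustrative CTR curve (the curve you actually use should come from your own click data or a published CTR study):

```python
# Estimated monthly traffic = search volume * CTR at the ranking position.
# This CTR curve is illustrative only; substitute one from your own data.
CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                   6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def estimated_traffic(search_volume: int, position: int) -> float:
    """Estimate clicks for a keyword given its volume and ranking position."""
    return search_volume * CTR_BY_POSITION.get(position, 0.01)

# A keyword with 1,000 monthly searches ranking in position 3:
print(estimated_traffic(1000, 3))  # -> 100.0
```

Summing this estimate over every keyword/URL pair gives you the per-competitor traffic totals used in the comparison below.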

When I downloaded the traffic data from the rank tracker, I compared the number of keywords each competitor was ranking for (within the first 20 results) and how much traffic they were getting. When I plotted the results on a graph, I obtained an output like this:

Above: Comparison of number of times each competitor ranks and amount of traffic they get

What I discovered with this graph was the following:

  1. Competitors A&B had little-to-nothing to do with the industry. This is also good to know at the beginning of the analysis because now:
    1. You know which organic ranking competitors to actually focus on
    2. You might discover players in this list of competitors that you hadn’t thought of
    3. You may want to discuss with your client some of the keywords targeted (language permitting!)
  2. Competitor H was my client and, needless to say, the worst performing. This opened up a number of questions and possibilities that are good to discover at the beginning of every competitor analysis. For example, are pages not optimised well enough for the keywords targeted? Are pages optimised, but competitors are using additional features, such as structured data, to rank better? This is just the tip of the iceberg.

Keyword-to-URL mapping

This stage is also where you can do a keyword-to-URL mapping, matching URLs with the keywords they are ranking for. From the URL, you should be able to tell what the page is about. If the URL is in a language you don’t understand, you can check its hreflang annotations to find the English version of it.

(Tip: if you actually need to check hreflang, scanning the list of URLs with a crawler such as Screaming Frog will easily extract the hreflang values for you.)
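If you’d rather not fire up a crawler, the same check can be sketched with Python’s standard-library HTML parser. This is a minimal example run against an invented HTML snippet (the example.com URLs are placeholders):

```python
from html.parser import HTMLParser

class HreflangParser(HTMLParser):
    """Collects hreflang -> href pairs from <link rel="alternate"> tags."""
    def __init__(self):
        super().__init__()
        self.alternates = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate" and "hreflang" in a:
            self.alternates[a["hreflang"]] = a.get("href")

html = '''<head>
<link rel="alternate" hreflang="ja" href="https://example.com/ja/page" />
<link rel="alternate" hreflang="en" href="https://example.com/en/page" />
</head>'''

parser = HreflangParser()
parser.feed(html)
print(parser.alternates["en"])  # -> https://example.com/en/page
```

Given a Japanese URL's source, the "en" entry points you straight at the English equivalent.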

When matching keywords to URLs, one of the most important things to think about is whether URLs could and should rank for the keywords they are targeting and ranking for. Think about this:

  • What is the search intent for that keyword?
  • Who has a better page result for that intent?

Backlink Analysis

With this step, I wanted to compare the backlink profile quality among competitors and discover how my client could become more suitable for new high-quality backlinks. I know you may be thinking that comparing domain authority could be enough to know which domain is stronger. However, Tom Capper, a senior consultant here at Distilled, recently wrote a blog post explaining how domain authority is not the right metric for reporting on link building. This is because Google will not rank your pages based on the quality of the domain, but based on the quality of the individual page.

The main goal with this step is to find opportunities: high-quality pages linking to your competitors more often than to you or your client. I’ve written a blog post explaining how to analyse your competitors' backlinks. By the end of this step, you should have a list of:

  • Quality domains to target to obtain new backlinks - if they link to your competitors they are interested in the industry and likely to link to your client’s site as well
  • Pages that should be improved to make them more suitable for new backlinks
  • Topics to target when creating new content
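The first of those lists is essentially a set difference: domains that link to competitors but not to your client. A toy sketch (the domain names are made-up placeholders; in practice these sets come from a backlink tool export):

```python
# Domains linking to competitors but not (yet) to your client.
# These domain lists are invented; real ones come from a backlink tool export.
client_linking_domains = {"blog-a.com", "news-b.com"}
competitor_linking_domains = {"blog-a.com", "news-b.com", "mag-c.com", "review-d.com"}

# Link-gap opportunities: they link to the industry, just not to you.
opportunities = sorted(competitor_linking_domains - client_linking_domains)
print(opportunities)  # -> ['mag-c.com', 'review-d.com']
```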

So far you’ve collected a lot of information about your client and its competitors. It’s important to start a competitor analysis with these steps because they allow you to get a full picture of the most important competitors and what they are doing better. This is what will lead you to find solutions in the second phase of the competitor analysis.

For the last step of finding problems, I list topic searches and trends, because that’s another check for discovering problems before you can find solutions. It’s also another step where the language is not a barrier.

Topic searches and trends

At this point, you should have a clear idea of:

  • Who the most important competitors are
  • Where they are stronger: topics they target, categories heavily linked externally, site sections with better content

When we are making decisions based on search volumes, it is important that we take into account the trend in that search volume. Is this topic something which is popular year round? Is it only popular in the month we happened to do this investigation? Was there a massive spike in interest a few months ago, which is now rapidly declining?

I usually check trends over a 12-month period so that I can find out about seasonality. Seasonal content should be built strategically a month or two before the upward trend begins, to make sure Google has enough time to crawl and index it. Examples of seasonal topics include:

  • “What to buy on Valentine’s Day?”
  • “What to write on Mother’s Day card?”
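A quick way to flag that kind of seasonality programmatically is to compare each month's interest against the yearly average. A rough sketch (the interest values are invented, in the 0-100 style Google Trends uses):

```python
# Flag months whose search interest is well above the 12-month average.
# Interest values are invented for illustration.
monthly_interest = {
    "Jan": 20, "Feb": 95, "Mar": 25, "Apr": 20, "May": 30, "Jun": 20,
    "Jul": 20, "Aug": 20, "Sep": 20, "Oct": 20, "Nov": 25, "Dec": 30,
}

average = sum(monthly_interest.values()) / len(monthly_interest)
spikes = [month for month, v in monthly_interest.items() if v > 2 * average]
print(spikes)  # a Valentine's-style topic shows up as ['Feb']
```

A topic with no month passing that threshold is a candidate for the evergreen, stable-interest content discussed next.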

Ideally, you’d want to find evergreen content with a stable interest trend over time. The trend may also look like it had a spike at some point and then levelled down, but the interest remained high and is still relevant:

Google search trend over 12 months within a specific geographic region

A topic like this could be something like “electric vehicles”. This may have had an interest spike when the technology became popular, which then levelled down, but over time the interest remained because it’s a topic and product that people still search for.

Increasing trends are the ideal scenario; however, they are not easy to find, and there is no guarantee the increasing trend will continue:

Google search trend over 12 months within a specific geographic region

Stable, high trends are solid topics to target; however, they may be quite competitive:

Google search trend over 12 months within a specific geographic region

While it’s a good idea to target a topic like this, unless you have the strongest domain out of all your competitors, it’s worth considering long-tail keywords to find a niche audience to target.

By the end of this step you should have:

  • A list of solid topics to target
  • A plan on how to prioritise them based on seasonality and any other information you might have found

Wrapping it all up

You’ve made it to the end of the list and should have a clear picture of the competitors' strengths and the areas where your client can improve. From here you can start finding solutions. For example, if the ranking results you discovered show pages ranking for keywords they should not, page optimisation could be one solution, and keyword research for retargeting could be another. There are many other ways to provide solutions, which I will expand on in my next blog post.

What did you think? Have I missed anything? What else would you include to discover problems in a competitor analysis for a site, whether you speak the language or not? Let me know in the comments below.


Last Friday I had the pleasure of watching John Mueller of Google being interviewed on the BrightonSEO main stage by (Distilled alumna!) Hannah Smith. I found it hugely interesting how different it was from the previous similarly formatted sessions with John I’ve seen - by Aleyda at BrightonSEO previously, and more recently by my colleague Will Critchlow at SearchLove. In this post, I want to get into some of the interesting implications in what John did and, crucially, did not say.

I’m not going to attempt here to cover everything John said exhaustively - if that’s what you’re looking for, I recommend this post by Deepcrawl’s Sam Marsden, or this transcript via Glen Allsopp (from which I’ve extracted below). This will also not be a tactical post - I was listening to this Q&A from the perspective of wanting to learn more about Google, not necessarily what to change in my SEO campaigns on Monday morning.

Looking too closely?

I’m aware of the dangers of reading too much into the minutiae of what John Mueller, Gary Illyes, and crew come out with - especially when he’s talking live and unscripted on stage. Ultimately, as John said himself, it’s his job to establish a flow of information between webmasters and search engineers at Google. There are famously few people, or arguably no people at all, who know the ins and outs of the search algorithm itself, and it is not John’s job to get into it in this depth.

That said, he has been trained, and briefed, and socialised, to say certain things, to not say certain things, to focus on certain areas, and so on. This is where our takeaways can get a little more interesting than the typical, clichéd “Google says X” or “we think Google is lying about Y”. I’d recommend this presentation and deck from Will if you want to read more about that approach, and some past examples.

So, into the meat of it.

1. “We definitely use links to recognize new content”

Hannah: Like I said, this is top tier sites...  Links are still a ranking factor though, right? You still use links as a ranking factor?

John: We still use links. I mean it's not the only ranking factor, so like just focusing on links, I don't think that makes sense at all... But we definitely use links to recognize new content.

Hannah: So if you then got effectively a hole, a very authoritative hole in your link graph... How is that going to affect how links are used as a ranking factor or will it?

John: I dunno, we'll see. I mean it's one of those things also where I see a lot of times the sites that big news sites write about are sites that already have links anyway. So it's rare that we wouldn't be able to find any of that new content. So I don't think everything will fall apart. If that happens or when that happens, but it does make it a little bit harder for us. So it's kind of tricky, but we also have lots of other signals that we look at. So trying to figure out how relevant a page is, is not just based on the links too.

The context here is that Hannah was interested in how much of a challenge it is for Google when large numbers of major editorial sites start adding the “nofollow” attribute to all their external links - which has been a trend of late in the UK, and I suspect elsewhere. If authoritative links are still an important trust factor, does this not weaken that data?

The interesting thing for me here was very much in what John did not say. Hannah asks him fairly directly whether links are a ranking factor, and he evades three times, by discussing the use of links for crawling & discovering content, rather than for establishing a link graph and therefore a trust signal:

  • “We still use links”
  • “We definitely use links to recognize new content”
  • “It’s rare we wouldn’t be able to find any of that new content”

There’s also a fourth example, earlier in the discussion - before the screenshot - where he does the same:

“...being able to find useful content on the web, links kind of play a role in that.”

This is particularly odd as in general, Google is pretty comfortable still discussing links as a ranking factor. Evidently, though, something about this context caused this slightly evasive response. The “it’s not the only ranking factor” response feels like a bit of an evasion too, given that Google essentially refuses to discuss other ranking factors that might establish trust/authority, as opposed to just relevance and baseline quality - see my points below on user signals!

Personally, I also thought this comment was very interesting and somewhat vindicating of my critique of a lot of ranking factor studies:

“...a lot of the times the sites that big news sites write about are sites that already have links anyway”

Yeah, of course - links are correlated with just about any other metric you can imagine, whether it be branded search volume, social shares, click-through rate, whatever.

2. Limited spots on page 1 for transactional sites

Hannah: But thinking about like a more transactional query, for example. Let's just say that you want to buy some contact lenses, how do you know if the results you've ranked first is the right one? If you've done a good job of ranking those results?

John: A lot of times we don't know, because for a lot of these queries there is no objective right or wrong. They're essentially multiple answers that we could say this could make sense to show as the first result. And I think in particular for cases like that, it's useful for us to have those 10 blue links or even 10 results in the search page, where it's really something like we don't completely know what you're looking for. Are you looking for information on these contact lenses? Do you want to buy them? Do you want to compare them? Do you want to buy a specific brand maybe from this-

This is one of those things where I think I could have figured this out from the information I already had, but it clicked into place for me listening to this explanation from John. If John is saying there’s a need to show multiple intents on the first page for even a fairly commercial query, there is an implication that only so many transactional pages can appear.

Given that, in many verticals, there are far more than 10 viable transactional sites, this means that if you drop from being the 3rd best to the 4th best among those, you could drop from, for example, position 5 to position 11. This is particularly important to keep in mind when we’re analysing search results statistically - whether it be in ranking factor studies or forecasting the results of our SEO campaigns, the relationship between the levers we pull and the outputs can be highly non-linear. A small change might move you 6 ranking positions, past sites which have a different intent and totally different metrics when it comes to links, on-page optimisation, or whatever else.

3. User signals as a ranking factor

Hannah: Surely at that point, John, you would start using signals from users, right? You would start looking at which results are clicked through most frequently, would you start looking at stuff like that at that point?

John: I don't think we would use that for direct ranking like that. We use signals like that to analyze the algorithms in general, because across a million different search queries we can figure out like which one tends to be more correct or not, depending on where people click. But for one specific query for like a handful of pages, it can go in so many different directions. It's really-

So, the suggestion here is that user signals - presumably CTR (click-through rates), dwell time, etc. - are used to appraise the algorithm, but not as part of the algorithm. This has been the line from Google for a while, but I found this response far more explicit and clear than John M’s skirting round the subject in the past.

It’s difficult to square this with some past experiments from the likes of Rand Fishkin manipulating rankings with hundreds of people in a conference hall clicking results for specific queries, or real world results I’ve discussed here. In the latter case, we could maybe say that this is similar to Panda - Google has machine learned what on-site attributes go with users finding a site trustworthy, rather than measuring trust & quality directly. That doesn’t explain Rand’s results, though.

Here are a few explanations I think are possible:

  1. Google just does not want to admit to this, because it’d look spammable (whether or not it actually is)
  2. In fact, they use something like “site recent popularity” as part of the algorithm, so, on a technicality, don’t need to call it CTR or user signals
  3. The algorithm is constantly appraising itself, and adjusts in response to a lot of clicks on a result that isn’t p1 - but the ranking factor that gets adjusted is some arbitrary attribute of that site, not the user signal itself

Just to explain what I mean by the third one a little further - imagine if there are three sites ranking for a query, which are sites A, B, & C. At the start, they rank in that order - A, B, C. It just so happens, by coincidence, that site C has the highest word count.

Lots of people suddenly search the query and click on result C. The algorithm is appraising itself based on user signals, for example, cases where people prefer the 3rd place result, so needs to adjust to make this site rank higher. Like any unsupervised machine learning, it finds a way, any way, to fit the desired outcome to the inputs for this query, which in this case is weighting word count more highly as a ranking factor. As such, result C ranks first, and we all claim CTR is the ranking factor. Google can correctly say CTR is not a ranking factor, but in practice, it might as well be.

For me, the third option is the most contrived, but it also fits most easily with my real-world experience; that said, I think either of the other explanations, or even all three, could be true.


I hope you’ve enjoyed my rampant speculation. It’s only fair that you get to join in too: tweet me at @THCapper, or get involved in the comments below.


Distilled is all about effective and accountable search marketing. Part of being effective is being able to gather the data we need to diagnose an issue. For a while, we’ve been using a custom crawler at Distilled to solve technical problems with our clients. Today, we’re making that crawler available to you.

This crawler solves three long-standing pain points for our team:

  1. Unhelpful stock reports. Other crawlers limit us to predefined reports. Sometimes these reports don’t answer our questions. This crawler exports to BigQuery, which lets us stay flexible.
  2. Limited crawl scope. When crawling on your own computer, your crawl is limited by how much RAM you’ve got. Our crawler is so efficient that you’re more likely to run out of time than memory.
  3. Inflexible schema. Other crawlers generally export flattened data into a table. This can make it hard to analyze many-to-many relationships, like hreflang tags. This crawler outputs complete, non-flattened information for each page. With this data, the queries our team runs are limited only by their imaginations.

Our team still uses both local and hosted crawlers every day. We break out this custom crawler when we have a specific question about a large site. If that’s the case, this has proven to be the best solution.

To use the crawler, you’ll need to be familiar with running your computer from the command line. You’ll also need to be comfortable with BigQuery. This blog post will cover only high-level information. The rest is up to you!

This is not an official Distilled product. We are unable to provide support. The software is open-source and governed by an MIT-style license. You may use it for commercial purposes without attribution.

What it is

We’ve imaginatively named the tool crawl. crawl is an efficient and concurrent command-line tool for crawling and understanding websites. It outputs data in a newline-delimited JSON format suitable for use with BigQuery.

Waiting until after the crawl to analyze the data makes both steps more cost-effective: because crawl doesn't try to analyze data as it collects it, crawling is much more efficient. crawl keeps track of only the minimum information necessary to complete the crawl. In practice, a crawl of a 10,000-page site might use ~30 MB of RAM, and crawling 1,000,000 pages might use less than a gigabyte.

Cloud computing promises that you can pay for the computing power you need, when you need it. BigQuery is a magical example of this in action. For many crawl-related tasks, it is almost free. Anyone can upload data and analyze it in seconds.

The structure of that data is essential. With most crawlers that allow data exports, the result is tabular. You get, for instance, one row per page in a CSV. This structure isn’t great for many-to-many relationships of cross-linking within a website. crawl outputs a single row per page, and that row contains nested data about every link, hreflang tag, header field, and more. Here are some example fields to help you visualize this:

Some fields, like Address, have nested data. Address.Full is the full URL of the page. Other fields, like StatusCode, are simply numbers or strings. Finally, there are repeated fields, like Links. These fields can have any number of data points. Links records all links that appear on a page being crawled.
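To make that shape concrete, here is a hand-written example of a single, abbreviated output row; the exact field list comes from the generated schema, and the nested Host and Nofollow fields shown here are illustrative assumptions:

```json
{
  "Address": {"Full": "https://example.com/page", "Host": "example.com"},
  "StatusCode": 200,
  "Links": [
    {"Address": {"Full": "https://example.com/other"}, "Nofollow": false},
    {"Address": {"Full": "https://example.com/contact"}, "Nofollow": true}
  ]
}
```

Each such row is one line in the newline-delimited JSON output, which is exactly what BigQuery expects to ingest.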

So using BigQuery for analysis solves the flexibility problem, and helps solve the resource problem too.

Install with Go

Currently, you must build crawl using Go. This requires Go version 1.10 or later. If you’re not familiar with Go, it’ll be best to lean on someone you know who is willing to help you.

go get -u github.com/benjaminestes/crawl/...

In a well-configured Go installation, this will fetch and build the tool. The binary will be put in your $GOBIN directory. Adding $GOBIN to your $PATH will allow you to call crawl without specifying its location.

Valid commands

USAGE: crawl <command> [-flags] [args]

help
Print this message.

list
Crawl a list of URLs provided on stdin.
The -format={(text)|xml} flag determines the expected input type.

crawl list config.json <url_list.txt >out.txt
crawl list -format=xml config.json <sitemap.xml >out.txt

schema
Print a BigQuery-compatible JSON schema to stdout.

crawl schema >schema.json

sitemap
Recursively requests a sitemap or sitemap index from a URL provided as argument.

crawl sitemap http://www.example.com/sitemap.xml >out.txt

spider
Crawl from the URLs specified in the configuration file.

crawl spider config.json >out.txt

Configuring your crawl

The repository includes an example config.json file. This lists the available options with reasonable default values.

{
    "From": ["http://www.example.com/"],
    "Include": [],
    "Exclude": [],

    "MaxDepth": 3,

    "WaitTime": "100ms",
    "Connections": 20,

    "UserAgent": "Crawler/1.0",
    "RobotsUserAgent": "Crawler",
    "RespectNofollow": true,

    "Header": [
        {"K": "X-ample", "V": "alue"}
    ]
}

Here’s the essential information for these fields:

  • From. An array of fully-qualified URLs from which you want to start crawling. If you are crawling from the home page of a site, this list will have one item in it. Unlike other crawlers you may have used, this choice does not affect the scope of the crawl.
  • Include. An array of regular expressions that a URL must match in order to be crawled. If there is no valid Include expression, all discovered URLs are within scope. Note that meta-characters must be double-escaped. Only meaningful in spider mode.
  • Exclude. An array of regular expressions that filter the URLs to be crawled. Meta-characters must be double-escaped. Only meaningful in spider mode.
  • MaxDepth. Only URLs fewer than MaxDepth links away from the From list will be crawled.
  • WaitTime. Pause time between spawning requests. Approximates crawl rate. For instance, to crawl about 5 URLs per second, set this to "200ms". It uses Go's time parsing rules.
  • Connections. The maximum number of concurrent connections. If the configured value is < 1, it will be set to 1 upon starting the crawl.
  • UserAgent. The user-agent to send with HTTP requests.
  • RobotsUserAgent. The user-agent to test robots.txt rules against.
  • RespectNofollow. If this is true, links with a nofollow attribute will not be included in the crawl.
  • Header. An array of objects with properties "K" and "V", signifying key/value pairs to be added to all requests.

The MaxDepth, Include, and Exclude options only apply to spider mode.

How the scope of a crawl is determined

Given your specified Include and Exclude lists, defined above, here is how the crawler decides whether a URL is in scope:

  1. If the URL matches a rule in the Exclude list, it will not be crawled.
  2. If the URL matches a rule in the Include list, it will be crawled.
  3. If the URL matches neither the Exclude nor Include list, then if the Include list is empty, it will be crawled, but if the Include list is not empty, it will not be crawled.

Note that only one of these cases will apply (as in Go's switch statement, by way of analogy).

Finally, no URLs will be in scope if they are further than MaxDepth links from the From set of URLs.
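The three scope rules can be expressed directly in code. Here is a sketch in Python (the tool itself is written in Go, and the regexes and URLs below are illustrative; the MaxDepth check is left out for brevity):

```python
import re

def in_scope(url: str, include: list, exclude: list) -> bool:
    """Apply the Exclude/Include scope rules in order; first match wins."""
    if any(re.search(p, url) for p in exclude):
        return False                    # rule 1: excluded
    if any(re.search(p, url) for p in include):
        return True                     # rule 2: explicitly included
    return not include                  # rule 3: in scope only if Include is empty

include = [r"^https?://www\.example\.com/blog/"]
exclude = [r"\?page="]

print(in_scope("https://www.example.com/blog/post", include, exclude))    # True
print(in_scope("https://www.example.com/blog/?page=2", include, exclude)) # False
print(in_scope("https://www.example.com/shop/", include, exclude))        # False
```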

Use with BigQuery

Run crawl schema >schema.json to get a BigQuery-compatible schema definition file. The file is automatically generated (via go generate) from the structure of the result object generated by the crawler, so it should always be up-to-date.

If you find an incompatibility between the output schema file and the data produced from a crawl, please flag as a bug on GitHub.

In general, you’ll save crawl data to a local file and then upload to BigQuery. That involves two commands:

$ crawl spider config.json >output.txt 

$ bq load --source_format=NEWLINE_DELIMITED_JSON dataset.table output.txt schema.json

Crawl files can be large, and it is convenient to upload them directly to Google Cloud Storage without storing them locally. This can be done by piping the output of crawl to gsutil:

$ crawl spider config.json | gsutil cp - gs://my-bucket/crawl-data.txt

$ bq load --source_format=NEWLINE_DELIMITED_JSON dataset.table gs://my-bucket/crawl-data.txt schema.json

Analyzing your data

Once you’ve got your data into BigQuery, you can take any approach to analysis you want. You can see how to do interactive analysis in the example notebook.

In particular, take a look at how the nested and repeated data fields are used. With them, it’s possible to generate reports on internal linking, canonicalization, and hreflang reciprocation.
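As a flavour of what such a report involves, here is a toy hreflang-reciprocation check run in plain Python over two simplified crawl rows (in practice you would express this as a BigQuery query over the nested fields; the field names below are simplified assumptions, not the crawler's actual schema):

```python
# Toy hreflang-reciprocation check on simplified crawl rows.
# Real rows come from BigQuery; these field names are simplified assumptions.
rows = [
    {"url": "https://example.com/en/", "hreflang": {"ja": "https://example.com/ja/"}},
    {"url": "https://example.com/ja/", "hreflang": {}},  # missing return tag
]

# Map each crawled URL to the set of URLs it points at via hreflang.
targets = {row["url"]: set(row["hreflang"].values()) for row in rows}

# A pair is non-reciprocal if the target page doesn't point back.
non_reciprocal = [
    (row["url"], target)
    for row in rows
    for target in row["hreflang"].values()
    if row["url"] not in targets.get(target, set())
]
print(non_reciprocal)  # -> [('https://example.com/en/', 'https://example.com/ja/')]
```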

Bugs, errors, contributions

All reports, requests, and contributions are welcome. Please handle them through the GitHub repository. Thank you!

This is not a Distilled product. We are unable to provide support. The software is open-source and governed by an MIT-style license. You can use it for commercial purposes without attribution.


You’ve done your keyword research, your site architecture is clear and easy to navigate, and you’re giving users really obvious signals about how and why they should convert. But for some reason, conversion rates are the lowest they’ve ever been, and your rankings in Google are getting worse and worse.

You have two things in the back of your mind. First, recently a customer told your support team that the site was very slow to load. Second, Google has said that it is using site speed as part of how rankings are calculated.

It’s a common issue, and one of the biggest problems with site speed is that it’s so hard to prove it’s making a difference. We often have little-to-no power to impact site speed (apart from sacrificing those juicy tracking snippets and all that content we fought so hard to add in the first place). Even worse - some fundamental speed improvements can be a huge undertaking, regardless of the size of your dev team, so you need a really strong case to get changes made.

Sure, Google has the site speed impact calculator, which gives an estimate of how much revenue you could be losing by loading more slowly, and if that gives you enough to make your case - great! Crack on. Chances are, though, that isn’t enough. A person could raise all kinds of objections, for instance:

  1. That’s not real-world data

    1. That tool is trying to access the site from one place in the world, our users live elsewhere so it will load faster for them
    2. We have no idea how the tool is trying to load our site, our users are using browsers to access our content, they will see different behaviour
  2. That tool doesn’t know our industry
  3. The site seems pretty fast to me
  4. The ranking/conversion/money problems started over the last few months - there’s no evidence that site speed got worse over that time.

Tools like webpagetest.org are fantastic but are usually constrained to accessing your site from a handful of locations

Pretty much any site speed checker will run into some combination of the above objections. Say we use webpagetest.org (which wouldn’t be a bad choice): when we give it a URL, an automated system accesses our site, tests how long it takes to load, and reports back. As I say, not a bad choice, but it’s very hard to test accessing our site from everywhere our users are, using the browsers they are using, getting historic data that was being recorded even when everything was hunky-dory and site speed was far from our minds, and getting comparable data for our competitors.

Or is it?

Enter the Chrome User Experience (CRUX) report

In October 2017 Google released the Chrome User Experience report. The clue is in the name - this is anonymised, domain-by-domain, country-by-country site speed data that Google has been recording through real-life Chrome users since October 2017. The data only includes records from Chrome users who have opted into syncing browser history and have usage statistic reporting enabled; however, many will have this on by default (see Google's post). So this resource offers you real-world data on how fast your site is.

That brings us to the first thing you should know about the CRUX report.

1. What site speed data does the Chrome User Experience report contain?

In the simplest terms, the CRUX report gives recordings of how long it took your webpages to load. But loading isn’t on-off: even if you’re not familiar with web development, you will have noticed that when you ask for a web page, it thinks a bit, some of the content appears, maybe the page shuffles around a bit, and eventually everything falls into place.

Example of a graph showing performance for a site across different metrics. Read on to understand the data and why it’s presented this way.

There are loads of reasons that different parts of that process could be slower, which means that getting recordings for different page load milestones can help us work out what needs work.

Google’s Chrome User Experience report gives readings for a few important stages of webpage load. They have given definitions here, but I’ve also written some out below:

  • First Input Delay

    • This is more experimental: it’s the length of time between a user clicking a button and the site registering the click
    • If this is slow the user might think the screen is frozen
  • First Paint

    • The first time anything is loaded on the page; if this is slow, the user will be left looking at a blank screen
  • First Contentful Paint

    • Similar to first paint, this is the first time any user-visible content is loaded onto the screen (i.e. text or images).
    • As with First Paint, if this is slow the user will be waiting, looking at a blank screen
  • DOM Content Loaded

    • This is when all the HTML has been loaded. According to Google, it doesn’t include CSS and all images, but by and large, once you reach this point the page should be usable - it’s quite an important milestone.
    • If this is slow the user will probably be waiting for content to appear on the page, piece by piece.
  • Onload

    • This is the last milestone and potentially a bit misleading. A page hits Onload when all the initial content has finished loading, which could lead you to believe users will be waiting for Onload. However, many web pages can be quite operational, as the Emperor would say, before Onload. Users might not even notice that the page hasn’t reached Onload.
    • To what extent Onload is a factor in Google ranking calculations is another question but in terms of User Experience I would prioritise the milestones before this.

All of that data is broken down by:

  • Domain (called ‘origin’)
  • Country
  • Device - desktop, tablet, mobile (called ‘client’)
  • Connection speed

So for example, you could see data for just visitors to your site, from Korea, on desktop, with a slow connection speed.

2. How can I access the Chrome User Experience report?

There are two main ways you can access Google’s Chrome user site speed data. The way I strongly recommend is getting it out using BigQuery, either by yourself or with the help of a responsible adult.


If you don’t know what BigQuery is, it’s a way of storing and accessing huge sets of data. You will need to use SQL to get the data out but that doesn’t mean you need to be able to write SQL. This tutorial from Paul Calvano is phenomenal and comes with a bunch of copy-paste code you can use to get some results. When you’re using BigQuery, you’ll ask for certain data, for instance, “give me how fast my domain and these two competitors reach First Contentful Paint”. Then you should be able to save that straight to Google Sheets or a csv file to play around with (also well demonstrated by Paul).
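For a flavour of what such a request looks like, here is a minimal sketch in Python that builds a query string for one origin's First Contentful Paint distribution. The table naming (`chrome-ux-report.country_us.201710`) and the nested `histogram.bin` fields follow Google's published schema for the public dataset, but treat the details as assumptions to check against Paul's tutorial; the origin is a placeholder.

```python
def crux_fcp_query(origin, country="us", yyyymm="201710"):
    """Build a BigQuery SQL string for one origin's First Contentful Paint
    distribution from the public CRUX dataset (schema names assumed from
    Google's documentation - verify before relying on them)."""
    table = "chrome-ux-report.country_{}.{}".format(country, yyyymm)
    return (
        "SELECT bin.start, SUM(bin.density) AS density "
        "FROM `{}`, UNNEST(first_contentful_paint.histogram.bin) AS bin "
        "WHERE origin = '{}' "
        "GROUP BY bin.start ORDER BY bin.start".format(table, origin)
    )

query = crux_fcp_query("https://www.example.com")
```

Paste the resulting string into the BigQuery console and export the result to Google Sheets or CSV, as demonstrated in the tutorial.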


The other, easier option, which I actually recommend against, is the CRUX Data Studio dashboard. On the surface, this is a fantastic way to get site speed data over time. Unfortunately, there are a couple of key gotchas with this dashboard that we need to watch out for. As you can see in the screenshot below, the dashboard will give you a readout of how often your site was Fast, Average, or Slow to reach each loading point. That is actually a pretty effective way to display the data over time for a quick benchmark of performance. One thing to watch out for with Fast, Average, and Slow is that the description of the thresholds for each isn’t quite right.

If you compare the percentages of Fast, Average, and Slow in that report with the data direct from BigQuery, they don’t line up. It’s an understandable documentation slip, but please don’t use those numbers without checking them. I’ve chatted with the team and submitted a bug report on the GitHub for this tool. I’ve also listed the true definitions below, in case you want to use Google’s report despite the compromises, or use the Fast, Average, Slow categorisations in the reports you create (as I say, it’s a good way to present the data). The link to generate one of these reports is g.co/chromeuxdash.

Another issue is that it uses the “all” dataset - meaning data from every country in the world. That means data from US users is going to be influenced by data from Australian users. It’s an understandable choice given the fact that this report is free, easily generated, and probably took a bunch of time to put together, but it’s taking us further away from that real-world data we were looking for. We can be certain that internet speeds in different countries will vary quite a lot (for instance South Korea is well known for having very fast internet speeds) but also that expectations of performance could vary by country as well. You don’t care if your site speed looks better than your competitor because you’re combining countries in a convenient way, you care if your site is fast enough to make you money. By accessing the report through BigQuery we can select data from just the country we’re interested in and get a more accurate view.

The final big problem with the Data Studio dashboard is that it lumps desktop results in with mobile and tablet. That means that, even looking at one site over time, it could look like your site speed has taken a major hit one month just because you happened to have more users on a slower connection that month. It doesn’t matter whether desktop users tend to load your pages faster than mobile, or vice versa - if your site speed dashboard can make it look like your site speed is drastically better or worse because you’ve started a Facebook advertising campaign, that’s not a useful dashboard.

The problems get even worse if you’re trying to compare two domains using this dashboard - one might naturally have more mobile traffic than the other, for example. It’s not a direct comparison and could actually be quite misleading. I’ve included a solution to this in the section below, but it will only work if you’re accessing the data with BigQuery.

Wondering why the Data Studio dashboard reports % of Fast, Average, and Slow, rather than just how long it takes your site to reach a certain load point? Read the next section!

3. Why doesn’t the CRUX report give me one number for load times?

This is important - your website does not have one amount of time that it takes to load a page. I’m not talking about the difference between First Paint and DOM Content Loaded; those numbers will of course be different. I’m talking about the differences within each metric every single time someone accesses a page.

It could take 3 seconds for someone in Tallahassee to reach DOM Content Loaded, and 2 seconds for someone in London. Then another person in London loads the page on a different connection type, and DOM Content Loaded takes 1.5 seconds. Then another person in London loads the page when the server is under more stress, and it takes 4 seconds. The amount of time it takes to load a page looks less like this:


Median result from webpagetest.org

And more like this:

Distribution of load times for different page load milestones

That chart is showing a distribution of load times. Looking at that graph, you could say that 95% of the time, the site is reaching DOM Content Loaded in under 8 seconds. On the other hand, you could look at the peak and say it most commonly loads in around 1.7 seconds. Or you could, for example, see a strange peak at around 5 seconds and realise that something is intermittently going wrong that means the site sometimes takes much longer to load.
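To make that reading concrete: a distribution like this is just a set of (bin, density) pairs, and a claim like "95% under 8 seconds" comes from summing the densities of all bins below a threshold. A sketch with hypothetical values standing in for the chart:

```python
# Hypothetical (bin_start_seconds, density) pairs standing in for the
# DOM Content Loaded distribution in the chart; each density is the
# share of all page loads that fell into that bin.
dcl_bins = [(0.0, 0.05), (0.5, 0.20), (1.0, 0.25), (1.5, 0.20),
            (2.0, 0.10), (3.0, 0.08), (5.0, 0.07), (8.0, 0.05)]

def share_faster_than(bins, threshold_s):
    """Sum the density of every bin that starts below the threshold."""
    return sum(density for start, density in bins if start < threshold_s)

share_under_8s = share_faster_than(dcl_bins, 8.0)  # ~0.95 for this data
```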

So you see, saying “our site loads in X seconds; it used to load in Y seconds” could be useful when you’re trying to deliver a clear number to someone who doesn’t have time to understand the finer points, but it’s important for you to understand that performance isn’t constant, and your site is being judged by what it tends to do, not what it does under sterile testing conditions.

4. What limitations are there in the Chrome User Experience report?

This data is fantastic (in case you hadn’t picked up before, I’m all for it) but there are certain limitations you need to bear in mind.

No raw numbers

The Chrome User Experience report will give us data on any domain contained in the data set. You don’t have to prove you own the site to look it up. That is fantastic data, but it’s also quite understandable that they can’t get away with giving actual numbers. If they did, it would take approximately 2 seconds for an SEO to sum all the numbers together and start getting monthly traffic estimates for all of their competitors.

As a result, all of the data comes as a percentage of the total for the month, expressed in decimals. A good sense check when you’re working with this data is that all of your categories should add up to 1 (or 100%), unless you’re deliberately ignoring some of the data and know the caveats.
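That sense check is easy to automate once you've pulled the data out; a minimal sketch with hypothetical densities:

```python
# Hypothetical density values for one origin/month - they should
# together account for ~100% of recorded page loads.
densities = [0.12, 0.38, 0.27, 0.15, 0.08]

def passes_sense_check(densities, tolerance=0.001):
    """All categories together should sum to 1 (i.e. 100% of loads)."""
    return abs(sum(densities) - 1.0) <= tolerance
```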

Domain-level data only

The data available from BigQuery is domain-level only; we can’t break it down page by page, which means we can’t find the individual pages that load particularly slowly. Once you have confirmed you might have a problem, you could use a tool like Sitebulb to test page load times en masse and get an idea of which pages on your site are the worst culprits.

No data at all when there isn’t much data

There will be some sites which don’t appear in some of the territory data sets, or at all. That’s because Google hasn’t added their data to the dataset, potentially because they don’t get enough traffic.

Losing data for the worst load times

This data set is unlikely to be effective at telling you about very very long load times. If you send a tool like webpagetest.org to a page on your site, it’ll sit and wait until that page has totally finished loading, then it’ll tell you what happened.

When a user accesses a page on your site, there are all kinds of reasons they might not let it load fully. They might see the button they want to click early on and click it before too much has happened; if it’s taking a very long time, they might give up altogether.

This means that the CRUX data is a bit unbalanced - the further we look along the “load time” axis, the less likely it is to include representative data. Fortunately, it’s quite unlikely your site will be returning mostly fast load times and then a bunch of very slow load times. If performance is bad, the whole distribution will likely shift towards the bad end of the scale.

The team at Google have confirmed that if a user doesn’t meet a milestone at all (for instance Onload), the recording for that milestone will be thrown out, but they won’t throw out the readings for every milestone in that load. So, for example, if the user clicks away before Onload, Onload won’t be recorded at all, but if they have reached DOM Content Loaded, that will be recorded.

Combining stats for different devices

As I mentioned above - one problem with the CRUX report is that all of the data is reported as a percentage of all requests.

So for instance, it might report that 10% of requests reached First Paint in 0.1 seconds. The problem with that is that response times are likely different for desktop and mobile - different connection speeds, processor power, probably even different content on the page. But desktop and mobile are lumped together for each domain and in each month, which means that a difference in the proportion of mobile users between domains, or between months, can make site speed look better when it’s actually worse, or vice versa.

This is a problem when we’re accessing the data through BigQuery, as much as it is if we use the auto-generated Data Studio report, but there’s a solution if we’re working with the BigQuery data. This can be a bit of a noodle-boiler so let’s look at a table.

Device Response time (seconds) % of total
Phone 0.1 10
Desktop 0.1 20
Phone 0.2 50
Desktop 0.2 20

In the data above, 10% of total responses were for mobile, and returned a response in 0.1 seconds. 20% of responses were on desktop and returned a response in 0.1 seconds.

If we summed that all together, we would say 30% of the time, our site gave a response in 0.1 seconds. But that’s thrown off by the fact that we’re combining desktop and mobile which will perform differently. Say we decide we are only going to look at desktop responses. If we just remove the mobile data (below), we see that, on desktop, we’re equally likely to give a response at 0.1 and at 0.2 seconds. So actually, for desktop users we have a 50/50 chance. Quite different to the 30% we got when combining the two.

Device Response time (seconds) % of total
Desktop 0.1 20
Desktop 0.2 20

Fortunately, this sense check also provides our solution: we need to calculate each of these percentages as a proportion of the overall volume for that device. While it’s fiddly and a bit mind-bending, it’s quite achievable. Here are the steps:

  1. Get all the data for the domain, for the month, including all devices.
  2. Sum together the total % of responses for each device; if you’re doing this in Excel or Google Sheets, a pivot table will do this for you just fine.
  3. For each row of your original data, divide the % of total by the total amount for that device, e.g. below

Percent by device

Device % of total
Desktop 40
Phone 60

Original data with adjusted volume

Device Response time (seconds) % of total Device % (from table above) Adjusted % of total
Phone 0.1 10 60 10% / 60% = 16.7%
Desktop 0.1 20 40 20% / 40% = 50%
Phone 0.2 50 60 50% / 60% = 83.3%
Desktop 0.2 20 40 20% / 40% = 50%
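The steps above, applied to the example data from the tables, can be sketched in a few lines of Python:

```python
# Example rows from the tables above: % of all responses, devices mixed
rows = [
    {"device": "Phone",   "seconds": 0.1, "pct_of_total": 10.0},
    {"device": "Desktop", "seconds": 0.1, "pct_of_total": 20.0},
    {"device": "Phone",   "seconds": 0.2, "pct_of_total": 50.0},
    {"device": "Desktop", "seconds": 0.2, "pct_of_total": 20.0},
]

# Step 2: total % observed for each device (what a pivot table gives you)
device_totals = {}
for row in rows:
    device_totals[row["device"]] = (
        device_totals.get(row["device"], 0.0) + row["pct_of_total"]
    )

# Step 3: re-express each row as a share of its own device's traffic
for row in rows:
    row["adjusted_pct"] = 100.0 * row["pct_of_total"] / device_totals[row["device"]]
```

Desktop at 0.1 seconds becomes 20% / 40% = 50%, and Phone at 0.1 seconds becomes 10% / 60% = 16.7%, matching the table.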

5. How should I present Chrome User Experience site speed data?

Because none of the milestones in the Chrome User Experience report have one number as an answer, it can be a challenge to visualise more than a small cross section of the data. Here are some visualisation types that I’ve found useful.

% of responses within “Fast”, “Average”, and “Slow” thresholds

As I mention above, the CRUX team have hit on a good way of displaying performance for these milestones over time. The automatic Data Studio dashboard shows the proportion of each category over time, which gives you a way to see whether a slowdown is a result of being Average or Slow more often, for example. Trying to visualise more than one of the milestones on one graph becomes a bit messy, so I’ve found myself splitting out Fast and Average so I can chart multiple milestones on one graph.

In the graph above, it looks like there isn’t a line for First Paint but that’s because the data is almost identical for that and First Contentful Paint

I’ve also used the Fast, Average, and Slow buckets to compare a few different sites during the same time period, to get a competitive overview.

Comparing competitors “Fast” responses by metric

An alternative which Paul Calvano demonstrates so well is histograms. These help you see how distributions break down. The Fast, Average, and Slow bandings can hide some sins, in that movement within those bands will still impact user experience. Histograms can also give you an idea..

Read Full Article

The Distilled Optimization Delivery Network (ODN) is most famous for SEO A/B testing and, more recently, full-funnel testing. But fewer people are familiar with one of its other main features: the ability to act as a meta-CMS and change pretty much anything you want in the HTML of your site, without help from your development team or writing tickets. DistilledODN is platform-independent, sitting between your website servers and website visitors, similar to a Content Delivery Network (CDN), as shown in the diagram below.

This use case for ODN has been popular for many of our enterprise clients who have restrictions on their ability to make on-the-fly changes to their websites for a variety of reasons. A picture (or a gif) is worth a thousand words, so here are 10 common website changes you can make using ODN that you may not be aware of.

We’ve used a variety of websites and brands that use different platforms and technologies to show anyone can make use of this software regardless of your CMS or technology stack.

Before we get started, there is some jargon you will want to understand:

Site section: A site section is the group of pages that we want to make a specific change to

Global rules: These are rules that you want to apply to all pages within a site section as opposed to only a percentage of pages (like you would with an experiment). An example might be something like “Insert self-referencing canonical”. Rules are made up of individual steps.

Steps: These are nested within global rules, and are the steps you have to take to get to the end goal. Some global rules will have only one step; others can have many more.

In the example global rule above, the steps could be something like, “Remove existing canonical”, “Replace with self-referencing canonical”

On-page values: On-page values are constant values that we extract from the pages in the site section. You can use these in steps. So for the above rule, we’d have to create two on-page values: the “existing canonical” and the “path” of the URL we want to add the self-referencing canonical to. An example site where we’ve done this is included below.

The image below shows how these different components interact with each other.

If you’d like a more detailed explanation about any of this stuff, a good place to start is this blog post: what is SEO split-testing.

Now that you’re familiar with the terminology, here are our 10 common website changes made with ODN, with GIFs:

1. Forever 21 – Trailing slash redirect

Having URLs that return a 200 status code for both the trailing slash and non-trailing slash versions can lead to index bloat and duplicate content issues. On Forever21’s homepage, you can see both “/uk/shop” and “/uk/shop/” are 200 pages.

To fix this using ODN, we create a site section that has the homepage entered as the page we want our global rule to apply to.

Then we need to create an on-page value for the page without a trailing slash. In this example, we’ve extracted this value using regex. Having this value defined means that this fix would be easy to apply to a bulk set of URLs on the website if necessary.

Next, we create our global rule. This rule has only one step: redirect the URL in our site section to the one created using the on-page value, {{path_without_trailing_slash}}.
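ODN defines the extraction with its own regex configuration; as a rough equivalent of what {{path_without_trailing_slash}} captures, here is a sketch in Python's regex flavour (the function name is just illustrative):

```python
import re

def path_without_trailing_slash(path):
    # Strip a single trailing slash; the bare root "/" is left untouched
    # because the pattern requires at least one character before the slash.
    return re.sub(r"(.+)/$", r"\1", path)
```

For example, "/uk/shop/" becomes "/uk/shop", while "/uk/shop" passes through unchanged, so the redirect rule only fires where it is needed.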

2. SmartWater Technology – Duplicate home page redirects

Often, websites will have multiple versions of their homepage that return 200 status codes, like when they have both an http:// version and an https:// version, or a www version and a non-www version. This is a problem because it means the authority of your strongest page is split across two URLs. It also means a non-desirable version may rank in search results.

We can see this on SmartWater Technology’s homepage. We can fix this problem by deploying ODN on the non-www version of their site, and creating a site section for the homepage. We only have one page we want to work on in this example, so we don’t need to create any additional on-page values.

We then set up a global rule to redirect the non-www version of the homepage to the www version, which has one step. In the step we select to redirect the URL in our path list (the homepage), to the new destination we’ve entered, https://www.smartwater.com/.

3. Bentley – Adding self-referencing canonicals

As mentioned in the introduction, we can use ODN to insert self-referencing canonicals on a list of pages. We’ve done this with Bentley Motors as an example, which doesn’t have a canonical on their homepage (or any other pages).

We can fix this by setting a global rule with one step to insert this block of HTML after the <title> element:

<link rel="canonical" href="https://www.bentleymotors.com{{path}}" />

We didn’t have to create an on-page value for {{path}}, since it was created by entering the homepage in our path list. This rule will add a self-referencing canonical to any page that we include in our site section.

If we wanted to, we can also use ODN to apply canonicals that aren’t self-referencing by mapping out the pages we want to add canonicals to, with their canonical page as a value created with a csv upload.

4. Patagonia – Fixing soft 404s

Patagonia uses this landing page, which returns a 200 status code, for 404s, rather than a page that returns a genuine 404 status code. The problem with soft 404s such as the one Patagonia uses is that they won’t send the 404 signal to crawlers, even if the content on the page has the 404 message. This means search engines will see this as a real page, preventing the URL you intended to delete from being removed from the index.

To fix this using ODN, I’ve created a site section with the page path /404/. If you have multiple pages that are soft 404s, you can use other methods to define the pages in the site section. For example, you could match on any page that has “Page Not Found” in the title, or, for Patagonia, we could use regex to match on any URL that contains “/404/”.

Once we’ve defined what pages we want in our site section, we create a global rule with one step that changes the status code from 200 to 404.

5. Amazon Jobs – Changing 302s to 301s

When a redirect is truly temporary, using a 302 status code instead of a 301 makes sense; but if you’re not planning on reverting back to the original URL, using a 302 instead of a 301 redirect means you aren’t passing link equity from one URL to the next.

Once again, this fix is simple to deploy using ODN. We have done it with Amazon Jobs in the GIF below. First, we’ve created a site section with the path of the URL we want to change the status code of. I have also changed the response code to match 302 rather than 200, which is the default for ODN.

Again, no need to create an on-page value in this instance. All that’s required is a global rule with one step, to change the status code on those URLs that match what we have in our path list from 302 to 301.

6. Etsy – Changing sitewide links that 30x/404

When you have a sitewide link that returns a 30x or 404 status code, it is not only a potentially frustrating experience for users; it can also have a negative impact on your SEO. If a heavily linked-to page on your site has a 301 redirect, for example, you are preventing it from being passed all the link equity available to it.

To fix this with ODN, we can replace the 301 link with the destination 200 link. We have done this on Etsy’s homepage in the GIF below.

First, we create a site section for the homepage, then a global rule with a step to replace the old blog URL. This step replaces the content of the element we’ve selected using a CSS selector with the HTML in the box.

In this case the CSS selector we have used is “a[href="/blog/uk/?ref=ftr"]”. Using the test feature, we can see this selector grabs the element “<a href="/blog/uk/?ref=ftr"> <span>Etsy blog</span> </a>”. That’s what we are looking to replace.

We then set it to replace the above element with “<a href="https://blog.etsy.com/uk/?ref=ftr"> <span>Etsy blog</span> </a>”, which has the link to the 200 version of Etsy’s blog. Now the footer link goes to the blog.etsy URL rather than the 301 /blog/uk/?ref=ftr URL.  

7. Pixel Eyewear – Adding title tags

Changing title tags is a frequent desire for content creators, as metadata is one of the strongest signals you can send to Google about what your page is about and which keywords you want to target.

Say you worked at Pixel Eyewear, and after some keyword research decided you wanted to target the keyword “computer screen glasses”, rather than simply “computer glasses”. We can use ODN to make that update, and again this rule can easily be set to target a bulk set of pages.

In the path list, we include all the URLs we want this change to apply to. Then we create a global rule to add “Screen” to our page titles. This has one step, where we use the CSS selector to select the title element of the page. We then enter the HTML we want instead.

8. Pixel Eyewear – Adding content to product pages

This is an example of a site section with multiple rules. Say that you worked at Pixel Eyewear and, in addition to adding “Screen” to your page titles, you also wanted to update the descriptions on your product pages - on the same pages included in the previous section.

To do this with ODN, we create a second global rule to edit the product description. This uses a different CSS selector, “div[class="pb-3"]”. You just want the main description to be more descriptive, so you replace the first paragraph of the element, “Meet the most advanced eyewear engineered for the digital world.”, with “Our most popular product, the Capra will have you looking stylish while wearing the most advanced eyewear engineered for the digital world.”

Since there are two global rules in this section, the order you place them in will matter. ODN works from top to bottom, as shown in the diagram in the intro, so it will apply the first global rule and its steps first before moving to the second. If one of your global rules depends on something created in another, you want to be sure that global rule is listed first.

9. Liberty London – Adding meta descriptions

Meta descriptions are an important meta property for enticing users to click through to your webpage from the SERP, but it’s common for website owners not to have them at all, or to be missing them on important pages of their site, as seen with Liberty London on their UK featured page.

We can edit the meta description content with ODN, and insert a description. First, we include the path of the target page in our path list, then create a global rule with a single step that grabs the meta description with a CSS selector. This time we set it to “Set or update the attribute of an element.” The attribute we want to replace is the content, and we want to replace it with the content entered.

This can also be used to add in meta descriptions when they’re missing entirely, or when you want to insert new ones. If you want to apply in bulk, you can upload a CSV that has the desired meta descriptions for each target URL as a value.

10. CamelBak – Removing duplicate content

E-commerce and other websites frequently wind up with duplicate content on their websites, which can lead to drops in traffic and rankings. Faceted navigation is a common culprit. We can see this in action on Camelbak’s website, where parametered URLs like https://international.camelbak.com/en/bottles/bottle-accessories?sortValue=af41b41832b34f02975423ad5ad46b1e return 200 status codes and have no canonical tags.

We’ve fixed this in ODN by adding canonical tags to the non-parameterized URL. First, we add the relevant URL paths to our path list. Then we need to create an on-page value for the non-parameterized version of the URL. This rule uses regex to extract the content of the URL that comes before the “?” character.
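ODN defines this extraction with its own regex configuration; as a rough equivalent of what {{url_without_parameters}} captures, here is a sketch in Python's regex flavour (the function name is just illustrative):

```python
import re

def url_without_parameters(path):
    # Keep everything before the first "?" - the query string is dropped
    return re.match(r"[^?]*", path).group(0)
```

For example, "/en/bottles/bottle-accessories?sortValue=af41…" yields "/en/bottles/bottle-accessories", which is the URL the canonical should point to.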

Once we have this on-page value, we can use it in our global rule. Since there are no canonicals already, this global rule has one step. If there were already canonicals on these pages, self-referencing ones, for example, that still referred to the parameterized URL, then we’d have to remove that canonical before we could add in a new one.

The step to add in the canonical inserts a block of HTML after the <title> element. Then we enter the HTML that we want to be inserted. You can see that this uses the on-page value we created, giving us this string:

<link rel="canonical" href="https://international.camelbak.com{{url_without_parameters}}"/>

Because we’ve used an on-page value, we can put a list of paths for relevant parameterized URLs in our path list, and the rule will insert a canonical to each one’s non-parameterized parent.

This tactic can be adjusted to account for pagination with rel=”prev” and rel=”next” tags and many other variations. Another way to address duplicate content issues with ODN is to redirect unwanted URLs.


These examples are only a selection of the types of fixes ODN can employ for your website. There are many more, in addition to being able to perform SEO A/B testing and full-funnel testing. The ability to create custom values and use CSS selectors means there’s a lot of room for any of these fixes to be customized to meet the needs of your website.

If you work on a website that has a difficult time being able to make these kinds of changes (you’re not the only one), then get in touch to get a free demo of our platform in action on your website.

Read Full Article

I decided to write this post when I saw, in late February, this poll on Twitter:


Twitter polls may not be the most scientific way to research the state of the industry, but it did remind me how common this is - I routinely receive outreach emails or see RFPs & case studies that discuss link-building in terms of Moz's DA (Domain Authority) metric - typically how many links above a certain DA were or could be built.

Before we go any further: I am not objecting to DA as a useful SEO metric. There are plenty of applications it’s perfectly suited to. I'm also not writing here about the recent updates to DA, although I sadly probably do need to clarify that DA is only a third-party metric, which is designed to reflect Google - it does not itself impact rankings.

Rather, it’s the specific use-case, of using DA to measure the success of link-building, that I’m ranting about in this post.

Why do we all use DA?

I think I get why DA has become popular in this space. In my opinion, it has a couple of really big advantages:

  • Timeliness - if a journalist writes an article about your brand or your piece, it’ll take a while for page-level metrics to exist. DA will be available instantly.

  • Critical mass - we’re all really familiar with this metric. A couple of years ago, my colleague Rob Ousbey and I were talking to a prospective client about their site. We’d only been looking at it for about 5 minutes. Rob challenged me to guess the DA, and he did the same. We both got within 5 of the real value. That’s how internalised DA is in the SEO industry - few other metrics come close. Importantly, this has now spread even beyond the SEO industry, too - non SEO-savvy stakeholders can often be expected to be familiar with DA.

So, if in the first week of coverage, you want to report on DA - fine, I guess I’ll forgive you. Similarly, further down the line, if you’re worried your clients will expect to see DA, maybe you can do what we do at Distilled - report it alongside some more useful metrics, and, over time, move expectations in a different direction.

What’s wrong with reporting on DA?

If you’re building links for SEO reasons, then you’re doing it because of PageRank - Google’s original groundbreaking algorithm, which used links as a proxy for the popularity & trustworthiness of a page on the web. (If this is news to you, I recommend this excellent explainer from Majestic’s Dixon Jones.)

Crucially, PageRank works at a page level. Google probably does use some domain-level (or “site”-level) metrics as shortcuts when assessing how a page should rank in its results, but when it comes to passing on value to other pages, via links, Google cares about the strength of a page.

Different pages on a domain can have very different strengths, impacting their ability to rank, and their ability to pass value onwards. This is the exact problem that much of technical SEO is built around. It is not at all simple, and it has significant implications for linkbuilding.
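To make the page-level point concrete, here is a toy sketch of the classic PageRank calculation via power iteration. It is illustrative only - the three-page graph and damping factor are textbook assumptions, and Google's production systems are far more complex - but it shows how a page's strength depends entirely on which pages link to it:

```python
# Toy PageRank via power iteration on a three-page graph (illustrative only).
# Links: A -> B, A -> C, B -> C, C -> A.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
damping = 0.85  # the standard damping factor from the original paper

# Start with an even distribution across all pages.
pr = {page: 1 / len(links) for page in links}

for _ in range(50):  # iterate until the scores settle
    pr = {
        page: (1 - damping) / len(links)
        + damping * sum(pr[q] / len(links[q]) for q in links if page in links[q])
        for page in links
    }
# "C" ends up strongest: it receives links from both "A" and "B".
```

Note that the scores are per page, not per domain - which is exactly why a link from a weak page on a strong domain passes little value.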

Many editorial sites now include backwater sections, where pages may be 5 or more clicks from the site’s homepage, or even unreachable within the internal navigation. This is admittedly an extreme case, but the fact that the page is on a DA 90+ site is now irrelevant - little strength is being conveyed to the linking page, and the link itself is practically worthless.

The cynic in me says this sort of scenario, where it exists, is intentional - editorial sites are taking SEOs for a ride, knowing we will provide them with content (and, in some cases, cash…) in return for something that is cheap for them to give, and does us no good anyway.

Either way, it makes DA look like a pretty daft metric for evaluating your shiny new links. In the words of Russ Jones, who, as the person who leads the ongoing development of DA, probably knows what he’s talking about:


What should I use, then?

Here’s a few potential candidates you could move towards:

  • URL-level Majestic Citation Flow - this is the closest thing left to a third party approximation of PageRank itself.

  • Moz Page Authority (PA) - if you’re used to DA, this might be the easiest transition for you. However, Russ (mentioned above) warns that PA is designed to estimate the ability of a URL to rank, not the equity it will pass on.

  • Linking Root Domains to linking page - arguably, the most valuable links we could build are links from pages that themselves are well linked to (for example, multiple sites are referencing a noteworthy news article, which links to our client). Using this metric would push you towards building that kind of link. It’s also the metric in the Moz suite that Russ recommended for this purpose.

  • Referral traffic through the link - I’ve written before about how the entire purpose of PageRank was to guess who was getting clicks, so why not optimise for what Google is optimising for? The chances are a genuine endorsement, and thus a trustworthy link, is one that is actually sending you traffic. If you use Google Analytics, you can use the Full Referrer secondary dimension in your “Referrals” or “Landing Pages” reports to get deeper insight on this.

For example, here’s the recent referral traffic to my blog post How to Rank for Head Terms:

I still might want to check that these links are followed & remain live, of course!

What about in-house / proprietary / agency metrics?

I’m sure plenty of people are using their own, calculated metrics, to take the best of all worlds. I think there’s merit to this approach, but it’s not one we use at Distilled. This is for two reasons:

  • "Worst of both" - There’s a potential for a calculated metric to use both domain and page-level metrics as part of its formula. The trouble with this is that you get the downsides of both - the measurement-lag of a page-level metric, with the inaccuracy of a domain-level metric.

  • Transparency - Our prospective clients should hopefully trust us, but this is still going to be harder for them to explain up their chain of command than a metric from a recognised third party. Given the inherent difficulties and causation fallacies in actually measuring the usefulness of a link, any formula we produce will internalise our own biases and suspicions.

Strongly, nay, VEHEMENTLY disagree?

Great! Let me know in the comments, or on Twitter ;)

Read Full Article

BEFORE READING: If you’re unfamiliar with JSON-LD, check out this article to learn more about the structured data format and why we prefer it. Overall, JSON-LD is much more flexible and scalable in comparison to microdata. Rather than having to add and edit microdata spread throughout the HTML of a page, you can add and edit JSON-LD in one place, wherever it’s pasted in the HTML.

While compiling recommendations for structured data for a client, many questions came up. After looking through countless articles on schema markup for e-commerce and going knee-deep into Schema.org, I still came up short when trying to find answers to my questions. Whether you’re working on implementing structured data for your own e-commerce site or a client’s site, here are 7 things that will help you along your journey.

1. When in doubt, test it out. Test and discover structured data opportunities with Google’s Structured Data Testing Tool

If you’re unsure whether your structured data is valid or is free from errors, use Google’s Structured Data Testing Tool. It can confirm whether Google sees the markup on a page. If you’re missing any recommended (displayed as “warnings”) or required values (displayed as “errors”) as part of the structured data type, Google will tell you so. Additionally, the tool will report any syntax errors in your code.

Another great aspect of the testing tool is the ability to view the structured data used on competitor sites. This can be great if you’re unsure where to start or what schema types are relevant for your e-commerce pages. You can even learn from their errors and warnings.

2. Add structured data to your site based on templates

Whether you’re the one adding structured data to your e-commerce site or the one making recommendations, it can be overwhelming to think about all the schema markup needed across the many pages of your site. Rather than thinking about this page by page, approach it from a template level: product categories, products, contact, about, and so on. Also include a universal template, that is, structured data that would appear on all pages (such as BreadcrumbList).  

Delivering templated structured data to a client or your team can also aid in communication with developers and make it easier to implement changes.

3. Do you need to add review markup?

We often come across clients that use third-party apps to collect and display product reviews. We get a lot of questions about review markup and whether to include it as part of their product markup. Review markup should always be included in your product markup as it is a recommended field. But do you need to add it yourself? Here’s a visual to help answer that question.

4. Use “reviewBody” to mark up review text in Review

When taking a look at the examples included at the bottom of Schema.org’s Review schema, one example review markup uses “description” and the other uses “reviewBody”. They both appear to be review text. So, which one should you use?

I would recommend using “reviewBody” for review text as it is a property of Review, whereas “description” is a property of Thing. The description for “reviewBody” (“The actual body of the review”) seems to fit review text more closely than “description” (“A description of the item”). Furthermore, when comparing with Google Developers’ guide on review snippets, they used “reviewBody” for the body of a review.
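A minimal Review using “reviewBody” might look like the following - the product, author, and review text here are hypothetical placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "Product",
    "name": "Example Water Bottle"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "5"
  },
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "reviewBody": "Sturdy, leak-proof, and easy to clean."
}
```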

5. Use product markup on product category pages

Category pages can include products, yet they're not quite product pages. About two years ago, Distilled, using an SEO split test conducted on our ODN platform, saw positive results when including Product schema on e-commerce category pages. Unlike product schema on a product page, we omitted links to individual product pages when including the markup on category pages. This is in line with Google’s structured data policy on multiple elements on a page:

“A category page listing several different products (or recipes, videos, or any other type). Each entity should be marked up using the relevant schema.org type, such as schema.org/Product for product category pages. However, if one item is marked, all items should be marked. Also, unless this is a carousel page, the marked items should not link out to separate details pages.”

See the next tip on how to include markup for multiple products on a category page.
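Following that policy, each product listed on a category page would get its own Product markup with no link out to its detail page - something like this per item (the name, image URL, and price are hypothetical):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Running Shoe",
  "image": "https://www.example.com/images/running-shoe.jpg",
  "offers": {
    "@type": "Offer",
    "price": "79.99",
    "priceCurrency": "USD"
  }
}
```

Note the absence of a "url" property pointing at the product's own page, per the policy quoted above.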

6. Use @graph for pages with multiple schema types

Are you using multiple structured data types on a single page? Instead of using numerous <script> tags, use one and place all your structured data types inside of a @graph object.


Before (two <script> tags):

After (using @graph):
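In practice, the “after” version is a single <script type="application/ld+json"> tag whose top-level object holds an array of types under "@graph" - for example (the types and values here are illustrative):

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "name": "Example Store",
      "url": "https://www.example.com"
    },
    {
      "@type": "WebSite",
      "name": "Example Store",
      "url": "https://www.example.com"
    }
  ]
}
```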

7. Use a free online JSON editor

A tool that a colleague recommended to me was JSON Editor Online. You don’t have to sweat it if you don’t have a source code editor downloaded. Just use this tool to make sure your JSON-LD code is valid and correctly indented with tabs, not spaces of course ;) The tool is also quick to tell you when you have errors in your code so that you can fix the error in the tool sooner, rather than later when you validate the structured data using a tool like Google’s Structured Data Testing Tool. Another great thing about using an editor tool such as this one is that it is free of weird formatting issues that can occur in word processors such as Google Docs or Microsoft Word.

Speaking from my own experiences (before I made the switch to use a JSON editor): when creating structured data in document files and then pasting the code to test in Google’s Structured Data Testing tool, the formatting remained intact. As such, I kept getting this error message, “Missing '}' or object member name.” Looking through the JSON-LD, I was unable to locate where the missing “}” would go or any other missing character for that matter. It turns out that copying and pasting code from a doc file with the formatting intact caused my quotation marks to look funny in the testing tool, like italicized or in a different font. Rather than wasting more time by fixing the weird quotation marks, I switched to using a JSON editor when creating structured data. No more wonky formatting issues!

Did this post help in solving any of your problems with structured data on e-commerce sites? What other problems have you encountered? Share your thoughts by commenting below or tweeting @_tammyyu.

Read Full Article

In November 2018, Google released an updated PageSpeed Insights API, which provides performance reports from both lab and field data on a page. Using the PageSpeed Insights API, we can test multiple URLs at once to get a better idea of our site’s overall performance and identify areas of our site that can be optimized for speed. To do this, I wrote a script that uses the new PageSpeed Insights API to retrieve performance data and print the results in a Google Sheet—the easiest and quickest way to get an overview of your site’s speed using a sample of pages.

Before you follow optimization recommendations from PageSpeed Insights, it’s important to note that the tool often recommends actions that won’t improve user experience or provide a worthwhile performance increase for your specific site. For example, PageSpeed Insights may advise caching external files (e.g. requests to Facebook.com) or serving images in  WebP format, a file type that is not supported across all browsers. It’s important to consider the nuances of your site and your end users when looking at PageSpeed’s recommendations and determining if you should put them into action—look for the biggest payoffs that won’t negatively impact your site’s UX.
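If you're curious what the script is doing under the hood, the PageSpeed Insights v5 API is a simple GET request. This hedged Python sketch builds the same request URL; the example page URL is just a placeholder, and the commented-out fetch shows where the performance score lives in the response:

```python
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def build_psi_request(page_url, strategy="mobile", api_key=None):
    """Build a PageSpeed Insights v5 request URL for one page."""
    params = {"url": page_url, "strategy": strategy}
    if api_key:  # optional; an API key raises the quota for batch runs
        params["key"] = api_key
    return PSI_ENDPOINT + "?" + urlencode(params)

# Fetching the overall performance score (a value between 0 and 1) would then be:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(build_psi_request("https://example.com/")))
#   score = data["lighthouseResult"]["categories"]["performance"]["score"]
```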

Get started by making a copy of the worksheet.
Access is free.

How to use the script

The script is configured to test for mobile performance.

  1. First, you’ll want to create a copy of the sheet. Open up the sheet, and click “File” and then “Make a Copy”

  2. Enter a list of URLs you want to test in Column A on the “InputData” tab

  3. Click the “Page Speed” menu item, and then click “Get Page Speed” from the drop-down to execute the script for the list of URLs in Column A

    • The first time you run the script, you will be prompted by Google to authorize the request:

  4. When the script finishes executing (which could take a number of minutes depending on how many URLs you entered), the results will be printed on the tab “PageSpeedResults”

    • If a cell is left blank, that means that item is already optimized on that page.

Response Data

There are 17 columns on the PageSpeedResults tab. The first column contains the URL tested, and the following 16 columns contain the results of the performance tests. The “Performance Score” is an overall rating of your site’s speed, with scores near 0.1 indicating a slow site and scores near 0.9 a fast site. The last column contains a link to the full PageSpeed Lighthouse report, where you can view all the result data.

What are your favorite metrics to look at when testing page speed? Leave us a comment below or tweet us @distilled.

Read Full Article

Google Tag Manager has come a long way since its release in 2012. Throughout the years, the interface terminology has changed, and additional features have been added that make it much easier to use. Tim Allen’s previous post on “Getting to Grips with Google Tag Manager” gives a great introduction for the 2014 features at a time when GTM was more difficult for beginners. But now many features have been condensed and simplified, such as “Tags, Rules and Macros” which is now “Tags, Triggers, and Variables.”

So how do you use Google Tag Manager in 2019? I’ll take you through how to create a tag that will help you track user behavior on your site and go through some of the newest features GTM has now built in for user friendliness and accessibility.

Feel free to skip around the article to start learning about GTM!

  1. How does GTM work?
  2. Setting Up GTM
  3. Tags, Triggers, and Variables
  4. Creating a Tag
  5. Let’s Talk More on Variables
  6. Testing with GTM Preview and Debugging Your Tag
  7. Wrap Up

But before we dive into the details...

What is a Tag Manager?

It’s useful to think of a Tag Management System (TMS) as similar in operation to a Content Management System (CMS). Both can make changes to the entire site via one interface portal, without needing the help of a developer to change the code for every tag.  A developer is only needed for the initial installation of the GTM container code snippet.

The purpose of a tag management system is to manage the implementation of analytics platforms in tracking user interactions via a user-friendly interface.

Many popular analytics platforms have released their own type of tag manager, such as Launch by Adobe Analytics. These various tag managers, much like their analytics platform counterparts, use different terminology but are very similar in functionality. Most TMS solutions are compatible with popular platforms and applications, built to integrate smoothly without additional programming (at least that’s what they advertise).

GTM is compatible with a number of analytics platforms. To view a full list of GTM supported platforms, click here.

How Does Google Tag Manager Work?

GTM inserts JavaScript and HTML tags, created from the user-friendly interface portal, into the GTM container that is hard-coded onto every page of your website.

The GTM container is a piece of JavaScript (and non-Javascript) code that enables the GTM to fire tags.

Setting up GTM

Creating a GTM account for your website is fairly quick.

Follow the steps on the Google Tag Manager site by inputting your information.

After creating your account, you will be brought to the main GTM interface. At the top of the workspace, you will see the unique ID number given to your GTM container.

After clicking into the container ID, you will see 2 code snippets:

  1. The first code snippet uses JavaScript to fire tags, and GTM instructs you to paste it into the <head> of every page of your website.
  2. The second code snippet is an HTML iframe that is used when JavaScript is not available, and should be placed after the opening <body> tag.

This way, if a user has disabled JavaScript, the tag will still fire from the second code snippet.
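For reference, the standard container code looks roughly like this (Google may revise it over time), with GTM-XXXXXXX standing in for your own container ID:

```html
<!-- Google Tag Manager: goes in the <head> of every page -->
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-XXXXXXX');</script>
<!-- End Google Tag Manager -->

<!-- Google Tag Manager (noscript): goes just after the opening <body> tag -->
<noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-XXXXXXX"
height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
<!-- End Google Tag Manager (noscript) -->
```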

You can find more details on setting up your GTM container from the Google Support Site.

Tags, Triggers, and Variables

When you first look into the workspace page of GTM, you’ll see sections on the left hand side labeled Tags, Triggers, and Variables.

These 3 are the building blocks of GTM.  I’ve outlined their definitions below so you can get a better understanding of what each one entails.

  • Tags - Tags are tracking codes and code fragments that tell GTM what action to take on that page.
    • Example: Sending a pageview hit to Google Analytics.
  • Triggers - Triggers specify the conditions under which a Tag should fire.
    • Example: A trigger with a condition to only fire a Tag when a user views URLs containing the path /blog/.
  • Variables - Variables are values used in triggers and tags to filter when a specific tag should fire. GTM provides built-in variables and allows you to create custom user-defined variables.
    • Example: A ‘click’ class variable holds a value (such as a word string) assigned to buttons on the website.

We will go more in depth on how to use each of these in the next sections.
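To show how these building blocks connect, here is a hedged sketch of a page pushing a custom event into the data layer - the event and variable names are hypothetical, not GTM built-ins:

```html
<script>
  window.dataLayer = window.dataLayer || [];
  // A Custom Event trigger set to match "newsletter_signup" would fire on this
  // push, and "formLocation" could be read in GTM as a Data Layer variable.
  window.dataLayer.push({
    event: "newsletter_signup",
    formLocation: "footer"
  });
</script>
```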

How to Create a Tag

I’ll take you through a simple example of creating a tag for a Pageview across the site.

But first, I should preface the creation of this tag by explaining that you should not create a Pageview tag if you have already installed the Google Analytics tracking snippet on your site. Creating a Pageview tag in GTM in addition to the GA snippet will duplicate pageview hits every time a user visits a page, skewing your data. To clarify: you can have a GA account that GTM sends data to without having the GA snippet installed on your site.

For the purpose of understanding how GTM works, using a universal concept such as a pageview will help to illustrate the use of tags, triggers, and variables.

To start off, we will navigate to the left hand side of the main interface and click on the Tags section. Then, click New (Tag).

Then we can name the tag and select the tag type as Google Analytics, which is where we will be sending the data from the tag. You can also see below the other tag type options if you are sending data to a different platform.

Next we will configure the tag’s settings. Ensure that the default track type of Page View is selected.

Inputting your Google Analytics Universal tracking ID

This part is crucial to make sure the data gets sent to your GA, so be sure to input the correct info!

There are two ways to do this:

  1. Get the Google Analytics tracking ID by going into Admin > Property Settings > Tracking ID. Click ‘Enable overriding settings in this tag’ and input the Tracking ID.
  2. Or you can create a custom constant variable that will always contain your GA tracking ID, so you never have to remember it.

This second method leads us further into the concept of variables.

Let’s Talk More on Variables

Assuming you’ve never used your GTM account, setting up the variables in GTM will be important for creating your tag.

When you view the ‘Variables’ window of GTM, you’ll see 2 options: Built-In Variables and User-Defined Variables.

Built-In Variables are variables that GTM can define for you by detecting elements in the code. They include some of the more common variable types, such as clicks or pages. Sometimes a website will not have the minimum criteria within the code for GTM to detect the right elements and use its built-in variables; in this case, they must be custom-made as User-Defined Variables instead.

I’d recommend adding all of the Click, Form, and History variables to start off.  Click Configure and check the boxes on the left hand side to include them.

View all of Google’s built-in variables, with their definitions, on the Google Support website. Another great resource is Simo Ahava’s variable guide, where he goes in depth on each built-in variable and ways to utilize them.

User-Defined Variables hold the value that you define for them, whether it's numerical, a selection of URLs, or a name string found in an element.

For instance, a GA constant variable used to hold the GA ID associated with your analytics account can be created.  This is very useful when you are creating a tag, so that you won’t have to keep going back to your GA account to input your ID.

You can create a constant GA ID variable by selecting User-Defined Variables > New > Variable Configuration > Constant > Value (input your GA ID) > Save.

Going back to our tag example, you can now choose to input a constant variable.

Make sure that you uncheck ‘Enable overriding settings in this tag’ and use the Google Analytics Settings dropdown to select the variable ID.

Now we can create the trigger that will fire our Pageview tag!

Underneath the tag configuration, click into the Triggering field. A menu prompting you to ‘Choose a trigger’ will appear. Click on the + sign in the upper right-hand corner.

Name the trigger and choose Page View as the trigger type.

Make sure that All Page Views is selected, so that our tag will fire on every page of the site and click Save.

Now that we have both the tag and trigger configuration, click Save. You’ve just created your first tag!

Testing with GTM Preview and Debugging Your Tag

So you’ve created your tag, but how do you know it’s working?

First click onto the Preview button in the top right hand corner of the workspace.

Next, open your site in a new tab. You will now see at the bottom, much like chrome dev tools, a box appear.

Upon closer inspection, the left-hand side shows a summary of the events that first loaded onto the page in sequential order (1. Message, 2. Message, 3. Pageview, 4. DOM Ready, etc.), while the top is labeled Tags, Variables, and Data Layer.

By default, you will be viewing the Tags window, showing you all of the tags on the page, whether they have fired or not.

When you click anything on the page, the Preview box will update with any tags fired, as well as the variables in connection to the elements where the interaction took place.

For instance, when the ‘sign up’ button on the homepage is clicked, we see in the left hand summary that the event gtm.formSubmit loaded.  By clicking into the variables section, we are now able to see the variables and their values that are associated with the ‘sign up’ button.

So what exactly are the variables associated with this button that GTM is showing? They are the variables located in the HTML elements that GTM detects within the code of the signup form.

The same can be seen in chrome dev tools by inspecting the elements on the page. The difference is that GTM makes this easy for you by detecting them, summarizing the HTML variables and their values, and putting it into a user friendly format.

In Chrome Dev Tools:

In GTM Preview:

When you’ve added a tag to GTM, it isn’t live on the site yet. This is where it’s important to test the tag to ensure it’s both firing and sending the data to GA.

We can see by just loading the page that the new tag is firing!

If your tag isn’t firing, a useful way to figure out why is by clicking onto the tag in the summary and viewing the firing triggers. If any part of the trigger doesn’t apply, there will be a red X next to the Filter.

Now we can publish the new tag!

First, click the Submit button in the upper right-hand corner of the main interface.

Next, name the container version and add a description to let others know what you changed - typically the tag’s name and what it does.

After publishing your tag, keep watching the data in GA over time to make sure that the trigger conditions are capturing only the user interactions you want.

Extension for GTM Tag Testing

Probably the most useful chrome browser extension for GTM is the GTM Debugger.

Once downloaded, hit F12 and then F5 to view the event data and Google Analytics hits.

Much like preview, testing the tag works here as well, with live event updates.

However, this extension only displays information for tags that are live in the GTM container.

Wrap Up

As you have read, there is a lot to consider when using the power of GTM on your or your client’s website. GTM can be used to create as simple or as complex a tag as needed. However, it’s best to try to keep things as simple and as scalable as possible.

Whether you’re agency-side or in-house, it’s best to keep an inventory of tags. This includes creating descriptive, intuitive names for tags, triggers, and variables. It also allows others to understand what kind of tags the container has live.

The Versions page shows you what container version is live on the site and allows you to click into the different versions to see what tags it contains.

Hope you found this article useful and enjoy creating your tags!

Read Full Article

There is plenty of information and data out there about Google and its dominance as a search engine. But what if your business is in China? What if you want to optimise for Russia, or for the 21% of the USA that doesn't use Google? SEO practices and search engine dominance inevitably differ across the world in varying degrees, whether it is Google, Yandex or Baidu in pole position.

This post looks at the search engines that aren't Google: I'll look into what makes each search engine unique (if they are unique), why they function differently, and what we can take from a more global perspective on SEO.

Source: Statista, 2018

Search in Europe (Google)

The key issue that Europe faces regarding SEO is language. The European Union alone has 23 official languages, and that is not even considering the vast range of dialects, sub-regions and so on. 

Along with these differences in language comes cultural differences in behaviour, which will inevitably affect online behaviour. The Spanish love a siesta, the French enjoy late evenings and the Germans are extremely time conscious - or so the stereotypes suggest. These may be stereotypes; however, they are a simple illustration of how cultural differences inevitably exist.

Therefore, we cannot group the entirety of Europe together under the assumption that everyone will have a similar attitude towards online behaviour and SEO. This article by Search Engine Land takes a more in-depth look at why cultural differences in Europe need to be a priority when planning your strategy.

Google is clearly dominant in Europe with Google’s search share ranging from 90.67% in Spain to 81.80% in the UK. However, Europe’s approach to SEO very much differs (Statista, 2018). 

Here in the UK, businesses small and large are increasingly understanding the importance of SEO and technical optimisation. As this article by Semrush suggests, SEO is still in the expansion and adoption phase for the majority of Europe, where its value is not yet realised. Given the complexity of languages and cultures, international SEO thought leaders such as Aleyda Solis have addressed ways in which we can resolve this issue.

Hreflang will continue to be essential to signal language and cultural differences, but this alone will not solve the complex issue of multilingual sites and content. Read more about hreflang implementation in Aleyda Solis’ article. Yoast also provides a great article about the complexities of multilingual SEO and how to tackle this issue.
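As a reminder of the basic mechanics, hreflang annotations in the <head> tell Google which language or region each alternate URL serves; the example.com URLs below are placeholders:

```html
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
<link rel="alternate" hreflang="fr-fr" href="https://www.example.com/fr-fr/" />
<link rel="alternate" hreflang="de" href="https://www.example.com/de/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```

Each page must include a self-referencing annotation, and every alternate must link back reciprocally, or Google ignores the tags.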

Read more about handling different languages and cultures in Europe.

Search in Japan (Google/Yahoo! Japan)

Yahoo! Japan is more popular than Google - but not as a search engine. Yahoo! Japan is often used for its apps, which include Yahoo Answers, Yahoo Transit, Yahoo Weather, etc. When we look at search engine use, Google still dominates with a 69.25% market share. According to StatCounter, Yahoo! Japan holds approximately 22% of the search engine market.

So, should this influence SEO? As of 2010, Yahoo! Japan adopted Google’s backend algorithm; therefore, when it comes to technical optimization, treat them the same. Instead, consider your target audience. Is a different audience using Google compared to Yahoo! Japan?

On the note of considering your audience, it has been suggested that Japanese users prefer content-heavy sites. Check out RWS’ post for more information about preferred web design in Japan.

We can use the example of Starbucks to illustrate this. Take a look at Starbucks.co.jp page:

Compared to Starbucks.co.uk:

Ultimately, this difference in preference comes down to consumer web psychology. The layout and formatting of Starbucks.co.jp would provide a poor user experience for customers in the UK. However, in Japan, the sleek and simple look favoured in the West comes across as unreliable - the more there is on a page, the more trustworthy it is. Evidently, this design works in Japan, where Starbucks.co.jp receives 1.2 million organic sessions a month (according to Ahrefs, October 2018).

Web psychologist Nathalie Nahai provides some great insights into web psychology and its importance; check out her Whiteboard Friday video to learn more about how user behaviour depends on the message your web design conveys.

Another point worth making is that Japan has four different writing systems - yes, four. Essentially, there are four ways a keyword can be written. If you are attempting to expand into Japan, be sure to work with native speakers, as Google Translate will only take you so far.

Search in China (Baidu)

China, with 772.98 million internet users, blocked Google as a search engine in 2010. China’s most popular search engine is Baidu, which adheres to the country’s strict online censorship.

Unlike Yahoo! Japan and Google, Baidu and Google are two very separate entities.

So what are the key differences between Baidu and Google?

  • Different weighting for metadata, canonicals, H1s and page titles
  • Meta descriptions and meta keywords are considered ranking factors
  • Paid ads are incorporated into search results, unlike Google’s clear(ish) divide
  • Baidu does not understand hreflang
  • Baidu struggles with Flash and JavaScript
  • Language optimisation: Baidu favors Simplified Chinese, so simplified characters take priority over complex Traditional characters
  • Strict regulations and censorship: hosting your site in China will help you get past The Great Firewall of China
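Because Baidu ignores hreflang while Google relies on it, sites targeting both engines often emit hreflang annotations for Google and treat a China-hosted property as standalone. A minimal sketch of generating those annotations - the helper name, domains and paths are illustrative assumptions, not from the post:

```python
# Sketch: build hreflang <link> tags for Google. Baidu won't read these,
# so a dedicated Chinese site (e.g. on a .cn domain) is targeted separately.

def hreflang_tags(page_path, locales):
    """Build hreflang alternate links for the locales Google should see."""
    tags = []
    for lang, domain in locales.items():
        tags.append(
            f'<link rel="alternate" hreflang="{lang}" '
            f'href="https://{domain}{page_path}" />'
        )
    return tags

# Hypothetical locale-to-domain map, for illustration only.
locales = {
    "en-gb": "www.example.co.uk",
    "ja-jp": "www.example.co.jp",
    "zh-cn": "www.example.cn",  # Baidu ignores hreflang; the .cn site stands alone
}

for tag in hreflang_tags("/menu/", locales):
    print(tag)
```

The point of the sketch is the asymmetry: the `zh-cn` entry helps Google serve the right regional page, but for Baidu you rely on the Chinese site itself (hosting, language, ICP licensing) rather than on markup.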

Check out Builtvisible’s post about SEO for Baidu.

...Hang on, Google wants to enter China, again?!...

Yes! Google, well aware of the huge potential China offers, is planning on launching a censored version to adhere to China’s regulations. This will present SEO challenges given The Great Firewall of China. Moreover, this proposed version of Google has raised ethical concerns. Google will essentially blacklist sites and terms that relate to human rights, peaceful protests, religion, differing political opinions, free speech, sex, news and academic studies. 

This censorship will not only be restricted to general search results but will be pushed out to Google Images, spell check and suggested search. On one hand, Google will be able to tap into and benefit from a colossal audience. On the other, by suppressing the ability to search freely, Google is going against its values and mission: “our mission is to organise the world’s information and make it universally accessible and useful” (Google, 2018).

Read more about Google’s expansion plans in China.

Search in Australia (Google)

The US and the UK are Google’s largest English speaking markets, setting the status quo for SEO practices. How does this then affect countries that use English but do not have a strong preference for British English or American English? Let’s look at Australia.

Australians are keen users of Google, so technical SEO does not differ. However, differences in language use do have an impact on the choice of keywords for both organic and paid search.

Source: Hitwise

Generally, Australians prefer British spellings over American spellings, but the gap in preference is too narrow to be conclusive. Take a look at the share of search for key terms, a classic British vs American battle!

If we look further into demographics, those who prefer American spelling tend to fall into the 18-24 age category. This could be down to the influence of popular culture and social media. This variation in spelling preference may therefore pose an issue, particularly for brands that rely on intent. For example, a paid ad featuring the word “pants” could very well target two different audiences, and potentially two different intents too.

This applies not only to Australia but also to other English-speaking countries. Pages, keywords and ads should therefore take intent and demographics into consideration when choosing between British and American spelling subtleties (although, they should choose British because it’s better).
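One practical way to handle the British/American split is to expand a seed keyword list with both spelling variants before deciding what to rank or bid on. A toy sketch - the variant table is a tiny illustrative sample, not an exhaustive mapping:

```python
# Sketch: expand keywords with both British and American spelling variants.

# Tiny illustrative variant map; a real list would be far longer.
US_TO_UK = {
    "color": "colour",
    "favorite": "favourite",
    "organization": "organisation",
}
UK_TO_US = {uk: us for us, uk in US_TO_UK.items()}

def expand_spellings(keywords):
    """Return each keyword plus its transatlantic spelling variants."""
    expanded = []
    for kw in keywords:
        variants = {kw}
        for word in kw.split():
            if word in US_TO_UK:
                variants.add(kw.replace(word, US_TO_UK[word]))
            if word in UK_TO_US:
                variants.add(kw.replace(word, UK_TO_US[word]))
        expanded.extend(sorted(variants))
    return expanded

print(expand_spellings(["hair color ideas"]))
# Expect both "hair color ideas" and "hair colour ideas" in the output.
```

With both variants in the list, you can then check search volume and demographics for each spelling separately rather than assuming one audience.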

Read more about keyword research:

Search in Russia (Yandex)

In Russia, Google, with a 38.98% share of desktop traffic, falls behind Yandex, which has approximately 67 million active monthly users. As with Yahoo! Japan and Baidu, the fundamentals of technical SEO are very similar: Yandex values good-quality content and penalises the overuse of keywords. However, Yandex is not without its variations.

In comparison to Google, Yandex:

  • takes longer to index sites
  • allows for longer page titles
  • looks for and favours meta keywords; 4-5 are desirable
  • uses algorithms that are not to the same standard as Google’s, making Yandex easier to optimize for
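The meta-keywords point can be made concrete: Google ignores the tag entirely, but per the advice above a Yandex-facing template might still emit one, capped at the suggested 4-5 terms. A hedged sketch - the cap follows the bullet above, while the function name and example keywords are mine:

```python
# Sketch: build a <meta name="keywords"> tag capped at five terms,
# since (per the post) Yandex still reads this tag, unlike Google.

MAX_YANDEX_KEYWORDS = 5

def meta_keywords_tag(keywords, limit=MAX_YANDEX_KEYWORDS):
    """Emit a meta keywords tag using at most `limit` keywords."""
    trimmed = keywords[:limit]
    return f'<meta name="keywords" content="{", ".join(trimmed)}">'

tag = meta_keywords_tag(["moscow flights", "cheap flights", "flight deals",
                         "airline tickets", "last minute flights", "extra term"])
print(tag)  # the sixth term is dropped by the cap
```

The cap matters because Yandex, like Google, penalises keyword stuffing; the tag is an extra signal, not a place to dump every target term.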

Although Google is lagging behind in desktop search, it surpasses Yandex on mobile. Before we all start praising the powers of Google, note that this gap is slowly closing. Yandex filed a complaint against Google for anti-competitive behaviour because Google came pre-installed on Android phones. A “choice window” was then introduced, allowing users to decide which search engine to set as default - this has resulted in Google losing mobile share and Yandex gaining it.

So it seems that Yandex has a strong hold in Russia, and its ability to understand Slavic and Turkic languages, as well as to recognise and read both Cyrillic and Latin characters, means it could expand into Ukraine, Belarus, Kazakhstan, Turkey and beyond.

Check out these articles for some in-depth Yandex SEO:


It can be very easy to assume that we all use the internet in the same way - that we all search for advice, information, ideas and so on. It is important, however, to note the variations in both search behaviour and search engine dominance. SEO practices and search engine dominance will inevitably differ across the world to varying degrees, but whether it is Google, Yandex or Baidu, the user is the driving force. Our SEO practices rely on an understanding of the user’s intent, location, knowledge, behaviours and more. So do ensure you are optimizing for the dominant search engine, but more importantly, ask yourself - are you optimising for the user?
