When we kick off with clients who have been around for more than a few years and already have some decent content on their website, we can usually find opportunities to identify old content that is under-performing, improve the content, and see fast improvements on traffic.

Improving outdated or poorly written content probably made up 15% of the work for this case study and produced 30-40% of the first 6 months of results.

We have learned the hard way that when you’re kicking off a 6-month project with a high monthly budget, nothing makes clients more squirmy than getting 3-5 months in without some of these noticeable wins. And when most of our work involves publishing new content, it often takes 4-12 months for a content asset to (A) start ranking on page 1-2, and (B) move up enough on page 1 to start actually seeing some clicks and not just Search Console impressions.

Naturally, you’ll often want to look for new keyword opportunities that a website hasn’t covered yet – that’s the job of a separate Gap Analysis, which we’ll cover in a future post.

A Quick Wins analysis is completely focused on improvements we can make to the existing content on the site to get better performance out of what we already have.

“Hey Man, This Is Basic Keyword Research & SEO Stuff”

True!

Doing a quick wins review at the beginning of an SEO or marketing engagement isn’t especially innovative – lots of agencies do this and lots of people have documented similar processes to the one I describe below for quick wins. (Here are 3 solid examples from BuiltVisible, From The Future, and Kevin Gibbons). And most SEOs have seen some version of this analysis over the years with SEMRush data being the classic source.

So the quick wins analysis alone isn’t what I’m focused on here. This tutorial focuses solely on the content marketing aspects of the quick wins review (so I’m not going to cover things like easy technical SEO fixes, or fixing broken links to your site).

More importantly – it also focuses more heavily on the outcomes of the quick wins review: how we approach improving content to actually take advantage of the opportunity we just identified. We’re going to go beyond identifying keyword opportunities and lay out a methodology for how to effect change on the URLs where we spot potential.

So – with that said, here’s the process we’ve used with success multiple times:

  1. Build a Quick Wins report using link metrics, rankings, and conversion values.
  2. Prioritize opportunity based upon volume or value.
  3. Review content and identify improvements needed.
  4. Improvements: Improve the content itself (Easier).
  5. Improvements: Build new internal links (Easier).
  6. Improvements: Build new external links (Harder).

Let’s jump in:

Step 1: Build The Quick Wins Report:
1) Pull the full list of Organic Keywords the site ranks for from Ahrefs.

If the list is really large, filter to keywords where position is <30. If that’s still too large, filter by other factors you care about until the list is small enough to export in 1-5 CSV chunks.
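If you’d rather script this step, here’s a minimal pandas sketch of the same filter. The file pattern is a placeholder for your own Ahrefs CSV export chunks, and the column header is assumed to match Ahrefs’ default:

import glob
import pandas as pd

# Combine however many CSV chunks the export produced into one table.
chunks = [pd.read_csv(path) for path in glob.glob("organic-keywords-*.csv")]
keywords = pd.concat(chunks, ignore_index=True)

# Keep striking-distance keywords only (position < 30).
keywords = keywords[keywords["Position"] < 30]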

2) Pull the full list of Best Pages By Links.

We want to know which pages have links, but we only need the Referring Domain count – we don’t care about seeing individual links for this project.

Since Ahrefs now reports internal link data in Site Explorer, pull that, too. We’ll save ourselves a site crawl and use that data for internal linking improvements.

3) Pull Page Value from Google Analytics.

This works a heck of a lot better when the company has a GA account with accurate data (meaning somebody set up Goals with tracked monetary value, or eCommerce tracking with accurate cart values).

If you don’t have GA access, you can use a rough substitute like keyword CPC, but you won’t get data nearly as well tailored to your *actual* conversion funnel.

These are the two reports you’re looking for.
Step 1B: Merge The Data

Once all 3 of these data sources are pulled, you’ll want to build a spreadsheet with 4 tabs. Leave the first tab blank, add each of the above data sets to the next 3 tabs, and then we’ll start combining the data sources.

Note: I’m in the habit of doing some of these tasks manually so that I can explore the data in Ahrefs and Google Analytics a bit before exporting it. I’ll often do this during the onboarding process as we’re getting to know the client’s data. But, you can make some of these easier using tools like URL Profiler which can also grab link metrics from Ahrefs, GA values if you have site access, and factors like word count and readability. You’ll still need to fetch rankings from Ahrefs Organic Keywords for the site manually.

Our Example Site For This Tutorial

For the rest of the post, I’m going to use Insteading.com as an example site, because it’s a domain we own and it’s easy to share data for this post if we need to. It’s also gone through some recent traffic fluctuations in 2018 because of thin legacy content, so it’s a good candidate for both content upgrades and content pruning, though we won’t focus on pruning in this post.

It’s not as good an example for this post as an eCommerce site would be, since we don’t have any cart transactions to track for page value, but we can use a proxy like the keyword’s CPC to estimate the rough affiliate potential of a keyword. We’ll set page value to, say, 10% of CPC to account for that. If you’re doing this type of analysis for a prospective client, or a client whose page data you don’t have, this is one way to approximate page value, but I would take any CPC-driven opportunities with a large grain of salt.

So, here’s what our tabs look like:

Raw Ahrefs export for Top Pages by Links

Edited Ahrefs export for Top Organic Keywords.

Note: In the screenshot above for Top Organic Keywords, I have added Protocol/Subdomain/Path1/2/3/4 columns at the end to make it easier to analyze opportunities by subfolder section of the site. All you do to get this is paste the URL column at the end of the table, then use Text To Columns under the Data section of the Excel ribbon, and split the column by the forward slash character “/”. If you have parameters on your site, split those out by “?” first, name that column Parameters and move it left of the raw URL, and then perform the split by “/”.
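If you’re working in a script rather than Excel, the same split looks roughly like this, continuing the pandas sketch from earlier (the “URL” header is an assumption about your export):

# Split parameters off on "?" first, then break the URL apart on "/"
# to mirror the Text To Columns step described above.
split_q = keywords["URL"].str.split("?", n=1, expand=True, regex=False)
keywords["Parameters"] = split_q[1] if split_q.shape[1] > 1 else ""
parts = split_q[0].str.split("/", expand=True, regex=False)
names = ["Protocol", "Blank", "Domain", "Path1", "Path2", "Path3", "Path4"]
parts = parts.iloc[:, :len(names)]
parts.columns = names[:parts.shape[1]]
keywords = keywords.join(parts)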

So now we have a couple of ways to combine this data. One way would be to pull the link data into the organic keywords tab. This is fairly easily done with a VLOOKUP like this:

=VLOOKUP(G2,'ahrefs top pages by links'!C:E,3,FALSE)

We have a few hundred thousand rows, so this will take a second. If you’re doing this on a full million-row Excel document, the lookup may take a minute or more. If you start getting into VLOOKUPs that take 10-60 minutes, then Excel isn’t the best tool – you should be using an application like Tableau or Power BI or similar options.
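This join is also the step where a script shines. Here’s a rough pandas equivalent of the VLOOKUP, with the file and column names assumed from the Ahrefs export:

# Join Referring Domains onto every keyword row: the pandas
# equivalent of the VLOOKUP above.
links = pd.read_csv("best-pages-by-links.csv")  # placeholder file name
keywords = keywords.merge(
    links[["URL", "Referring Domains"]],
    on="URL",
    how="left",  # keep keyword rows even if the URL has no link data
)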

Here’s a snapshot after our VLOOKUP is finished, with Referring Domains in column T, pulled in for each URL in column G. Since URLs will show up in multiple rows, ranking for multiple keywords, this is a more brute-force method than creating a Pivot Table that joins the two data sources. But brute force works fine for most smaller sites until you get into the millions of keywords, at which point Excel isn’t your best toolset anyways.

Step 2: Prioritize opportunity based upon volume or value.

Now that you have all of your data in a single worksheet, it’s time to figure out the best way to prioritize the data.

We already have a few key data points that we’ll be using to calculate new metrics:

Metric – Description
Keyword Volume – Ahrefs’ estimated volume, adjusted from AdWords data using clickstream data
Position – Ahrefs’ tracked ranking for our URL
Keyword Difficulty – Ahrefs’ assessment of the average number of links to pages ranking on page 1. We’ll compare this against our Referring Domains figure from the links data we exported.
CPC – AdWords Cost Per Click, delivered by Ahrefs
Traffic – This is the best number we can get out of Ahrefs to understand CTR for the keyword we’re analyzing, so this will come in handy.

We’re going to create a few new columns for this as follows:

Potential Traffic & Traffic Increase

Potential Traffic = [Traffic] * [Position]

Traffic Increase = [Potential Traffic] – [Traffic]

Potential Traffic is a rough metric. We’re trying to take advantage of the fact that Ahrefs has already factored a CTR curve into their Traffic estimate for our current position. We’re also going to assume that every position offers traffic improvements over the position underneath it. It basically models a CTR curve where position 1 gets double the clicks of position 2, triple the traffic of position 3, etc. But I believe it still retains some of the root data from Ahrefs that downplays traffic estimates on lower CTR keywords.

So in this case, if we’re getting 2,000 estimated traffic at position 2, we’re estimating that we’d see 4,000 clicks at position 1.

If we’re getting 2,000 estimated traffic at position 11, we’re going to estimate that we’d be seeing 20,000 clicks at position 1 for that keyword.

These are inherently rough numbers. I’m not trying to get exact figures here – I’m trying to create a relative potential for the keyword in order to compare it against other opportunities we could improve on the site in question. We’re ignoring so many long-tail keywords in an analysis like this that overestimating the potential of a head term by a little bit probably ends up being a wash in the end.
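Translated to code, the two columns are one line each, continuing the pandas sketch with the same caveats about rough numbers:

# Assume Ahrefs' CTR-adjusted Traffic estimate scales roughly linearly
# with position: position 2 doubles at position 1, position 11 grows 11x.
keywords["Potential Traffic"] = keywords["Traffic"] * keywords["Position"]
keywords["Traffic Increase"] = keywords["Potential Traffic"] - keywords["Traffic"]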

If the fact that I’m using rough data here bugs you, there are a few ways you can get better data if you want to take the time to do so.

  1. One would be to pull CTR data from Ahrefs Keyword Explorer. This is easier on small keyword lists, but doable.
  2. Another could be to build in Google Search Console data that provides closer-to-reality-but-still-not-perfect CTR and Click estimates.
  3. A third option would be to send fancy gifts to the Ahrefs team until they start reporting “Clicks” data from Keyword Explorer through the Organic Keywords export in Site Explorer.
  4. Moz’s Keyword Explorer doesn’t report CTR on domain-wide keyword exports, but if it did, it would be a good option here. You’d get fewer keywords to export, but you’d still cover most of your big-win head terms.

If you go grab the “Clicked” data in the 2nd box, you might be able to estimate the number of users that click on *something*. If you can grab the “Clicks” data, then you’ll get an understanding of the Clicks-to-Volume ratio, which you can use to adjust your traffic estimates up or down for a given keyword.

Current Value, Potential Value, & Value Increase

Current Value = [Traffic] * [Page Value]

or, without page value: Current Value = [Traffic] * [CPC]

Potential Value = [Potential Traffic] * [Page Value]

or, without page value: Potential Value = [Potential Traffic] * [CPC]

Value Increase = [Potential Value] – [Current Value]

This is an estimate of the rough value of ranking for a keyword based upon our current Page Value from Google Analytics (preferred), or based upon CPC.

Then we calculate the potential value based upon our Potential Traffic value from a minute ago.

If you don’t have Google Analytics data, CPC is one alternative value you can use here, but I’m way less willing to lean on it for decision making than Page Value.

Note: If you still want to estimate *some* value on pages with $0 page value or on keywords with $0.00 CPC, you can add $0.01 or $0.10 to the value to make sure that you’re not missing large traffic increases labeled as zero value.
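Here’s what those value columns look like in the running pandas sketch, including the penny floor from the note above (the 10%-of-CPC discount is the proxy we chose for this site earlier):

# Use Page Value from GA when we have it; otherwise fall back to the
# discounted-CPC proxy. The $0.01 floor keeps big traffic wins from
# vanishing behind a $0 value.
page_value = keywords.get("Page Value", keywords["CPC"] * 0.10).clip(lower=0.01)
keywords["Current Value"] = keywords["Traffic"] * page_value
keywords["Potential Value"] = keywords["Potential Traffic"] * page_value
keywords["Value Increase"] = keywords["Potential Value"] - keywords["Current Value"]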

Here’s what our 3 new columns look like in our master table:

If you see rows with 0 Traffic Increase or $0 Value Increase, that’s because I’ve left in data for organic keywords that already rank #1.

In the final table above we’re using CPC, so some of these potential value metrics are admittedly silly for the type of publishing/affiliate site we’re running. We’re not going to be able to capture that much value each month.

If we want to make the monthly numbers more realistic, dividing our CPC figures by 10 is a perfectly reasonable way to get a little closer to reality. However, since the value represents “Monthly Value,” the as-is numbers might serve as a reasonable estimate of the lifetime value of the potential traffic increases, which is one reason to leave them alone. Just make sure you know which is which if you show rough numbers like this to a client or stakeholder.

Yowza. That’s a lot of Excel so far, but it’s not that bad. I can do the steps above in 5-10 minutes for most sites; it’s just wordy to explain the process in a blog post.

Let’s move on to the fun stuff! Most of the steps I’ve shown you so far are pretty basic. Now it’s time to start analyzing our potential, which is where the process starts to turn into valuable information.

Let’s build our pivot table. Highlight your entire sheet of data and new metrics and put the pivot table in a new Worksheet:

Make sure you’ve highlighted all of your rows and columns of data, but don’t highlight blank cells to the bottom or right or you’ll get a bunch of “Not Set” values that need to be filtered from your pivot table.

There are a few different ways we can analyze this data. I’m including the Pivot Table Fields on the right side of the screenshot so you can drag and drop and replicate, and then show you what our table looks like:

A snapshot of our pivot table in progress, with the Sort instruction shown in the right-click menu, and Pivot Table settings shown in the Fields section on the right.

Here we have our full set of Site URLs, with secondary data underneath for keywords that each URL ranks for.

From there, we’ve added two columns for Sum of Traffic Increase and Sum of Value Increase. The first effectively estimates total traffic increase potential for the URL (with individual keyword-level increases shown below), and the second does the same thing with our value estimates. We’re also sorting the full table Largest To Smallest by the Value Increase potential. You need to do this on a URL level and again on a keyword level, so it will require sorting twice.

This is the basic core data we need to estimate which pages have the most potential to be increased, but we’re also going to add two columns for Average of Keyword Difficulty and Average of Referring Domains. This will basically tell us if we should expect good results from simply updating the content, or if we’ll need to build some links, too.
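In the pandas version of the sketch, the whole pivot collapses to a groupby (the aggregate names here are my own shorthand):

# One row per URL: summed upside plus averaged difficulty and link
# counts, sorted by the biggest value opportunity first.
pivot = (
    keywords.groupby("URL")
    .agg(
        traffic_increase=("Traffic Increase", "sum"),
        value_increase=("Value Increase", "sum"),
        avg_difficulty=("Keyword Difficulty", "mean"),
        avg_ref_domains=("Referring Domains", "mean"),
    )
    .sort_values("value_increase", ascending=False)
)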

Step 3: Review content and identify improvements needed.

How To Analyze This Data

You don’t need to read too far into this analysis.

You could simply improve the content quality on each of the top 10-20 URLs in the list at this point and you’d see 70% of the results we’re looking for.

And that’s the first set of pages that you should focus on. But, you can do some sub-analysis on each page like the following:

1 – Use the Keywords List for Targeting Improvements

You can basically look at the top secondary keywords for each URL as a list of keywords that might need to be more closely targeted within the content. At a basic level, do your best to use each variation once.

Hmm – have I properly mentioned sub-topics like “kitchen”, “dining room”, “rustic”, “harvest”, “oak”, and “DIY” in the post I’m trying to improve? Do I understand the implications of each sub-topic on the overall post, just in case a topic like “harvest table” has some special meaning to my audience that the other keywords don’t?

2 – Add “Number of Keywords” Data To Find New Spin-Off Content Opportunities

On a really popular page ranking for thousands of keywords, there’s a good opportunity to create some new long-tail content. The Farmhouse Tables screenshot above is a decent example – some of those secondary keywords definitely deserve their own new piece of content that is exact match targeted.

Add a new Field for Count of Positions to effectively see how many keywords each page is ranking for, then sort the new column Largest To Smallest:

In this example analyzing our Corrugated Metal post that ranks for 380 keywords, there are definitely 4-6 good spinoff ideas like “Corrugated Metal Siding Inspiration,” “Corrugated Metal Ceilings,” etc.
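That count is a one-line addition to the groupby sketch from earlier (the “Keyword” header is again assumed from the Ahrefs export):

# How many keywords does each URL rank for? Sort by footprint to
# surface spin-off candidates like the Corrugated Metal example.
pivot["keyword_count"] = keywords.groupby("URL")["Keyword"].count()
spinoff_candidates = pivot.sort_values("keyword_count", ascending=False)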

In the end, however, the biggest thing you should do is sort this sheet by the pages with the largest traffic increase or value increase potential, and then we’re going to work through our three-part framework for improving rankings on the pages we’ve identified:

How To Act On Your Quick Wins Audit
  1. Improve Your Content – This is the easiest action to take. We’ll go through some methods in the next section.
  2. Building Internal Links – This is the next easiest way to drive more link equity to your newly rebuilt piece of content.
  3. Building External Links – If content improvements and internal link building aren’t enough to move the needle, our last set of opportunities is focused on building more referring domains that link to this particular page.

If you’ve kept a close eye on new content in the SEO community over the past year or so, one of the things I think you’ll notice is that search intent is on the minds of some really smart folks in our industry:

(There are definitely others but these are some of my favorites).

Why is search intent suddenly in the SEO zeitgeist?

I believe it’s because Google’s search results have shifted in the past few years. Google has gotten significantly better at producing search results that deliver what the user is looking for without relying solely on classical SEO factors.

In the process, I think we’ve seen search intent become a more dominant factor on multiple types of search results, often outweighing classic SEO ranking factors like links, title tags, and other SEO basics. Historical domain-wide link factors (domain authority, basically) no longer carry the importance that we saw in 2010-2015.

I’m burying the lede a little bit here, but I’m really excited to share that in the spring of 2019, Content Harmony will be releasing a new SaaS toolset that helps SEOs & content marketers produce better content.

One of the great features in that toolset is a new scoring system we’ve developed for classifying search intent.

In this post, I’m going to lay out why we felt this was a key part of the puzzle in producing better content. I’m not satisfied with current methods of classifying search intent, so I’ll walk you through the new classification and intent scoring process that we’ve built and show you some examples.

How We Talk About Search Intent Hasn’t Changed Much Since 2002

There are two common ways of labeling search intent that I’ve seen over the years.

The first classification that most of us have encountered is Navigational, Informational, or Transactional.

This methodology dates back to a 2002 peer-reviewed paper from Andrei Broder at AltaVista (thanks to Tom Anthony for this post with the AltaVista paper link and some other thoughts on this system).

Broder’s paper defines each category as follows:

  • Navigational: The immediate intent is to reach a particular site.
  • Informational: The intent is to acquire some information assumed to be present on one or more web pages.
  • Transactional: The intent is to perform some web-mediated activity.

The paper goes on to explain an interesting methodology they used to survey users to understand the intent behind their search.

Fast forward a few years: in the early 2010s, Google started referring to its own variation on this, talking about Know, Go, Do, & Buy “micro-moments.” These are a bit more user-friendly, but they basically cover the same categories that we see with Navigational, Informational, and Transactional.

I think these traditional classification systems are useful to help beginners understand how different searches are intended to yield different results, but they’re not very useful to SEOs and content creators on a day-to-day basis.

So where do these systems fall apart?

1) [Navigational / Informational / Transactional] Is Too Broad

As SEOs and content creators, we can produce much better content if we understand what a typical user is looking for. Unfortunately, the systems that exist now are good for explaining search intent in hypothetical terms, but their usefulness diminishes when you try to apply them to keyword research and show content creators and writers how to factor intent into their content.

If you’re trying to understand the difference between a format-driven navigational search to reach YouTube, like [snowboard videos], and a branded navigational search, like [seattle metro routes], seeing them both labeled as Navigational isn’t super helpful. And grouping keywords together in this system is too generic to be very useful during the keyword research and mapping process.

2) Navigational / Informational / Transactional Don’t Account For Overlapping Intent

Additionally – I can think of many searches that fall into multiple categories. If somebody searches “amazon laptop deals”, would you label that Navigational (trying to reach Amazon) or Transactional (trying to buy a laptop)? So trying to categorize intent as a distinct category rather than a bunch of overlapping intents is problematic as well.

3) Search Intent Labels Are Often Guessed Manually Based Upon The Query Itself, Not The Actual Search Results

Most of the methods people use during the keyword research process involve looking for modifiers that we expect to indicate a specific intent. An SEO strategist reviews a list of hundreds or thousands of keywords and fills in a column with Transactional, Informational, or Navigational.

For example, if users add words like “sale“, “cheap“, “deals“, or “for sale” to a query, we mark them as transactional and move on. If we encounter a query where it’s not clear what the intent is from glancing at the keyword, we mark it as Split Intent or with our best guess and then we keep moving down the spreadsheet. Who has the time to actually look at all of these SERPs and see what types of results are actually showing up, right?

(Interesting sidenote on manually assigning intent: that 2002 paper explicitly mentions that “…we need to clarify that there is no assumption here that this intent can be inferred with any certitude from the query.”)

Another approach is to look for a city name or modifier like “near me” to label a query as local – even though many of our non-modified search terms might be showing local packs at the top of the results.

No shame is intended if this sounds familiar – I’ve done it, too – but it’s a process that leads to mistakes and incorrect assumptions.

The experienced SEOs in the room should be able to think of many times when a search yielded a number of unexpected result types as Google seemed to shift the implicit intent they were trying to serve.

It’s also not hard to find shifts in intent caused by query changes that don’t look significant when you’re looking at them in an Excel sheet. Take a look at [texas electric] vs [texas electricity]:

[texas electric] results in a local & branded SERP, since Texas Electric Cooperative is the name of a specific entity.
[texas electricity] results in a more unbranded SERP with PowerToChoose.org from the Public Utility Commission of Texas alongside resellers and commercial sites like comparepower.com and saveonenergy.com.

The problem underlying these examples is that we’re trying to do this classification manually, rather than introducing tools into the process that can do it more reliably.

4) SEOs Don’t Need To Understand Intent The Same Way Search Engines Understand It

We (the SEO community) are not search engines and we are not trying to decide what results a user wants to see, so it’s OK for us to look at intent differently than a classical information retrieval model.

The goal we (Content Harmony) are trying to reach is not to understand the true intent(s) of all users. The goal of our tool is to understand the type of content that Google is looking to serve to users based upon what Google knows about the user’s intent, so that we can help SEOs & content creators make the best content for that keyword that they can. I’ve been really careful throughout this post to discuss “search intent”, not “user intent”, for basically that reason.

As SEOs, we mainly need to understand whether our content (and the format it’s in) fits the format that Google is looking to deliver.

How Content Harmony is Classifying Search Intent

OK, so, money where my mouth is – how exactly do I propose we measure search intent more usefully, and more reliably?

I believe that it’s more useful to classify search intent in a way that more closely aligns with SERP features. Here are the Intent Types we’ve begun using in our software (presented in no particular order):

1) Research Intent

One of the most common result types, this would generally consist of search phrases that generate results like Wikipedia pages, definition boxes, scholarly examples, lots of blog posts or articles, in-depth articles, and other SERP features that suggest users are looking for answers or insights into a topic.

Featured Snippets

Knowledge Carousels

People Also Ask Results

2) Answer Intent

Slightly different from research, there are quite a few searches where users don’t generally care about clicking into a result and researching it – they just want a quick answer. Good examples are definition boxes, answer boxes, calculator boxes, sports scores, and other SERPs that feature a non-featured snippet version of an answer box, as well as a very low click-through rate (CTR) on search results.

Weather results – user wants a clear answer.

Mortgage calculator – a mixed result here but by showing an answer box Google has likely greatly decreased CTR to organic results.

Example of a sports score box, where user likely wants a quick answer to score and whether it’s final or in progress.

3) Transactional Intent

Users looking to buy products or research them are easy to detect, since Google tends to be aggressive with Shopping boxes and other purchase-intent features. Other easy methods of detection would include multiple results from known ecommerce players like Amazon or Walmart, results consisting of /product/ types of URL structure, and multiple results that feature eCommerce category/page schema markup.

Many SEOs forget to consider paid results when looking at search intent, but a prominent shopping box is often a clear indicator of transactional intent.
Multiple prominent eCommerce product/category pages (alongside the shopping results) and product rating rich snippets are good indicators of transactional intent.
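Signals like these are easy to script. As a toy illustration (not our production logic), here’s how you might flag page-1 results with purchase-intent URLs – the domain list is made up for the example:

# Naive transactional signals from a list of ranking URLs: known
# ecommerce domains and /product/-style paths.
ECOMMERCE_DOMAINS = {"amazon.com", "walmart.com", "ebay.com"}  # illustrative

def transactional_signal(result_urls):
    """Return the share of results that look like ecommerce pages."""
    hits = 0
    for url in result_urls:
        domain = url.split("/")[2]  # assumes scheme://host/... URLs
        if domain.startswith("www."):
            domain = domain[4:]
        if domain in ECOMMERCE_DOMAINS or "/product/" in url:
            hits += 1
    return hits / len(result_urls)
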
4) Local Intent

Dominated by a combination of local packs, geographic markers that have recently started showing up in organic results, Maps intent in the top navigation, and if there’s a way of detecting it, localized organic results for the IP being used to search.

Maps locations in knowledge panels are a decent indicator that we’re looking at a location-specific query.
The classic map pack, particularly at the top of search results, is a strong local intent flag for us.

5) Visual Intent

These are easier to track using Image packs and thumbnails; however, prominence is a factor. Many SERPs have Image packs somewhere in the top 100 results by default – placement in the top 10 is a more significant sign, as are 2 rows of image results, or perhaps top 10 rankings from sites like Pinterest.

Image packs are a strong indicator of visual search intent.
Pinterest ranking in the #1 and #3 organic results, with a prominent image pack? Yeah… that’s gonna earn a strong Visual Intent score.

6) Video Intent

Originally I was going to classify Video Intent alongside images as a Visual Intent category, but as I started reviewing more and more search results, it became clear that Video was really its own type of intent. Between video carousels, video thumbnails, and even video featured snippets now becoming commonplace, video is becoming critical to ranking for certain types of queries.

These video carousels are the most common indicator of video intent that we see.
Video thumbnails are another very strong indicator of video intent.

7) Fresh/News Intent

When we see the News tab in the top navigation, Top Stories boxes, recent tweets, and heavy use of dates from the past day/week/month in the organic results, that tells us there is a high volume of content being produced around this topic, and we can probably infer that Google sees higher user interaction with more recent results for this topic.
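To make the scoring idea concrete, here’s a deliberately simplified sketch. The feature names and weights are invented for illustration; the production system tracks many more signals with tuned weights:

# Sum weighted votes for each intent type based on which SERP
# features appear for a query. Weights here are made up.
INTENT_WEIGHTS = {
    "featured_snippet": {"research": 2},
    "people_also_ask": {"research": 1},
    "answer_box": {"answer": 3},
    "shopping_box": {"transactional": 3},
    "local_pack": {"local": 3},
    "image_pack_top10": {"visual": 2},
    "video_carousel": {"video": 2},
    "top_stories": {"fresh": 3},
}

def score_intent(serp_features):
    """Score each intent type from a list of observed SERP features."""
    scores = {}
    for feature in serp_features:
        for intent, weight in INTENT_WEIGHTS.get(feature, {}).items():
            scores[intent] = scores.get(intent, 0) + weight
    return scores

# e.g. score_intent(["local_pack", "people_also_ask"])
# -> {"local": 3, "research": 1}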
