In-depth growth & conversion optimization blog for advanced optimizers. At the head of ConversionXL is Peep Laja, a self-professed conversion optimization junkie. He is also the founder of Markitekt, a data-driven agency that helps ecommerce and SaaS companies grow.
How do CRO professionals run experiments in 2019? We analyzed 28,304 experiments, picked randomly from our Convert.com customers.
This post shares some of our top observations and a few takeaways about:
When CROs choose to stop tests;
Which types of experiments are most popular;
How often personalization is part of the experimentation process;
How many goals CROs set for an experiment;
How costly “learning” from failed experiments can get.
1. One in five CRO experiments is significant, and agencies still get better results.
Only 20% of CRO experiments reach the 95% statistical significance mark. While there might not be anything magical about reaching 95% statistical significance, it’s still an important convention.
You could compare this finding with the one from Econsultancy’s 2018 optimization report in which more than two-thirds of respondents said that they saw a “clear and statistically significant winner” for 30% of their experiments. (Agency respondents, on the other hand, did better, finding clear winners in about 39% of their tests.)
Failing to reach statistical significance may result from two things—hypotheses that don’t pan out or, more troubling, stopping tests early. Almost half (47.2%) of respondents in the CXL 2018 State of Conversion Optimization report confessed to lacking a standard stopping point for A/B tests.
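One way to fix the stopping-point problem is to compute a required sample size before the test launches and commit to it. As a rough sketch (the function name and example numbers are ours, not from any particular tool), the standard normal-approximation formula for a two-sided test at 95% significance and 80% power looks like this:

```python
from math import ceil

def visitors_per_variation(baseline_rate, relative_lift):
    """Rough visitors needed per variation to detect a relative lift,
    at 95% significance and 80% power (z-scores 1.96 and 0.84)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    # combined variance of the two conversion rates
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (1.96 + 0.84) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)

# e.g. a 5% baseline, hoping to detect a 10% relative lift:
print(visitors_per_variation(0.05, 0.10))  # roughly 31,000 per variation
```

Run the calculation before launching, and only call the test once each variation has seen at least that many visitors.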
For those experiments that did achieve statistical significance, only 1 in 7.5 showed a lift of more than 10% in the conversion rate.
In-house teams did slightly worse than average: 1 out of every 7.63 experiments (13.1%) achieved a statistically significant conversion rate lift of at least 10%. Back in 2014, when we published an earlier version of our research on CXL, this figure was slightly higher, about 14%.
Agencies did slightly better: 15.84% of their experiments were significant with a lift of at least 10%. This number was much higher (33%) in our earlier research, although the sample size was significantly smaller (just 700 tests). Still, in both studies, agencies did better than in-house CRO teams. This year, they outperformed in-house teams by 21%.
(We didn’t find any significant difference between agencies and in-house customers when comparing their monthly testing volumes.)
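For reference, clearing the 95% mark means the p-value from comparing control and variant falls below 0.05. A minimal sketch of that check with a two-proportion z-test (illustrative numbers, standard library only):

```python
from math import erf, sqrt

def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # standard normal CDF via erf; two-sided tail probability
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control: 500 of 10,000 visitors convert; variant: 580 of 10,000.
p = ab_test_p_value(500, 10_000, 580, 10_000)
print(p < 0.05)  # True: this test clears the 95% bar
```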
2. A/B tests continue to be the most popular experiment.
A/B testing (using DOM manipulation and split URL) is still the go-to test for most optimizers, with A/B tests totaling 97.5% of all experiments on our platform. The average number of variations per A/B test was 2.45.
This trend isn’t new. A/B tests have always dominated. CXL’s test-type analysis over the years also shows this. Back in 2017, CXL’s report found that 90% of tests were A/B tests. In 2018, this figure increased by another 8%, reinforcing A/B testing as the near-universal experiment type.
Certainly, A/B tests are simpler to run; they also deliver results more quickly and work with smaller traffic volumes. Here’s a complete breakdown by test type:
North American optimizers ran 13.6 A/B experiments a month, while those from Western Europe averaged only 7.7. Using benchmarks from the 2018 CXL report, that puts our customers in the top 30% for testing volume.
There were other cross-Atlantic differences: Western Europe runs more A/B tests with DOM manipulation; the United States and Canada run twice as many split-URL experiments.
3. Optimizers are setting multiple goals.
On average, optimizers set at least four goals (e.g. clicking a certain link, visiting a certain page, submitting a form) for each experiment. This means they set up three secondary goals in addition to the primary conversion rate goal.
Additional “diagnostic” or secondary goals can increase learning from experiments, whether they’re winning or losing efforts. While the primary goal unmistakably declares the “wins,” the secondary metrics shine a light on how an experiment affected the target audience’s behavior. (Optimizely contends that successful experiments often track as many as eight goals to tell the full experiment story.)
We see this as a positive—customers are trying to gain deeper insights into how their changes impact user behavior across their websites.
The 2018 edition of Econsultancy’s optimization report, too, saw many CRO professionals setting multiple goals. In fact, about 90% of in-house respondents and 85% of agency respondents described secondary metrics as either “very important” or “important.” While sales and revenue were primary success metrics, common secondary metrics included things like bounce rate or “Contact Us” form completion rates.
The Econsultancy study also found that high performers (companies that secured an improvement of 6% or more in their primary success metric) were more likely to measure secondary metrics.
4. Personalization is used in less than 1% of experiments.
Personalization isn’t popular yet, despite its potential. Less than 1% of our research sample used personalization as a method for optimization, even though personalization is available at no added cost on all our plans.
Products like Intellimize, which recently closed $8 million in Series A funding, and Dynamic Yield, recently acquired by McDonald’s, are strong indicators of investors’ and corporate America’s big bet on personalization.
But as far as the CRO stack goes, personalization is still a tiny minority. A quick look at data from BuiltWith—across 362,367 websites using A/B testing and personalization tools—reinforces our findings:
We did find that U.S.-based users are using personalization six times more often than those from Western Europe. (Additionally, some 70% of those on our waitlist for an account-based marketing tool are from the United States, despite the product being GDPR-compliant.)
Personalization in the European market and elsewhere may rise as more intelligent A.I. optimization improves auto-segmentation in privacy-friendly ways.
Back in 2017, when Econsultancy surveyed CRO professionals, it found personalization to be the CRO method “least used but most planned.” Some 81% of respondents found implementing website personalization to be “very” or “quite” difficult. As several reports noted, the biggest hurdle in implementing personalization was data sourcing.
Our findings on personalization diverged from a few other reports in the CRO space. Econsultancy’s survey of CRO executives (in-house and agency) reported that about 42% of in-house respondents used website personalization, as did 66% of agency respondents. Dynamic Yield’s 2019 Maturity Report reported that 44% of companies were using “basic” on-site personalization with “limited segmentation.”
When CXL surveyed CRO professionals for its 2017 report, 55% of respondents reported that they used some form of website personalization. In the latest CXL report, respondents scored personalization a 3.4 on a 1–5 scale regarding its “usefulness” as a CRO method.
5. Learnings from experiments without lifts aren’t free.
In our sample, “winning” experiments—defined as all statistically significant experiments that increased the conversion rate—produced an average conversion rate lift of 61%.
Experiments with no wins—just learnings—can negatively impact the conversion rate. Those experiments, on average, caused a 26% decrease in the conversion rate.
We all love to say that there’s no losing, only “learning,” but it’s important to acknowledge that even learnings from non-winning experiments come at a cost.
With roughly 2.45 variations per experiment, every experiment has around an 85% chance of decreasing the conversion rate during the testing period (by around 10% of the existing conversion rate).
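As a back-of-envelope illustration using the averages above (our sample's figures, not universal constants), the expected dip while a test runs is modest next to the payoff of an eventual winner:

```python
# Figures reported above -- averages from our sample, not universal constants.
p_dip_during_test = 0.85   # chance an experiment hurts conversions while running
avg_dip = -0.10            # typical relative dip during the testing period
avg_winning_lift = 0.61    # average lift from significant winning experiments

expected_testing_cost = p_dip_during_test * avg_dip
print(f"Expected dip while testing: {expected_testing_cost:.1%}")  # -8.5%
print(f"Average winner's lift:      {avg_winning_lift:+.0%}")      # +61%
```

The asymmetry is the point: a temporary dip during testing is the price of admission, but a single real winner pays it back many times over once deployed.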
Businesses need to archive and learn from all their experiments. According to the CXL report, about 80% of companies archive their results, and 36.6% use specific archiving tools. These are strong indicators that CROs are refueling their experimentation programs with learnings from past efforts.
But while tracking results and documenting learnings can improve a testing program in the long run, there’s urgency to learn from failed experiments and implement successes quickly.
There’s also a need to research and plan test ideas well so that experiments have a higher likelihood of success. The ResearchXL model is a great way to come up with data-backed test ideas that are more likely to win.
While our research helped us establish some industry benchmarks, a few of our findings hardly surprised us (for example, the popularity of A/B tests).
But what did surprise us was that so few customers use personalization. We expected more businesses to be progressive on that front since the feature is available in all our plans and doesn’t require massive traffic volumes. As noted earlier, better data management may make personalization easier for companies to execute.
Other than that, we view the setup of multiple goals as a positive sign—testers want to dig deeper into how their experiments perform to maximize learnings and, of course, wins.
Why is it that some books become bestsellers and others hardly sell 100 copies? Why do you read some books with passion and interest but can’t get past the first 10 pages of others? What’s the difference?
It’s simple: word choice. The words you use—and the order in which you use them—make all the difference when it comes to crafting sales copy that wins sales. Whether it’s books or websites, words matter, so pick yours carefully.
As Mark Twain said, “The difference between the right word and the almost-right word is the difference between lightning and a lightning bug.”
Here are seven principles of effective sales copywriting.
1. Know who you’re talking to.
Look at the three pictures below. A skater dude, a busy mom, and a backpacker. If you’re writing sales copy for a product, you should always talk to a specific person.
You should talk differently to each of the people below—no-brainer, right? Still, most people try to write copy that works for everybody, searching for a common denominator among all the potential buyers.
Create a customer persona. Describe this person. Give them a name. Imagine what this person is like, how they spend their days, and what their key issues are. Your sales copy will be much better if you write it with a specific person in mind.
If you need some more help with the process of creating your persona, check out these articles:
2. Write like you talk.
Don’t forget you’re dealing with people. Even if you sell B2B products, there’s always a person with a name and an identity reading your copy and making decisions.
If you know this, then why are you writing business jargon? Forget buzzwords (“social media management system”) and nonsense that doesn’t mean anything (“flexible solutions”). Say it as it is.
Use the “friend test.” Read your copy, and if you spot a sentence you wouldn’t use in a conversation with your friend, change it.
Human relationships are about communicating. Business jargon should be banished in favour of simple English. Simplicity is a sign of truth and a criterion of beauty. Complexity can be a way of hiding the truth.
– Helena Rubinstein, CEO, www.labgroup.com
3. Work hard to create a compelling headline.
People don’t read; they skim. The main thing they do read is the headline, so make it good. If the headline doesn’t capture their attention and make them interested to read further, the rest of the copy doesn’t matter.
On the average, five times as many people read the headline as read the body copy. When you have written your headline, you have spent eighty cents out of your dollar.
– David Ogilvy, ad guru
Questions to think about while coming up with a great headline:
What does your prospect care about the most?
What’s their biggest problem?
What’s their biggest goal or dream?
How can you help them achieve it or solve it?
The best headlines communicate a direct benefit. It’s hard to know off the bat which headline will work best. Test them.
4. Don’t make them think.
Thinking is hard. Most people don’t want to do it.
They look at your copy and want to understand what you’re offering. If it’s not obvious in the first few seconds, they’ll move on.
Your main headline might be benefit-oriented, but, underneath it, describe in 2–3 lines:
5. AVOID ALL CAPS AND DON’T USE EXCLAMATION MARKS!!!
There are no good reasons to put your text in all-capital letters. Putting a lot of words in all caps or bold slows down reading, comprehension, and interest.
Lower-case letters have more shape differences than capital letters. Text in lower case is recognized faster than all caps.
Also, using more than one exclamation mark in a row just shows that you’re 12 years old. Nobody wants your stuff more because you add exclamation marks. Au contraire.
6. Readability matters.
If you want people to read your text, make it readable. The most interesting copy in the world will go unread if the readability is poor.
Key things to improve readability:
Font size: minimum 14px, preferably 16px;
Line height: 24px;
New paragraph every 3–4 lines (empty line between paragraphs);
Use sub-headlines as much as you can (at least after every two or three paragraphs);
Use images to break text apart. People read more if patterns are broken.
Line width: max 600px. If your lines are too long, people won’t read them.
Use dark text on a light background (ideally black text on white background).
7. Sales copy should be as long as necessary.
Tests have shown that 79% of people scan rather than read. However, 16% read everything. Those 16% are your target group—the most interested people.
If people aren’t interested in what you are selling, it doesn’t matter how long or short your sales copy is. If they are interested, give them as much information as possible. A study by the International Data Corporation (IDC) showed that 50% of uncompleted purchases were due to lack of information.
Your readers can always skip parts of your sales copy and click “Buy” once they have the information they need. But if they read through the whole thing and they’re still not convinced or have questions, you have a problem.
Great sales copy is essential—and elusive. The best copy ditches the corporate jargon and speaks directly to customers.
If you can remember these seven principles of sales copywriting, you’ll be way ahead of most (and have the sales numbers to prove it):
Lurking beneath every goal are dangerous assumptions. The longer those assumptions remain unexamined, the greater the risk.
– Jake Knapp, Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days
Imagine this scenario. You’re a marketer, and you’ve just launched a marketing campaign that you spent weeks or months building. You checked all your boxes:
You assigned roles and responsibilities.
You kept stakeholders informed along the way.
You activated all the right channels to reach your target segment.
But something is wrong. Hardly any prospects are opening your emails. Almost none are engaging with your ads. The only feedback you are getting is that certain elements on your landing page are broken and, worse, don’t load properly across devices and browsers.
Your boss calls you into their office and asks: “What happened?”
The wrong answers would be:
“I just assumed prospects would open my emails.”
“I assumed the team QA’ed the landing page.”
Instead, the right answer is: “I’m going to find out where my assumptions led me wrong.”
In this post, I’ll walk you through a rigorous project management process to help you optimize your campaign strategy. Taking lessons from agile project management (specifically: sprints), I’ll show you how to build more effective, less assumptive, marketing campaigns.
Brands like 23andMe and Slack have adopted the Google Ventures design sprint because it works. For businesses aligned with the whole “lean startup” movement, the sprint offers a formula to quickly build, launch, and test products before committing too much time and effort to something that might not resonate in the market.
It can be easily applied to marketing because it’s built around making data-driven, research-backed decisions, which are critical to creating winning campaigns.
And even though Jake Knapp explicitly advises not to adapt the Sprint—I’ve done it anyway. Over the years, I’ve made small tweaks to suit the specific goals and needs of a marketing team and marketing campaigns.
Sprints traditionally happen during a five-day timeframe, when product teams set aside everything else they’re working on.
In my world, I don’t do that. Sometimes, our team will spend one week on a marketing sprint; other times it might take six. That’s the nature of shipping marketing campaigns—the sprint is a framework, not a mandate, for guiding our work.
Map. Set your targets and objectives for what you want to accomplish based on feedback and research.
Sketch. Ideate and pitch ideas for achieving your marketing goal.
Decide. Vote and decide on your campaign content, channels, and tactics based on ideas pitched in Phase 2.
Prototype. Build just enough of the campaign to get it ready for testing.
Test. Get feedback on every aspect of the campaign so you can go back, make changes, and launch.
Now let’s look at what’s done in each phase to achieve those goals.
Phase 1: Map
The Map phase is all about research, collecting data, and understanding the problem to set clear, measurable, marketing targets.
The first thing you need to know is your objective. Is this a lead-gen campaign or a campaign to push a new affiliate program? Is it a nurture track to convert leads into paying customers or a brand play to increase awareness?
Once you understand the goal, you can move on to the research—how you’ll achieve it and the specific targets and metrics that indicate “success.”
There are so many ways you can collect data and do research. (In fact, CXL Agency has their own rigorous research process.) As a guide, I’d recommend a mix of primary and secondary research to inform how you set your targets, such as:
For example, for a recent campaign to launch new pricing at CXL Institute, we conducted a series of industry interviews with pricing experts, ran an average revenue per user (ARPU) projection analysis for new plans, and went through a suite of usability tests on pricing mockup designs and copy.
One piece of feedback from usability tests was that users were confused by the “Pay once” option in our pricing. Users didn’t understand if it was an annual payment or if they would keep the product for life.
The feedback triggered us to add a small disclaimer to our pricing block that made it clear that they would keep the course forever:
Phase 2: Sketch
Once you’ve collected all your research and set your targets, you’re ready to jump into the Sketch phase. This is where you put all your creative marketing ideas to work.
In the first portion of this phase, each team member analyzes the research and comes prepped to an “ideation” pitch meeting with a fully baked campaign plan. Channels, content, messaging—it should all be there (or at least a skeleton of it).
A fully baked campaign prevents the meeting from turning into a mishmash of half-baked ideas that sound cool but might not make sense for the project. Often, asking for a full campaign plan leads team members to think of more complex, interesting ways of solving the problem.
It also helps marketers think more about connecting the dots across channels and assets since they’ve had to plan for it upfront. Here’s an example of what a campaign ideation pitch looks like, from a campaign I ran in my previous role at Unbounce:
Phase 3: Decide
In the Decide phase, the team votes on the pitched campaign ideas and assigns ownership using the DACI framework:
Driver. The person(s) responsible for leading the project and corralling all stakeholders.
Approver. The person who ultimately makes the decision.
Contributor. The person(s) with subject-matter expertise.
Informed. The person(s) who’ll be kept in the loop on how decisions are made.
In this phase, the DACI framework is especially handy because you need one person—the Approver—to decide how all the voted ideas come together into a plan.
The Approver then comes back to the team and presents the plan to move forward, assigns roles to the Drivers, and pushes the campaign into the next phase.
Phase 4: Prototype
If you remember one thing about the Prototype phase, it should be this: Build just enough. Knapp outlines a four-step list of what he calls the “Prototype Mindset” in his book, and it goes as follows:
You can prototype anything.
Prototypes are disposable.
Build just enough to learn, but not more.
The prototype must appear real.
The prototype mindset from Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days.
As a (self-aware) perfectionist, I can’t tell you how many times I’ve been in the Prototype phase and wanted to just spend a little extra time polishing a landing page, ad, or email. Resist the urge.
Prototype of a fake ad for launching Unbounce popups, created with stock imagery and quickly mocked up in Photoshop.
The whole point of this phase is to build only what you need to get an authentic answer from a potential user in the next phase: Test.
Your aim is to move through the Prototype phase quickly so that you can actually learn (and improve) based on real feedback. Plus, the more time you waste making something perfect, the more frustrating it’s going to be when you have to change it later.
Phase 5: Test
Congrats! You’ve made it to the final phase—where the real magic happens. During the test phase, you get user feedback on the prototypes you’ve built.
First, conduct a series of interviews (ideally with your customers). According to Knapp, conducting at least five interviews during a sprint is enough to get real insight. Any fewer and you might be operating on false information.
Real screenshot of prototype testing.
Ask all interviewees the same questions. You’re looking to discover:
Are they interacting with the prototype the way you intend? For example, if you want them to hover over a tool-tip on your landing page to discover more info, are they doing that?
Is their reaction positive or negative? For example, is your messaging resonating with them? If you added a joke to your email copy, did they get it? Did they laugh?
Are they motivated to complete the action? For example, are they finding and clicking the call to action? Is the offer something they seem enticed by?
After you conduct interviews, transform feedback into “How might we” statements. Originally developed at Procter & Gamble in the 1970s, the “How might we” technique rephrases every piece of feedback (positive, negative, or neutral) as a question that incites action.
For example, say you’re testing an email in a nurture campaign to convert leads into customers. A piece of feedback you might receive is: “Get to the point faster, I skim emails.”
Your role is to transform that feedback into a question: “How might we accommodate people who skim emails?”
The benefit of this technique is that it doesn’t immediately present a solution, empowering you and your team to come up with the best answer. For example, you could solve for skim readers in a few ways:
Reduce the amount of copy in the email.
Use bolding and bulleting to break it up and call attention to the main points.
Once you’ve transformed your feedback into action items, you need to prioritize. Often, you’ll get a ton of feedback, and you need to decide which feedback to put into action. Sometimes, you might not have enough time to do it all, and that’s okay.
Prioritizing feedback should be based on:
How important it is to the campaign’s success. If something’s broken, you need to fix it.
How often that piece of feedback came up. If everyone said they didn’t understand the headline, you probably need to rewrite it.
An example feedback prioritization sheet from an Unbounce campaign.
From there, you’re armed and ready with a tested campaign that you can remix, fix, and—most importantly—launch!
Sprints are an effective and helpful project management process that you can apply to any and every marketing campaign. They ensure your work is data-driven and research-backed.
Ideally, sprints aren’t a one-and-done experience, either. A sprint lets you observe a campaign in the wild and, if it’s not hitting your targets, make tweaks and changes until it does.
If you want to learn more project management tools, techniques, and processes, check out my course at CXL Institute on project management for marketers, launching August 5. I’ll be covering the sprint process further, as well as walking you through how to iterate from annual to quarterly, monthly, and weekly planning so that your marketing team is set up for success.
“Getting great results” and “creating great reports” are very different skill sets. If you’re like most marketers, you’d rather sharpen your subject-matter expertise than spend time in PowerPoint.
The result is that reporting becomes an afterthought rather than an opportunity—a “necessary evil” with imperfect solutions:
Manual reporting is too time-consuming, but it’s been the only way to report on the right platforms with the right analysis.
Automated dashboard reports save time but bring limited functionality and don’t help clients understand the story behind the scorecards.
Fortunately, Google Data Studio can automate the time-intensive tasks of data compilation and report building without sacrificing important context and insights.
While Data Studio gives you an ideal platform for report creation, there’s a final step to transform data into a story that drives your clients to decision and action (such as pivoting strategy, approving new resources, or simply choosing to retain your services). That step is not so easily automated.
So before you start building your Data Studio report, make sure you know what to include—and what to leave out—to create a compelling client report.
Clients need stories, not just data
In data-driven industries, it’s easy to imagine that we can “let the data decide,” but that’s actually not the function of data. It’s our job to help our clients interpret the data so they can approve recommendations and take action.
While dashboards and data snapshots bring value to marketers and analysts, they’re usually insufficient for clients. A Deloitte Canada study revealed that 82% of CMOs surveyed felt unqualified to interpret consumer analytics data.
People who are receiving the summarized snapshot top-lined have zero capacity to understand the complexity, will never actually do analysis and hence are in no position to know what to do with the summarized snapshot they see.
To build useful reports, we need to move beyond simply summarizing performance with quick charts. We need to help clients understand the story.
The benefits of data storytelling
If “storytelling with data” sounds both vague and intimidating, you’re not alone. Storytelling evokes ideas of creativity and even fiction, a sharp contrast to the left-brain data and analysis tools we’re accustomed to using.
Telling a story in a report doesn’t require a cast of characters, anecdotes, or plotlines. Essentially, you need to follow the same UX advice you’ve been giving your clients for decades: don’t make them think.
Your readers need more than features (facts and figures) to take action. Story provides context so that they understand where to focus their attention. Storytelling also heightens emotions, which is vital because decision-making is driven by emotions, not logic.
Make your data storytelling emotional
The words emotion and motivation are derived from similar Latin roots. The more your clients can feel something, the more motivated they’ll be to act.
Marketers may be tempted to highlight wins and gloss over losses in reports to nudge their clients to feel joy (or at least satisfaction). But this strategy can backfire.
Your clients need to know about what’s not going well—even more than they need to know about what’s working. Due to what’s known as attentional bias, we’re wired to pay attention to perceived risk and generally to ignore the status quo.
People also respond differently to winning and losing, and losses are twice as powerful compared to equivalent gains. When your clients can see and experience a loss, you place them in a highly motivated state to take necessary action and, if necessary, change course.
To illustrate, let’s say you were responsible for driving 4,000 net new email subscribers each month. You’re hitting the goal, but the steady increase in list size isn’t growing revenue—a fact that’s been overlooked and gone unreported.
By drawing attention to the discrepancy with a visualization, you can drive a discussion that wouldn’t be possible if you focused only on list growth.
With this new (alarming) information, you can revisit targets, value per subscriber, or changes needed for lead nurturing and sales.
With client work, there’s always a temptation to default to “everything’s sunny all the time” reporting. But those reports do a disservice to the client and the agency, even if they are more comfortable to deliver.
Transparency about actual market conditions, threats, and challenges is a catalyst for real improvement. If they go unexamined, nothing changes.
3 key elements of data stories
Analytics evangelist Brent Dykes says that storytelling with data needs three elements to drive change: data, visuals, and narrative.
When all three elements work together in your report, you reduce the cognitive load placed on clients, helping them easily identify and process the story. Reports that showcase only raw data are insufficient but are still used surprisingly often.
Adding charts and graphs can help with comprehension, especially when they employ good design principles. Our brains process high-contrast images subconsciously (before we can make sense of the data). These visual properties, known as preattentive attributes, include:
When you apply preattentive attributes to chart creation, you help your reader find the story more clearly and quickly. Notice the impact of adjusting weight and color in these Data Studio line charts:
Narrative is the final key element for story, but it’s often missing from reports—making it difficult for clients to understand and engage with the data. Narrative provides context for your readers; it’s the answer to the question, “What am I looking at?”
Journalists begin their stories with fast facts: who, what, where, when, and why. This style, known as the inverted pyramid structure, puts the most important information first and gives supporting details further in the story.
Readers are accustomed to this style, and they assume that the earliest information is the most important.
On a macro level, the report should begin with account performance before diving into supporting details.
On a micro level, each section or chart should lead with priority metrics, or KPIs, followed by secondary metrics.
Many reporting tools lead with secondary metrics, which measure “what must not be broken” (instead of “what needs to be fixed”). That focus can encourage clients to overweight metrics that you shouldn’t optimize. Always start with the big picture.
What your reports should contain
Before we explore what specific information “clients” need from report deliverables, we have to address the fact that businesses, roles, and people aren’t all the same. A small business owner has different priorities than a CMO. Some clients want to see all the data, others just want the phone to ring.
Keep your specific client in mind as you build out your report, because details that satiate one person can overwhelm another. That said, there are three universal guidelines that will make all your reporting better, no matter the end user.
Your report should include:
1. What happened
Your clients hired you to solve problems, so your report should address those problems, and the progress made in solving them. This starts with basic benchmarks:
What are the KPIs, and were targets met?
How are we performing compared to previous periods?
As mentioned above, it's not the job of a report to showcase only the wins. Be consistent with your key metrics (don't cherry-pick flattering stats), and make it as easy as possible for readers to interpret the data you're sharing with them.
2. Why it happened
Once your clients know what happened, they’ll want to know why. Sometimes, changes are due simply to natural variance, but you’ll want to document causal factors:
Internal changes. Note if there were changes to marketing efforts (whether on- or offline), page content, site speed, availability of inventory, offers or promotions, or pricing. Also document if tracking changed or went down.
Your team’s involvement. Show progress made on tasks, including implementation and production. Note how your team helped accomplish wins or mitigate losses.
Just as it can be hard for novices to tease out benefits and outcomes in product copy, it can be challenging to write recommendations in reporting (e.g. “Your tracking is broken. So as a next step…we recommend you fix it.”)
The purpose of next steps isn’t necessarily to introduce groundbreaking ideas or plans but to create a clear path forward. What may feel redundant or obvious to you can provide needed specificity to your client that increases the likelihood they’ll take action. If performance isn’t meeting expectations, it’s especially important to provide recommendations that address the shortcoming.
When writing next steps, use the active voice and assign responsibility wherever possible. “This discovery should be investigated further” does not help your clients know what to do, or who should do it. “Client to provide updated content roadmap by August 15” does.
Now that you know how to tell a story and what to include in your report, it’s time to create it in Google Data Studio.
Create your report in Google Data Studio
1. Start a new report > Choose a template
After logging in to Data Studio with a Google account, your first step will be to create a new report.
As a reminder, don’t be fooled by the apparent convenience of templates; Even the best ones are still tools, not client deliverables. You’ll need to spend time strategically customizing whichever template you choose to transform it from a one-size-fits-all dashboard into a report that provides value for your client.
2. Connect your data sources
Data Studio makes it easy to connect directly to your data source(s). You can currently select from 18 Google connectors built and supported by Data Studio, such as Google Analytics, Google Ads, and YouTube Analytics. You can also upload your data via CSV or access it through Google Sheets or BigQuery.
Here’s a quick walk through of how to add a data source:
If you run into limitations accessing data sets or fields through Google products, you can choose from 141 partner connectors and 22 open-source connectors, with more connections being regularly added.
Because you can connect to multiple sources in a single report, you don’t need to prepare or curate your data sources before connecting. Individual charts in Data Studio each use a single data source by default, but you can use shared values (join keys) to create blended data of up to four other sources.
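Conceptually, a blend behaves like a left join on the shared dimension. The sketch below only illustrates that idea in plain Python (Data Studio does all of this in its UI, not in code), and the field names and values are hypothetical:

```python
# Illustrative only: Data Studio blends data in its UI, but the underlying
# operation resembles a left join on a shared "join key" dimension.
# The sources and fields below (date, sessions, cost) are hypothetical.

analytics = [  # e.g. rows from a Google Analytics source
    {"date": "2019-06-01", "sessions": 120},
    {"date": "2019-06-02", "sessions": 95},
]
ads = [  # e.g. rows from a Google Ads source
    {"date": "2019-06-01", "cost": 40.0},
]

def blend(left, right, key):
    """Left-join two lists of rows on a shared join key."""
    index = {row[key]: row for row in right}
    return [{**row, **index.get(row[key], {})} for row in left]

blended = blend(analytics, ads, key="date")
# Rows with no match on the join key keep only their left-side fields.
```

As with a left join, every row from the primary source survives the blend, which is why a missing match shows up as an empty metric rather than a dropped row.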
Once your data sources are connected, you can begin formatting the presentation of your report.
3. Create impactful visualizations
Visualizations can increase your reader’s understanding of the data on both conscious and subconscious levels. The more clarity your charts provide, the easier the story is to interpret.
Choose the best chart for your data
Adding a chart in Data Studio is an easy dropdown. Selecting the right chart visualization takes some thought.
Be sure that each chart adds meaning to your report; don’t compare metrics or create segments just because you can. Your clients will seek patterns and meaning even where they don’t exist, and it’s far easier to omit useless charts than to explain why a perceived trend is actually just noise.
That said, you can create multiple charts to increase comprehension. By grouping distinct charts (rather than relying on viewer-enabled date-range or data controls), you clarify relationships, composition, and trends without requiring clients to conduct discovery and draw their own conclusions.
Following the inverted pyramid guidelines, you can show high-level, aggregate performance with one chart, and break out performance with another. Or, use side-by-side time series charts to “zoom in” on recent performance and “zoom out” on trends over time, giving your client at-a-glance context.
If you’re unsure of which visualization to use, chart selection tools (like the one below from chartlr) can help you choose the best chart types for your data and objectives.
Enhance charts with preattentive attributes
When charts are busy or cluttered, add contrast to clarify the story. Edward Tufte’s Data-Ink ratio suggests minimizing the amount of non-essential “ink” used in data presentation.
Data Studio makes it easy to adjust color, weight, and scale (as well as grids and axes) to create contrast and emphasis. This is handled in Data Studio’s Style panel, where each metric series is individually controlled:
Data Studio also has some built-in visualizations to help with quick data interpretation, such as the red and green time comparison arrows found in scorecards and tables.
Be sure to review whether the colors correlate with positive or negative change for the metric. “Green is up” works great for site visits, but a CPC increase with a green arrow is confusing for readers. You can override the default settings in the Style panel.
Help your readers make sense of data and visualizations by clarifying relationships and including background information they may not immediately know or remember.
Establish hierarchy and organization
As with any document, a consistent layout and hierarchy in your report orients your readers. Data Studio is not a word processor, and it doesn’t apply style sheets or standardized formatting the same way.
You can control layout and theme properties to provide a consistent look and feel. As you create (or duplicate) pages on the fly, pay attention to heading areas, positioning, text, and font size.
You can apply elements to all pages by selecting them and making them report-level.
Group thematically similar charts and data together. It helps to have one main idea per page (or slide) to reduce the amount of information your reader processes at one time.
Leverage headings and microcopy
Headings are good; better headings are better. Again, your goal with headings is more to orient your reader than to repeat what they’re about to read.
Microcopy gives your readers additional context about a page element, and it’s extremely easy to add to your Data Studio report.
You can use microcopy to spell out acronyms, provide definitions and annotations, cite targets and objectives, or otherwise reduce friction for your clients as they work to understand the data.
In this Data Studio template screenshot, the heading, microcopy, and metric labels all repeat each other. This is fine for a template but would add little value for clients.
With just a bit of customization, each text element serves a purpose. (Note that the metrics have also been re-ordered to lead with KPIs.)
Add context with chart deep dives
Context and text are not synonyms; context doesn’t have to be lengthy sentences—and doesn’t even need to be words at all.
Best practices are starting points: If you have no data, start with these. They are not what you should end up with, but they’re often where optimization begins. That’s an important distinction.
This post applies Jakob Nielsen's 10 Usability Heuristics to B2B websites that focus on lead generation (as well as “high consideration” B2C sites that lack any transactional functionality).
Usability heuristics are “best practices” for user interface design. When applied to your site, these tenets help reduce friction and keep buyers focused on your message—rather than distracting or confusing them with a deficient or incomplete interface.
B2B websites often have to explain a lot to get buyers to convert. The higher the value of what you’re selling, the higher the inherent friction; therefore, the more questions you’ll need to answer throughout your site.
In these situations, information architecture is much more complex. Some B2B firms are notorious for ignoring this reality because, they argue, “we don't actually sell anything on our website.” That argument rarely holds up.
When redesigning a website, especially if it’s a radical redesign, these heuristics are a north star—reliable criteria to decide between alternative page designs, functions, or ways to layer your information. For each of Nielsen’s 10 heuristics, we provide commentary and examples for B2B site designs.
1. Make your system status highly visible
The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Translation for B2B websites: Always tell your buyers where they are when navigating your site. You can achieve this through the use of:
Breadcrumbs. Breadcrumb navigation works like a GPS, telling your buyers where they are on your website at all times. Plus, your buyers have a path laid out that tells them how they got there. Use breadcrumb navigation on your site, whether that's location-, attribution-, or path-based.
Page headers. The page header should resemble the copy of the navigation items or links. This is a good practice not only for SEO but also user experience. If the page header matches what the user clicked, the buyer will be reassured.
Highlighting selected menu options. When you click on a navigation item, keep it highlighted, bolded, or underlined, so that your buyer gets instant feedback about menu options.
Progress bars. Include page-load indicators. If buyers are trying to load a calculator widget or process a request, a progress bar or notification of some sort lets them know what's happening.
Thank you pages. Thank you pages are great indicators of current status. If your buyer downloads an ebook or signs up for a webinar, the Thank You page confirms the action that was just taken.
If you skip these elements, your site will confuse buyers, who will wonder where they are—a completely unnecessary friction point. Make your navigation clear.
The Berkshire company does it right, providing their buyers feedback about exactly where they are while navigating their site, and the breadcrumbs tell them how they got there:
In the case of MSC Industrial Supply Company, buyers who want to view their specialty brochures see a rotating progress wheel that indicates what percentage of the brochure is loaded:
Above all, remember that good websites must answer buyers’ questions before they think to ask them. If buyers get distracted trying to navigate your website or are left wondering if something is happening after a click, they’ll get frustrated.
Take this homepage as an example:
All good there, right?
Well, now look at this interior page and try to guess which section of the site was clicked, or if you can tell where in the site you are:
Nope. You have no idea where you are.
Here’s an example of how a thank you page can act as a system status confirmation:
2. Match your system to the real world
The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
Translation for B2B websites: Use phrases and words that your buyers are already thinking about. Eliminate jargon. Your buyers must come across words, phrases, and concepts that are familiar to them.
This was intentional and based on research. In sharp contrast, look at the language from their homepage before their website redesign:
Their clients didn’t use terms such as:
“Our resources, products and proven strategies”;
“attain the most of”;
“…our employees have been expertly applying their wealth of experience to our clients”;
“a diverse set of client experiences…”
You get my point.
3. Allow user control and freedom
Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Translation for B2B websites: Eliminate anything that takes control out of the user's hands. Here are three examples in which this heuristic is commonly violated in web design:
We’ve all visited websites where a pop-up window suddenly appeared and asked us to join an email list or take a survey. While intrusive pop-up windows like these are annoying (pop-ups do work when done right), they’re worse if your buyer can’t reject them.
If you want to collect feedback, give users 100% control. Let them reject your offer. This will actually increase the quality of your surveys: Those who opt in are more likely to be honest and genuine.
Grainger is an industrial supply company. On their website, buyers clearly see the helpful “No Thanks” microcopy, right underneath the big, red “Join” call-to-action button. Your buyers will appreciate this thoughtful feature because it adds to their user experience.
On the Design&Function blog, a slide-in call to action shows up only after users begin scrolling. The ebook offer is also collapsible, so the buyer can reduce its visibility or come back to it later:
Another pet peeve for buyers is a website that autoplays a video. This can be a nuisance, especially if it defaults to sound on. Don't assume what your site users will want to do; let them decide when (if ever) to play your video. Video content should still be supplemental to text information.
Here’s how Square does it:
Automatic sliding banners are another loss of control that causes anxiety and frustration. Beyond the frustration itself, CXL research has demonstrated that automatic carousels don't work.
Instead of using this distracting element, layer information in a way that makes it easy for buyers to discover and explore with full control over their experience.
Here’s an example of a better way. WSI franchise uses tabs to walk the user through related content:
4. Use consistency and standards
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Translation for B2B websites: The last thing you should subject your buyers to is a sense of confusion. They shouldn’t wonder if words, situations, or actions really mean the same thing. Websites are not puzzles. Create fluid experiences that eliminate guesswork.
Massey Ferguson’s website, a leader in tractors and global harvesting, exemplifies consistency. On every page, whether the homepage or a product page, buyers see the same use of white space, a clean layout, and a well-organized information hierarchy.
This keeps buyers calm and makes it easy to scan the site quickly for important information. Consistency and conventions make your website “learnable,” and that’s a good thing—it will appear easier to use.
Another example: Throughout the Sprint Business site, you see the same elements in the navigation bar that make it easy for buyers to know where they are. In addition, the same drop-down menu appears in the layered navigation from every page accessed via navigation bar.
Compare that to Georgia-Pacific. They’re a huge corporation, yet the experience on related brands is different—navigation styles and standards change. This isn’t a unified experience and may cause confusion for some B2B buyers.
Now, one could argue that because the company is so large and the markets so varied, this is a lesser issue than if it occurred on the same site (or with the same pool of buyers). In the worst-case scenario, these designs would:
Force buyers to adapt to different interfaces on a different section or microsite;
Cause some buyers to think that they had actually left your main site.
5. Prevent errors
Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Translation for B2B websites: The best defense against errors is to avoid them in the first place. When you design carefully and mindfully for the user experience, errors don’t pop up much. This may require several iterations of usability testing and improvements to your site.
Here are five errors that are easily preventable:
Typing the wrong info in a web form
This is accentuated when, in an effort to make forms clean and sleek, field names are placed inside the fields themselves. Once the buyer clicks on the field, they need to remember what’s supposed to be there.
Make it easy on your buyers. Don't stretch their short-term memory; put field names outside of the form fields.
Causing users to mistakenly perceive a “wrong” click
When a call-to-action button doesn't resemble landing page language (has no “scent”), it causes doubt or frustration. Take this example:
The landing page has nothing to do with the expectation the call to action created; hence, it’s easy for the user to think they made an error.
Let’s look at Simply Business’ flow to invite users to request insurance quotes. First, this is the call-to-action in their homepage:
And this is the landing page where you land:
There are good things and bad things here. Two stand out:
Good: There’s helpful explainer text for users if they have a question about the form.
Bad: The headline doesn’t provide a message match to the call to action on the homepage. (It doesn’t provide any useful information at all.)
Failing to include autocomplete for site search
Search boxes are another place where users make common mistakes. The “auto recommendation” feature can work wonders for users. Take Google, for example. Every time you enter a short- or long-tail keyword, Autocomplete suggests matching searches.
Links to blog posts or long-form resources increase their search visibility and build awareness. They also help sites rank for bottom-of-funnel terms—a rising tide lifts all boats.
Some content marketers have it “easy,” working in highly visual industries (e.g. food, fashion) with wide appeal. That simplifies content creation and link building compared to, say, trying to promote niche B2B software.
Given the potential benefits and challenges in SaaS, who’s doing it well? To find out, I ran a study to benchmark content marketing performance for 500 SaaS companies.
While the initial research covered many elements—and focused mostly on numbers—this article reveals the strategies that led to successful link building.
Data and methodology
My initial research found that, on average, the top-performing articles by major SaaS companies generated backlinks from just 9 referring domains.
In this post, I focus on a subset of that data—55 articles that significantly outperformed the rest of the field. These articles generated three times the average number of backlinks (at least 27 referring domains).
I also filtered posts to include only those that contributed at least 5% of the site's total links to pages other than the homepage.
This filtering helps control for site size—a post on NYTimes.com that earns 100 links isn’t noteworthy, whereas one on a personal blog that earns 20 may be exceptional.
Excluding the homepage is a quick way to remove a major, non-content outlier that would otherwise skew the total link count.
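The filtering criteria above can be sketched in a few lines of Python. The thresholds come from the article; the sample rows, field names, and numbers are hypothetical:

```python
# Sketch of the study's filtering criteria. Thresholds are from the text;
# the sample rows and field names below are hypothetical.
AVG_REFERRING_DOMAINS = 9        # study-wide average for top articles
MIN_REFERRING_DOMAINS = 27       # 3x the average
MIN_SHARE_OF_SITE_LINKS = 0.05   # at least 5% of non-homepage links

articles = [
    {"url": "/study", "referring_domains": 39, "site_non_home_links": 100},
    {"url": "/tips", "referring_domains": 12, "site_non_home_links": 100},
]

def outperformers(rows):
    """Keep only posts that clear both the absolute and relative bars."""
    return [
        r for r in rows
        if r["referring_domains"] >= MIN_REFERRING_DOMAINS
        and r["referring_domains"] / r["site_non_home_links"]
            >= MIN_SHARE_OF_SITE_LINKS
    ]

top = outperformers(articles)  # keeps only "/study" in this sample
```

The relative (5%) bar is what normalizes for site size: a post needs to be a meaningful share of its own site's non-homepage links, not just large in absolute terms.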
By assessing the strategies behind those 55 articles, I’ve identified five shared features. If you’re creating content for your SaaS company, these are the themes and practical ideas to add to your content calendar.
5 ways that SaaS content earns links
1. Become a point of reference.
Eight of the articles in this list (15%) are original research. There’s a mix of infographics and in-depth studies, all of which generate lots of inbound links.
According to a 2017 study, research is the most efficient type of content for earning backlinks. A 2018 study found that 74% of marketers who conducted original research saw increased website traffic as a result (though only 49% reported generating backlinks).
This hybrid blog post and infographic presents the results of Keeper’s study that assessed the most common passwords in 2016. (They sourced their data from recent data breaches.) The post now accounts for 39% of all referring domains to pages other than the homepage.
Why did it succeed?
Smartly sourced, hard-to-find data. Passwords, of course, are usually private. The study reveals rarely unearthed data that people are naturally curious about. Using data from a data breach as a source was also a clever way to conduct research quickly and cheaply.
Ranked answers. Humans love rankings. A random list of “common passwords” wouldn't have had the same impact as the rank-ordered version. Does it matter which is fourth versus sixth? Nope. But it's a more engaging way to frame the content.
What could they have done better?
The visual presentation isn’t impressive. The main report is a clickable link to a PDF, which doesn’t make much sense. For one, many links and visits may go to the PDF version (rather than the HTML version, which includes more company info, navigation, calls to action, etc.). Second, a PNG would’ve made the pseudo-infographic embeddable on other sites to help earn even more links and referral traffic.
There’s no segmentation of data. In all likelihood, the data breach contained more than just passwords—it probably contained emails and, perhaps, addresses, too. Additional variables would enable additional reports (i.e. more pages to link to) or more targeted versions to increase the content’s appeal (e.g. “What are the most common passwords in Canada?”).
This is another example of a hybrid blog post and infographic. It presents the results of PROMO’s study looking at the video habits of 500 people across all age ranges, and the post now accounts for 10% of all referring domains to pages other than the homepage.
Why did it succeed?
Visuals for each statistic. In addition to the large infographic at the start of the article, PROMO created visuals for each statistic that can easily be embedded on other sites. This means there are three different ways to link to this article: quoting statistics, embedding the infographic, or embedding images for certain statistics.
Video is a hot topic. For marketers, video is having its moment. This Google Trends chart shows how searches for video marketing have increased over the past five years, and the arrow indicates when this article was published—right before peak interest.
What could they have done better?
There’s no segmentation of data. This article looked at video viewing habits of people globally, from teenagers to seniors. Breaking down this data into smaller subsets might reveal additional insights (e.g. how video habits vary by age, gender, or region).
No extra statistics. All of the stats in the article are covered in the infographic, which is shown at the start of the post. While the article goes into more detail about each statistic, everything’s been covered before you get to the bulk of the copy. For users, there’s little motivation to spend time reading the article.
Takeaway: You can scale your research project to your budget.
The research studies I looked at varied in scope: PROMO assessed 500 people's video habits; Keeper analyzed 10 million passwords. As another example, Gong used AI to analyze 519,000 discovery calls to understand what drives success. All of these projects generated backlinks.
You don’t need an expansive study to generate links. Even small studies can build credibility for your company as an authoritative source on a topic.
The main drawback of research studies? They age. Over time, statistics become less relevant; the links you earn in 2019 could go to another, more recent study in 2020. If you invest in original research, consider a topic that:
Has enduring interest;
Is feasible to update annually.
2. Share others’ research.
An additional seven articles (13%) share other people’s research as infographics, statistical roundups, or text commentary.
You might assume that most people would link directly to the original research, but this shows that, for the purpose of earning links, aggregating research from multiple sources may be just as effective as doing your own.
This article collates 22 statistics about mobile payments from 13 sources, covering security, users, market share, and global statistics. It accounts for 7% of all referring domains pointing to pages other than the homepage.
Why did it succeed?
Built credibility by referencing multiple studies. By bringing together relevant statistics from a number of sites, BlueSnap delivered a more credible resource—you don’t need to scour the web to find out if a single research study is corroborated (or debunked) by other studies.
Took advantage of human laziness. If you're writing about this topic, it's much easier to link straight to this list three or four times than to click through to each original source. For example, articles by ConversionFanatics and Fourth Source both reference statistics collated by BlueSnap, linking to this article instead of the original sources.
What could they have done better?
Visual presentation is uninspiring. This article is all text. There are no visual elements to add interest. Illustrating statistics would make them more shareable and be an additional incentive to link to their article instead of the original sources.
Similar to the article above, this one collates 10 statistics about year-end donations from various sources. It accounts for 17% of all referring domains aside from the homepage.
Why did it succeed?
Visuals for each statistic. With a custom image for each statistic, this article looks more like a piece of original research than a list of stats. These visuals can be embedded on other sites, providing another way of linking to this article beyond simple text.
Looks like the original source. Many websites cite Neon not just with a link but in the anchor text as well. Some go so far as to credit Neon (erroneously) as the source of the research:
What could they have done better?
More data. Ten statistics is a good starting point—it delivered lots of links for Neon, but it’s still a small number, and the data is growing older by the minute. (It was originally published in 2016.) Updated, expanded statistics could justify another round of promotion and keep the post current.
Takeaway: No research? No problem.
If you don’t have the resources to conduct original research, aggregating a list of reputable industry statistics might be the next best thing. You can reap all the benefits of original research—without actually doing any.
Statistical roundups are valued resources that build credibility, sometimes at the expense of those who ran the original research studies.
3. Make the news—for better or worse.
My SaaS content marketing study found that PR-style content received twice as much organic traffic as blogs that focused on educational content. That statistic won’t hold true for mom-and-pop shops. But it does work for big companies whose fortunes qualify as newsworthy events.
Some events that earned links were intentional—acquisitions and funding announcements, for example. Others, like security incidents, were less desirable but nonetheless impactful.
A couple of acquisitions showed up in this data set, attracting coverage in tech press and the wider business press. For example, this article is Datorama’s announcement of their acquisition by Salesforce, and it accounts for 21% of referring domains to their site.
Why did it succeed?
Big name acquirer. With Salesforce as your acquirer, it automatically becomes big news in the world of sales and marketing, at least for a while. It helped that Salesforce linked to Datorama’s page in their article announcing the deal, bringing this piece to the attention of their wider audience.
Direct message from the CEO. Make no mistake: This “article” is a press release. But it’s authored and signed by Ran Sarig, Datorama CEO and co-founder, which makes it more interesting and engaging than a dry, anonymous press release. It almost comes off as an “opinion” piece, embedding the CEO’s take on the acquisition within the page and, thus, turning it into a source for journalists reporting on the acquisition.
What could they have done better?
Logos! This is another all-text article, with no featured image in the post or header. A visual that featured both the Datorama and Salesforce logos could have provided a visual way of communicating the acquisition—ideal for sharing on social media or earning image links.
“There’s no such thing as bad press.” Probably not. But, as far as earning links go, maybe so.
This article details a major security incident, covering the scale of the incident and the impact on customers and their data. This was big news and widely linked to—it accounts for more than one third (36%) of all referring domains, other than those pointing to the homepage.
Even if OneLogin wished they didn’t have to publish such an article, doing so gave them some control over the narrative—and earned plenty of links. (It was a good day for their SEO team, at least.)
Why did it “succeed”?
It was national news. This incident made it onto the BBC website. For many companies, a data breach (hopefully) won’t make it into national news.
Ongoing updates kept it relevant. This security incident was reported on May 31, 2017, with updates shared on June 1 and finally on June 8. This meant affected customers (and news outlets) had one page to refer to for up-to-date information, as the incident was resolved.
What could they have done better?
Customize the design of the page. The audience for this page is unique—customers and news readers concerned about the incident. If they wanted to make the most of a bad situation, they could’ve devoted more page space and copy to talking about the company (in a positive light) and offered a more relevant call to action than “Sign up to receive a newsletter.”
Takeaway: Self-promotional content can work.
Not all links come from educational content. It’s okay to write about your company, and a self-promotional focus can bring in tons of links if your company qualifies as newsworthy—or is acquired or connected to someone who is.
It’s tough to argue that “bad news” is an opportunity. But it does earn links. Quite often, those links come from powerful news outlets.
Whether the news is good (funding, acquisitions) or bad (data breach), here are 10 sites with decent domain ratings (60+) that linked back to articles in this subcategory:
Put these sites on your outreach list if you’re making news.
4. Go bigger or better.
The skyscraper technique is a popular strategy for building links. You find popular content about an important (read: high-volume) subject and create a better version.
There are four key ways to improve upon existing content:
Length. If someone shares a list of 20 must-use tools, make a list of 25.
Newness. Has someone published a roundup of top tips for 2018? Update it for 2019.
Design. Is great information languishing on an ugly blog? Share the same information with stronger visuals or make the post easier to skim/navigate, especially on mobile.
Depth. Go into more detail than the original post—turn a brief paragraph into a full section, substantiate claims with expert opinions or stats, etc.
The examples below focus on two methods: length and depth.
This QA Symphony article is a great example of how to generate backlinks with a longer piece. Totaling over 14,000 words, it collates a list of more than 100 software testing tools and accounts for 14% of all referring domains to their site.
Why did it succeed?
Clear, valuable comprehensiveness. This article is by far the most comprehensive list on the first three pages of search results for “best software testing tools.” It covers 100+ tools—more than twice as many as the next-longest list. Longer doesn’t always mean better, but it works for this topic.
What could they have done better?
Keep it updated. This article occupies the third organic spot for “best software testing tools.” Since its 2016 publication, it’s been beaten out by two newer (or at least more recently updated) articles—both of which reference 2019 in their page titles. Updating the title and adding a few new tools could push it to the top of the rankings and generate even more links.
This article from Terminus was published in November 2016 and has generated 247 backlinks from 79 referring domains. Even though it’s a few years old, it still ranks on the first page of Google for “what is abm.”
Why did it succeed?
Subheadings with related keywords. The subheadings in this article all contain other relevant keywords, helping this piece rank for lots of ABM-related searches, as well as making it easier for readers to scan and navigate. That pleases users and earns more “passive” links—citations that accumulate gradually from top rankings, not outreach.
More in-depth than similar articles. I looked at other articles ranking for the same search term that were published between 2014 and 2016. For example, this article by Salesforce is shorter than the Terminus article (762 words vs. 1,213) and focuses almost entirely on what you do in account-based marketing, rather than how and why you’d do it.
What could they have done better?
Keep it updated. One of the newer articles out-ranking the Terminus post is this article by HubSpot. Published in 2019, it also goes more in-depth, covering the same ground while adding a closing section that includes steps to launch an ABM campaign.
Takeaway: The skyscraper technique works—if you have a plan.
If you’re looking to “steal” links from existing content, it’s not enough simply to create a post that’s longer/newer/better. To generate links for your new content, you need to invest time and effort in the outreach. To quote Ahrefs:
The key to successful execution of the Skyscraper Technique is email outreach. But instead of spamming every blogger you know, you reach out to those who have already linked to the specific content you improved upon. The idea is this: since they’ve already linked to a similar article, they are more likely to link to one that is better.
Something else to keep in mind: The initial bump you might get from outdoing others will only continue if you keep your content updated.
It’s easy to say that heat maps help you see what users are doing on your site. Sure, of course—but lots of other methods do that too, and perhaps with greater accuracy.
So what can heat maps answer?
What is a heat map?
Heat maps are visual representations of data. They were developed by Cormac Kinney in the mid-1990s to help traders beat financial markets. In our context, they record and quantify what people do with their mouse or trackpad, then display it in a visually appealing way.
“Heat maps” are actually a broad category that may include:
Hover maps (mouse-movement tracking);
Click maps;
Attention maps;
Scroll maps.
To make accurate inferences for any of the above heat-map types, you should have enough of a sample size per page/screen before you act on results. A good rule of thumb is 2,000–3,000 pageviews per design screen, and also per device (i.e. look at mobile and desktop separately). If the heat map is based on, say, 50 users, don’t trust the data.
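The rule of thumb above can be encoded as a quick sanity check before you act on a heat map. A minimal sketch; the 2,000-pageview threshold is this article's guideline, not a statistical law, and the traffic numbers are hypothetical:

```python
def heatmap_sample_ok(pageviews_by_device, threshold=2000):
    """Return which device segments have enough pageviews to trust a heat map.

    pageviews_by_device: dict like {"desktop": 3500, "mobile": 900}
    threshold: minimum pageviews per design screen, per device (rule of thumb).
    """
    return {device: count >= threshold
            for device, count in pageviews_by_device.items()}

# Example: desktop traffic clears the bar, mobile does not yet.
print(heatmap_sample_ok({"desktop": 3500, "mobile": 900}))
# {'desktop': True, 'mobile': False}
```

Segments that come back `False` should keep collecting data before you draw conclusions.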
Since there are a few different types of heat maps, let’s go over each and the value they offer.
1. Hover maps (Mouse-movement tracking)
When people say “heat map,” they often mean hover map. Hover maps show you areas where people have hovered over a page with their mouse cursor. The idea is that people look where they hover, and thus it shows how users read a web page.
Hover maps are modeled off a classic usability testing technique: eye tracking. While eye tracking is useful to understand how a user navigates a site, mouse tracking tends to fall short because of some stretched inferences.
The accuracy of mouse-cursor tracking is questionable. People might be looking at stuff that they don’t hover over. They may also hover over things that get very little attention—therefore, the heat map would be inaccurate. Maybe it’s accurate, maybe it’s not. How do you know? You don’t.
Only 6% of people showed some vertical correlation between mouse movement and eye tracking.
19% of people showed some horizontal correlation between mouse movement and eye tracking.
10% hovered over a link and then continued to read around the page looking at other things.
We typically ignore these types of heat maps. Even if you do look at one to see if it supports your suspicions, don’t put too much stock in it. Guy Redwood at Simple Usability is similarly skeptical about mouse tracking:
We’ve been running eye tracking studies for over 5 years now and can honestly say, from a user experience research perspective, there is no useful correlation between eye movements and mouse movements – apart from the obvious looking at where you are about to click.
If there was a correlation, we could immediately stop spending money on eye tracking equipment and just use our mouse tracking data from websites and usability sessions.
That’s why Peep calls these maps “a poor man’s eye-tracking tool.”
Without much overlap between what these maps show and what users do, it’s tough to infer any actual insights. You end up telling more stories to explain the images than actual truths. This blog post criticizing heat maps for soccer players’ movements puts it well:
“What do heat maps do? They give a vague impression of where a player went during the match. Well, I can get a vague impression of where a player went during a match by watching the game over the top of a newspaper.”
While some studies indicate higher correlations between gaze and cursor position, ask yourself if the possible insights are worth the risk of misleading data or encouraging confirmation bias in the analysis.
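If you ever get paired gaze and cursor data from a lab study, you can check the correlation yourself before trusting hover maps. A minimal sketch with hypothetical samples:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired samples (pixels): where the eye looked vs. where the
# cursor sat at the same timestamps. The cursor here is mostly parked.
gaze_x   = [120, 340, 560, 300, 880, 410, 150, 700]
cursor_x = [400, 380, 390, 410, 420, 400, 395, 405]

r = pearson(gaze_x, cursor_x)
print(f"gaze/cursor correlation: {r:.2f}")  # weak, despite a plausible-looking map
```

A weak correlation like this is exactly the scenario where a hover map would paint a confident picture of attention that the eyes never actually paid.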
What about algorithm-generated heat maps?
Similarly, there are heat map tools that use an algorithm to analyze your user interface and generate a resulting visual. They take into account a variety of attributes: colors, contrast, visual hierarchy, size, etc. Are they trustworthy? Maybe. Here’s how Aura.org put it:
Visual Attention algorithms, where computer software “calculates” the visibility of the different elements within the image, are often sold as a cheaper alternative. But the same study by PRS showed that the algorithms are not sensitive enough to detect differences between designs, and are particularly poor at predicting the visibility levels of on-pack claims and messaging.
(Note: PRS, which conducted the study cited above, sells eye-tracking research services.)
While you shouldn’t fully place your trust in algorithmically generated maps, they’re not any less trustworthy than hover maps.
And, if you have lower traffic, algorithmic tools can give you some visual data for usability research, including instant results, which is cool. Some tools to check out:
2. Click maps
There’s a lot of communicative value in these maps. They help demonstrate the importance of optimization (especially to non-optimizers) and what is and isn’t working.
Does a big photo get lots of clicks but isn’t a link? You have two options:
Make it into a link.
Don’t make it look like a link.
It’s also easy to take in aggregate click data quickly and see broad trends. Just be careful to avoid convenient storytelling.
However, you can also see where people click in Google Analytics, which is generally preferable. If you’ve set up enhanced link attribution, the Google Analytics overlay is great. (Some people still prefer to see a visual click map).
And, if you go to Behavior > Site Content > All pages, and click on a URL, you can open up the Navigation Summary for any URL—where people came from and where they went afterward. Highly useful stuff.
3. Attention maps
An attention map is a heat map that shows which areas of the page are viewed the most within the user’s browser, taking into account horizontal and vertical scrolling and how long users spend on the page.
Peep considers attention maps more useful than other mouse-movement or click-based heat maps. Why? Because you can see if key pieces of information—text and visuals—are visible to almost all users. That makes it easier to design pages with the user in mind.
Here’s how Peep put it:
“What makes this useful is that it takes account different screen sizes and resolutions, and shows which part of the page has been viewed the most within the user’s browser. Understanding attention can help you assess the effectiveness of the page design, especially above the fold area.”
4. Scroll maps
Scroll maps are heat maps that show how far people scroll down on a page. They can show you where users tend to drop off.
While scroll maps work for any length of page, they’re especially pertinent when designing long-form sales pages or longer landing pages.
Generally, the longer the page, the fewer people will make it all the way to the bottom. This is normal and helps you prioritize content: What’s must have? What’s just nice to have? Prioritize what you want people to pay attention to and put it higher.
Scroll maps can also help you tweak your design. If the scroll map shows abrupt color changes, people may not perceive a connection between two elements of your page (“logical ends”). These sharp drop-off points are hard to see in Google Analytics.
On longer landing pages, you might need to add navigation cues (e.g. a downward arrow) where the scrolling stops.
Bonus: User session replays
Session replays aren’t a type of heat map per se, but they are one of the most valuable bits that heat mapping tools offer.
User session replays allow you to record video sessions of people going through your site. It’s like user testing but without a script or audio. Also unlike user testing (in a positive way), people are risking actual money, so the sessions can be more insightful.
Unlike heat maps, this is qualitative data. You’re trying to detect bottlenecks and usability issues. Where are people not able to complete actions? Where do they give up?
One of the best use cases for session replays is watching how people fill out forms. Though you could configure Event tracking for Google Analytics, it wouldn’t provide the same level of insight as user session replays.
Also, if you have a page that’s performing badly and you don’t know why, user session replays may identify problems. You can also see how fast users read, scroll, etc.
Analyzing them is, of course, time-consuming. We spend half a day watching videos for a new client site. And after looking at hundreds (thousands?) of heat maps and reviewing other studies, we’ve identified some recurring takeaways from heat maps of all kinds.
19 things we’ve learned from heat-map tests
We’ve looked at a lot of heat maps over the years. So have other researchers. And while every site is different (our perpetual caveat), there are some general takeaways.
You should test the validity of these learnings on your site, but, at the very least, these generalized “truths” should give you an idea of what you can expect to learn from a heat map.
1. The content that’s most important to your visitors’ goals should be at the top of the page.
People do scroll, but their attention span is short. This study found that a visitor’s viewing time of the page decreases sharply when they go below the fold. User viewing time was distributed as follows:
Above the fold: 80.3%;
Below the fold: 19.7%.
The material that’s most important to your business goals should be above the fold.
In the same study, viewing time increased significantly at the very bottom of the webpage, which means that a visitor’s attention goes up again at the bottom of the page. Inserting a good call to action there can drive up conversions.
You should also remember the recency effect: the last thing a person sees stays in their mind longer. Craft the end of your pages carefully.
2. Visual impact influences choices at rapid decision speeds.
A Caltech neuroscience study showed that at “rapid decision speeds” (when in a rush or when distracted), visual impact influences choices more than consumer preferences do.
When visitors are in a hurry, they’ll think less about their preferences and make choices based on what they notice most. This bias gets stronger the more distracted a person is and is particularly strong when a person doesn’t have a strong preference to begin with.
If the visual impact of a product can override consumer preferences—especially in a time-sensitive and distracting environment like online shopping—then strategic changes to a website’s design can seriously shift visitor attention.
3. People spend more time looking at the left side of your page.
Several studies have found that the left side of the website gets a bigger part of your visitors’ attention. The left side is also looked at first. There are always exceptions, but keeping the left side in mind first is a good starting point. Display your most important information there, like your value proposition.
This study found that the left side of the website received 69% of the viewing time; people spent more than twice as much time looking at the left side of the page as the right.
4. People read your content in an F-shaped pattern.
This study found that people tend to read text content in an F-shaped pattern. What does that mean? It means that people skim, and that their main attention goes to the start of the text. They read the most important headlines and subheadlines, but read the rest of the text selectively.
Your first two paragraphs need to state the most important information. Use subheadings, bullet points, and paragraphs to make the rest of your content more readable.
Note that the F-pattern style does not hold true when browsing a picture-based web page, as is evident in this study. People tend to browse image-based web pages horizontally.
5. People ignore things that look like ads (banner blindness).
Banner blindness happens when your visitor subconsciously or consciously ignores a part of your webpage because it looks like advertising. Visitors almost never pay attention to anything that looks like an advertisement.
This study found no fixations within advertisements. If people need to get information fast, they’ll ignore advertising—and vice versa. If they’re completely focused on a story, they won’t look away from the content.
There are several ways to avoid creating banner blindness on your website. Most problems can be prevented by using a web design company that’s experienced in online marketing.
6. When using an image of a person, it matters where they look.
It makes sense to use people in your design—it’s a design element that attracts attention. But it also matters where their eyes are looking.
Several heat map studies have shown that people follow the direction of a model’s eyes. If you need to get people to focus not only on the beautiful woman but the content next to her, make sure she’s looking at that content.
It’s also important to convey emotion. Having a person convey emotion can have a big impact on conversion rates. This study found that a person conveying emotion can have a larger impact on conversions than a calm person looking at the call to action.
Your best option may be to combine these two approaches—use an emotion-conveying person who’s also looking at the desired spot on the page.
7. Men are visual; women seek information.
When asked to view profiles of people on a dating site, this study found a clear difference between men and women. Men were more visual when looking at a profile of a person, focusing on the images; women tended to read more of the info provided.
In another study, men spent 37% more time looking at the woman’s chest than women did, whereas women spent 27% more time looking at the ring finger. The study concluded that “men are pervs, women are gold-diggers.”
8. Abandon automatic image carousels and banners for better click-through rates.
This study concluded that, on two sites where users had a specific task on their mind, the main banners were completely ignored, including the animated version. Automatic image carousels and banners are generally not a good idea. They generate banner blindness and waste a lot of space.
The same study found an exception to this rule on one of the sites: a banner on ASOS’s homepage captured the attention of participants better than banners on the other sites. How was it different? It looked less like a banner and was better integrated into the page.
9. Color contrast attracts attention.
After testing a landing page with heat maps, TechWyse found out just how important color contrast is. A non-clickable, informational element about pricing on the homepage won the most attention because of its color contrast with the surrounding area.
After a slight redesign, the scanning patterns of visitors aligned with what the company needed.
10. 60-year-olds make twice as many mistakes as 20-year-olds.
When your target audience is elderly, make your website as easy to use and clutter-free as possible. In a remote user test of 257 respondents, the task failure rate for 60-year-olds was 1.9 times greater than for 20-year-olds.
Email is one of the few marketing channels that spans the full funnel. You use email to raise awareness pre-conversion. To stay connected with content subscribers. To nurture leads to customers. To encourage repeat purchases or combat churn. To upsell existing customers.
Getting the right email to the right person at the right time throughout the funnel is a massive undertaking that requires a lot of optimization and testing. Yet, even some mature email marketing programs remain fixated on questions like, “How can we increase the open rate?” Moar opens! Moar clicks!
What about the massive bottom-line impact email testing can have at every stage of the funnel? How do you create an email testing strategy for that? It starts by understanding where email testing is today.
The current state of email testing
According to the DMA, 99% of consumers check their email every single day. (Shocking, I know.)
In 2014, there were roughly 4.1 billion active email accounts worldwide. That number is expected to increase to nearly 5.6 billion before 2020. In 2019, email advertising spending is forecasted to reach $350 million in the United States alone.
0.29% is the average email unsubscribe rate in the architecture and construction industry.
1.98% is the average email click rate in the computers and electronics industry.
0.07% is the average hard bounce rate in the daily deals and e-coupons industry.
20.7% is the average open rate for a company with 26–50 employees.
But why are these statistics the ones we collect? Why do blog posts and email marketing tools continue to prioritize surface-level testing, like subject lines (i.e. open rate) and button copy (i.e. click rate)?
Email testing tools offer testing of basic elements that, quite often, fails to connect to larger business goals.
Why email testing often falls flat
Those data points from AWeber and Mailchimp are perhaps interesting, but they have no real business value.
Knowing that the average click rate in the computers and electronics industry is 1.98% is not going to help you optimize your email marketing strategy, even if you’re in that industry.
Similarly, knowing that 434 is the average number of words in an email is not going to help you optimize your copy. That number is based on only 1,000 emails from 100 marketers. And, of course, there’s no causal link. Who’s to say length impacts the success of the emails studied?
For the sake of argument, though, let’s say reading that 60% of email marketers use sentence case in their subject lines inspired you to run a sentence case vs. title case subject-line test.
Congrats! Sentence case did in fact increase your open rate. But why? And what will you do with this information? And what does an open rate bump mean for your click rate, product milestone completion rates, on-site conversion rates, revenue, etc.?
A test is a test is a test. Regardless of whether it’s a landing page test, an in-product test, or an email test, it requires time and resources. Tests are expensive—literally and figuratively—to design, build, and run.
Focusing on top-of-funnel and engagement metrics (instead of performance metrics) is a costly mistake. Open rate to revenue is a mighty long causal chain.
If you’re struggling to connect email testing and optimization to performance marketing goals, it’s a sign that something is broken. Fortunately, there’s a step-by-step process you can follow to realign your email marketing with your conversion rate optimization goals.
The step-by-step process to testing email journeys
Whenever you’re auditing data, ask yourself two questions:
Am I collecting all of the data I need to make informed decisions?
Can I trust the data I’m seeing?
To answer the first question, have your optimization and email teams brainstorm a list of questions they have about email performance. After all, email testing should be a collaboration between those two teams, whether an experimentation team is enabling the email team or a conversion rate optimization team is fueling the test pipeline.
Can your data, in its current state, answer questions from both sides? (Don’t have a dedicated experimentation or conversion rate optimization team? Email marketers can learn how to run tests, too.)
With email specifically, it’s important to have post-click tracking. How do recipients behave on-site or in-product after engaging with each email? Post-click tracking methods vary based on your data structure, but there are five parameters you can add to the URLs in your emails to collect data in Google Analytics:
UTM parameters connect in-email behavior to on-site behavior. (Image source)
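As an illustration, here's one way to tag an email link with the standard Google Analytics UTM parameters; the source, campaign, and content values below are hypothetical:

```python
from urllib.parse import urlencode

def tag_email_link(base_url, campaign, content, term=""):
    """Append standard Google Analytics UTM parameters to a URL in an email."""
    params = {
        "utm_source": "newsletter",   # where the traffic comes from
        "utm_medium": "email",        # the channel
        "utm_campaign": campaign,     # which send or journey this is
        "utm_content": content,       # which link/CTA within the email
    }
    if term:
        params["utm_term"] = term     # optional: keyword or segment identifier
    return f"{base_url}?{urlencode(params)}"

url = tag_email_link("https://example.com/pricing",
                     campaign="onboarding-day-3",
                     content="primary-cta")
print(url)
```

Tagging every CTA with a distinct `utm_content` value is what lets you tie a specific link in a specific email to the on-site behavior that followed.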
The second issue—data integrity—is more complex and beyond the scope of this post. Thankfully, we have another post that dives deep into that topic.
Once you’re confident that you have the data you need and that the data is accurate, you can get started.
1. Mapping the current state
To move away from open rate and click rate as core metrics is to move toward journey-specific metrics, like:
Gross customer adds;
By focusing on the customer journey instead of an individual email, you can make more meaningful optimizations and run more impactful tests.
The goal at this stage is to document and visualize as much as you can about the current state of the email journey in question. Note any gaps in your data as well. What do you not know that you wish you did know? You can use a tool like Whimsical to map the journey visually.
An example from Whimsical of how to map a user flow. While their example maps on-site behavior, a similar diagram works for email, too. (Image source)
There are a ton of different asks within an email like AWS’s onboarding message. Tutorials, a resource center, three different product options, training and certification, a partner network—the list goes on.
Your current state map should show how recipients engage with each of those CTAs, where each CTA leads, how recipients behave on-site or in-product, etc. Does the next email in the sequence change if a recipient chooses “Launch a Virtual Machine” instead of “Host a Static Website” or “Start a Development Project,” for example?
Your current state map will help answer questions like:
How does the email creative and copy differ between segments?
Who receives each email and how is that decision made?
Which actions are recipients being asked to take?
Which actions do they take most often?
Which actions yield the highest business value?
How frequently are they asked to take each action and how quickly do they take it on average?
What other emails are these recipients likely receiving?
What on-site and in-product destinations are email recipients being funneled to?
What gaps exist between email messaging and the on-site or in-product messaging?
Where are the on-site holes in the funnel?
Can post-email, on-site, or in-product behavior tell us anything about our email strategy?
2. Mapping the ideal state
Once you know what’s true now, it’s time to find optimization opportunities, whether that’s an obvious fix (e.g. an email isn’t displaying properly on the iPhone 6) or a test idea (e.g. Would reducing the number of CTAs in the AWS email improve product milestone completion rates?).
There are two methods to find those optimization opportunities:
Quantitatively. Where are recipients falling out of the funnel, and which conversion paths are resulting in the highest customer lifetime value (CLTV)?
Qualitatively. Who are the recipients? What motivates them? What are their pain points? How do they perceive the value you provide? What objections and hesitations do they present?
The first method is fairly straightforward. Your current state map should present you with all of the data you need to identify holes and high-value conversion paths.
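A minimal sketch of that quantitative pass: given counts at each step of one email journey (the numbers below are hypothetical), compute step-to-step conversion and flag the biggest drop-off:

```python
# Hypothetical counts at each stage of a single email journey.
funnel = [
    ("delivered", 10000),
    ("opened", 2100),
    ("clicked", 420),
    ("signed_up", 90),
    ("activated", 36),
]

# Step-to-step conversion rates reveal where recipients fall out.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.1%}")

# The step with the lowest relative conversion is the one to investigate first.
worst = min(zip(funnel, funnel[1:]),
            key=lambda pair: pair[1][1] / pair[0][1])
print("biggest drop-off after:", worst[0][0])
```

In this made-up journey, open-to-click is the weakest step, so that hand-off (subject line promise vs. email body, CTA clarity) is where testing effort would go first.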
Combined, these two methods will give you a clear idea of your ideal state of the email journey. As best you can, map that out visually as well.
How does your current state map compare to your ideal state map? They should be very different. It’s up to you to identify and sort those differences:
What did you learn during this entire journey mapping process that other marketers and teams will find useful?
What needs to be fixed or implemented right away? This is a no-brainer that doesn’t require testing.
What needs to be tested before implementation? This could be in the form of a full hypothesis or simply a question.
What gaps exist in your measurement strategy? What’s not being tracked?
3. Designing, analyzing, and iterating
Now it’s time to design the tests, analyze the results, and iterate based on said results. Luckily, you’re reading this on the CXL blog, so there’s no shortage of in-depth resources to help you do just that:
Common pitfalls in email testing—and how to avoid them
1. Testing the email vs. the journey
It’s easier to test the email than the journey. There’s less research required. The test is easier to implement. The analysis is more straightforward—especially when you consider that there’s no universal customer journey.
Sure, there’s the nice, neat funnel you wax poetic about during stakeholder meetings: session to subscriber, subscriber to lead, lead to customer; session to add to cart, cart to checkout, checkout to repeat purchase. But we know that linear, one-size-fits-all funnels are a simplified reality.
When presented with the choice of running a simple subject line A/B test in your email marketing tool or optimizing potentially thousands of personalized customer journeys, it’s unsurprising many marketers opt for the former.
But remember that email is just a channel. It’s easy to get sucked into optimizing for channel-level metrics and successes, to lose sight of what that channel’s role is in the overall customer journey.
Now, let’s say top-of-funnel engagement metrics are the only email metrics you can accurately measure (right now). You certainly wouldn’t be alone in that struggle. As marketing technology stacks expand, data becomes siloed, and it can be difficult to measure the end-to-end customer journey.
Is email testing still worth it, in that case?
It’s a question you have to ask yourself (and your data). Is there an inherent disadvantage to improving your open rate or click rate? No, of course not (unless you’re using dark patterns to game the metrics).
The question is: is the advantage big enough? Unless you have an excess of resources or are running out of conversion points to optimize (highly unlikely), your time will almost certainly be better spent elsewhere.
2. Optimizing for the wrong metrics
Optimization is only as useful as the metric you choose. Read that again.
All of the research and experimentation in the world won’t help you if you focus on the wrong metrics. That’s why it’s so important to go beyond boosting your open rate or click rate, for example.
It’s not that those metrics are worthless and won’t impact the bigger picture at all. It’s that they won’t impact the bigger picture enough to make the time and effort you invest worth it. (The exception being select large, mature programs.)
Val Geisler of Fix My Churn elaborates on how top-of-funnel email metrics are problematic:
Most people look at open rates, but those are notoriously inaccurate with image display settings and programs like Unroll.me affecting those numbers. So I always look at the goal of the individual email.
Is it to get them to watch a video? Great. Let’s make sure that video is hosted somewhere we can track views once the click happens. Is it to complete a task in the app? I want to set up action tracking in-app to see if that happens.
It’s one thing to get an email opened and even to see a click through, but the clicks only matter if the end goal was met.
You get the point. So, what’s a better way to approach email marketing metrics and optimization? By defining your overall evaluation criterion (OEC).
To start, ask yourself three questions:
What is the tangible business goal I’m trying to achieve with this email journey?
What is the most effective, accurate way to measure progress toward that goal?
What other metric will act as a “check and balance” for the metric from question two? (For example, a focus on gross customer adds without an understanding of net customer adds could lead to metric gaming and irresponsible optimization.)
The question is: What OEC should be used for these programs? The initial OEC, or “fitness function,” as it was called at Amazon, gave credit to a program based on the revenue it generated from users clicking through the e-mail.
There is a fundamental problem here: the metric is easy to game, as the metric is monotonically increasing: spam users more, and at least some will click through, so overall revenue will increase. This is likely true even if the revenue from the treatment of users who receive the e-mail is compared to a control group that doesn’t receive the e-mail.
Eventually, a focus on CLTV prevailed:
The key insight is that the click-through revenue OEC is optimizing for short-term revenue instead of customer lifetime value. Users that are annoyed will unsubscribe, and Amazon then loses the opportunity to target them in the future. A simple model was used to construct a lower bound on the lifetime opportunity loss when a user unsubscribes. The OEC was thus:
OEC = (Σᵢ Revenueᵢ − Σⱼ Revenueⱼ − s × unsubscribe_lifetime_loss) / n
Where 𝑖 ranges over e-mail recipients in Treatment, 𝑗 ranges over e-mail recipients in Control, and 𝑠 is the number of incremental unsubscribes, i.e., unsubscribes in Treatment minus Control (one could debate whether it should have a floor of zero, or whether it’s possible that the Treatment actually reduced unsubscribes), and unsubscribe_lifetime_loss was the estimated loss of not being able to e-mail a person for “life.”
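To make those definitions concrete, here is a sketch of that OEC as code: a reading of the description above, not Amazon's exact implementation, and all numbers are hypothetical:

```python
def email_oec(treatment_rev, control_rev,
              treatment_unsubs, control_unsubs,
              unsubscribe_lifetime_loss):
    """Sketch of the revenue-minus-unsubscribe-cost OEC described above.

    treatment_rev / control_rev: per-recipient click-through revenue lists.
    s = incremental unsubscribes (Treatment minus Control, floored at 0 here,
    though the original text notes that floor is debatable).
    """
    s = max(treatment_unsubs - control_unsubs, 0)
    n = len(treatment_rev)
    return (sum(treatment_rev) - sum(control_rev)
            - s * unsubscribe_lifetime_loss) / n

# Hypothetical: the email lifts short-term revenue but drives extra
# unsubscribes, so the OEC turns negative once lifetime loss is priced in.
oec = email_oec(
    treatment_rev=[2.0] * 1000,      # $2.00 average from 1,000 treated recipients
    control_rev=[1.8] * 1000,        # $1.80 average in control
    treatment_unsubs=25, control_unsubs=5,
    unsubscribe_lifetime_loss=15.0,  # assumed $ value of a lost subscriber
)
print(f"OEC per recipient: ${oec:.2f}")  # negative: the program destroys long-term value
```

This is exactly the mechanism the passage describes: a program can look revenue-positive on clicks alone and still score negative once unsubscribes are counted.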
Using the new OEC, Ronny Kohavi and his team discovered that more than 50% of their email marketing programs were negative. All of the open-rate and click-rate experiments in the world wouldn’t have addressed the root issue in this case.
Instead, they experimented with a new unsubscribe page, which defaulted to unsubscribing recipients from a specific email program vs. all email communication, drastically reducing the cost of an unsubscribe.
Amazon learned that creating multiple lists (rather than a single “unsubscribe”) was key to increasing CLTV.
3. Skimping on rigor
Email marketing tools make it easy to think you’re running a proper test when you’re not.
Email tests require the same amount of rigor and scientific integrity as any other test, if not more. Why? Because email as a channel has many little-known nuances that don’t exist, for example, in on-site testing.
Too many people jump to make changes too soon. Email should be tested for a while (every case varies, of course), and no other changes should be made during that test period.
People tell me they changed their pricing model, took away the free trial, or made some other huge change in the midst of testing email campaigns. Well, that changes everything! Test email by itself to know whether it works before changing anything else.
Has your company’s customer retention rate increased, decreased, or maintained the status quo over the past five years?
Are you actively working on retention? Have you outlined and initiated a formal customer retention strategy?
A study by Harvard Business School found that increasing customer retention by even 5% can increase profits by 25–95%. And yet, the 2019 CMO Survey found that nearly half of CMOs don’t expect to improve retention this year.
Compare that to more than two-thirds of CMOs who expect to increase customer acquisition, purchase volume, and cross-selling effectiveness:
That’s too bad. Because a Manta report found that 61% of small businesses surveyed indicated that more than half of their revenue came from repeat customers. Furthermore, the study found that repeat customers spend 67% more than new customers.
I can safely say that about three-quarters of my clients did not have a formal strategy to retain and cultivate their current customers prior to hiring us.
Many also felt that they understood their industry, target markets, and trends. However, when we conducted our research, we found plenty of areas that had evolved or changed.
Opinions don’t get people back. Understanding the data about what they need to return does. To develop your customer retention strategy, follow this four-phase process:
Research your customers to find out what they need most.
Develop the product, site, and offers based on existing customer feedback.
Evaluate whether a loyalty or rewards program will drive repeat business.
Make your retention strategy personal.
1. Research your customers to find out what they need most.
A HubSpot survey found that companies that put data at the core of their marketing/sales decisions improved marketing ROI by 15–20%.
The same article noted that companies spending 30% more time analyzing marketing performance data earned 3X higher open rates and 2X higher click-through rates for email. (Email marketing should be about far more than opens and clicks, though.)
So what kind of marketing data should you analyze if you want to improve customer retention?
UX data. If the shopping experience is full of friction, why would anyone return?
Groove’s open-ended exit surveys, for example, revealed:
Specific issues active customers weren’t telling them about;
Hangups in the user experience;
Workflow inefficiencies for use cases they hadn’t considered.
What’s more, an A/B test of the exit-survey message—changing “Why did you cancel?” to “What made you cancel?”—produced a nearly 19% response rate.
“Since we’ve started doing open-ended exit surveys eight months ago, we’ve been able to make a lot of positive changes and fixes to Groove. Retention, along with many of our usage metrics, have improved as a result of some of these changes.
We’ve even started testing recovery campaigns for former customers whose issues we’ve fixed; I’ll write about that in a future post, but the early results are very promising.”
Canceling customers, of course, are far from the only group that will provide insight.
2. Develop the product, site, and offers based on existing customer feedback.
Your existing customer base will tell you a lot about what they need and want to keep coming back.
For instance, HubSpot Ideas is a forum for feature requests—users can submit and upvote ideas, helping HubSpot understand which development projects may have the highest existing demand.
Without these methods of collecting feedback, future improvements would be mostly guesswork, severely reducing the chances of solving the critical problems that keep users engaged or coming back (not to mention that forums like HubSpot’s yield qualitative insights for free).
It’s not just SaaS sites that can take advantage of customer feedback forums and in-line customer support either.
Case study: How Terminix used customer feedback to recover $20 million in lost revenue
Terminix is the world’s largest pest control company, with more than 2.8 million customers spread across 47 U.S. states and 11 countries.
Over the years, their acquisition campaigns have succeeded by combining humor with a serious tone that lets you know that they take pest control seriously.
But as successful as their acquisition strategies were, too many customers cancelled—they were losing a third of their clients (or approximately $60 million) annually.
They hired Chief Outsiders to analyze data from exiting customers along with other customer satisfaction information. They discovered issues stemming from three areas:
In response, the company initiated a new training program for employees that focused on retaining customers and overcoming easy objections. They also incorporated a satisfaction survey program to gain a fresh perspective on new customers’ needs and desires.
This also led to a change in Terminix’s product offering—offering quarterly and annual programs instead of their previous “monthly only” service.
The result? Customer turnover dropped by one third, which translated into approximately $20 million recovered in annual revenue.
3. Evaluate whether a loyalty or rewards program will drive repeat business.
Beyond delivering a better customer experience based on feedback, what are other ways to increase customer retention?
A loyalty program might seem like a no-brainer, and, increasingly, companies are adopting them—some 86% of small businesses had them in 2018, up from just 66% a few years ago.
For many, the benefits of a loyalty program might include:
But it’s not as simple as tacking on a loyalty program and expecting customers to start “living” at your store. (Indeed, according to the 2018 study above, only one in four companies with a loyalty program enrolls at least half their customers.)
For industries with thin profit margins, offering an incentive like 2% off isn’t very enticing, and in many verticals such an offer might require a significant lift in sales to break even.
The major issue with many loyalty and rewards programs is that there’s no real differentiation—nothing there to make the customer feel special. As a result, it’s easy to take it or leave it.
Perhaps that’s why Amazon Prime has been so successful.
The benefits to members continue to increase (the image above is ample evidence), and buyers have rewarded Amazon with their loyalty. The gap in spending between a Prime and non-Prime member is remarkable:
Starbucks might also be onto something with Starbucks Rewards. By using their loyalty card to make purchases on their website or at a store, you earn “Star Points.”
The more points you earn, the better perks you get:
From Starbucks’ perspective, I imagine this is a pretty significant win because many of the rewards are offered only after the company has made a good profit off the customer.
Further, they get tons of data about what you buy and when you buy it. The “custom” offers you get can easily be personalized prompts to encourage you to go back when, algorithmically, your loyalty appears to waver.
If you’re looking to start a loyalty program, run the numbers, in granular detail, before anything else. Then, when you roll out the program, start by targeting your most frequent buyers. Listen to them and develop the program based on their feedback.
60% of all customers stop dealing with a company because of what they perceive as indifference on the part of salespeople.
70% of customers leave a company because of poor service.
80% of defecting customers describe themselves as “satisfied” or “very satisfied” just before they leave. (Surveys alone won’t tell you everything.)
In your retention efforts, focus your communications with existing customers around how they would like to be viewed.
Greg Ciotti (again!) talks about the concept of implicit egoism. As it relates to consumerism, it’s the idea that brand choices are tied to personal identity.
Purchasing a luxury vehicle like a Mercedes-Benz, for example, is a status symbol and makes the customer feel more elite.
Their customers are so elite that if they want to get into a new Mercedes when their lease is expiring, the company will simply waive four payments, and the customer will get the new car right away.
Urban Outfitters, on the other hand, uses typography and design to communicate a real hipster vibe.
Its newsletter doesn’t just deliver sales and promotions but also videos and music from obscure bands.
That level of emotional design aims to build a sense of community, belonging, and attachment to the brand. The emotional connection, in turn, makes it easier and more enjoyable to buy from those stores instead of competitors.
Every company dreams about creating high-performing teams. For us at OWOX, that dream centered on our analytics department, which included 12 specialists—junior analysts, mid-level analysts, senior analysts, and QA specialists.
Collectively, our analysts were responsible for consulting clients on data analysis and improving our marketing analytics tool. While our company focuses on analytics, our challenge was not unique—many in-house marketing departments and agencies struggle to measure and improve the efficiency of their teams.
In theory, our analytics department would work seamlessly to make the whole business profitable. In real life, it often struggled for constant improvement—new people, new business goals, and new technologies disrupted steady progress.
How could we get our analytics team to spend more time doing the things they were best at, as well as those that were most valuable to the company? Here’s the start-to-finish process we implemented to benchmark team performance and then use that data to increase efficiency.
Our baseline: Mapping the ideal analyst workload
What is the most effective mix of responsibilities for each analyst position under perfect conditions? Our first step was to diagram the ideal division of work for our analytics team:
So, for example, in our company, we expected senior analysts to spend:
45% of their time on tasks from clients;
30% of their time on management and coaching;
10% of their time on tech and business education;
10% of their time on process development;
5% of their time on internal tasks.
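To quantify how far reality drifts from an ideal mix like this, a per-category delta in percentage points is enough. A minimal sketch, using the ideal senior-analyst mix from the text and hypothetical “actual” percentages:

```python
# Ideal senior-analyst mix from the text vs. a hypothetical logged reality.
ideal = {"client tasks": 45, "management & coaching": 30,
         "education": 10, "process development": 10, "internal tasks": 5}
actual = {"client tasks": 68, "management & coaching": 5,
          "education": 12, "process development": 10, "internal tasks": 5}

def workload_gap(ideal, actual):
    """Percentage-point deviation per task category, largest deviation first."""
    gaps = {k: actual[k] - ideal[k] for k in ideal}
    return sorted(gaps.items(), key=lambda kv: -abs(kv[1]))

for category, delta in workload_gap(ideal, actual):
    print(f"{category}: {delta:+d} pp")
```

Sorting by absolute deviation surfaces the worst imbalances first—here, coaching time crowded out by client work, the same pattern described later in this post.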
This ideal task distribution, as we later learned, was far from reality. That gap resulted from eight key challenges faced by our team.
8 ways our analytics team was struggling
A dream team can’t be gathered at once; it can only be grown. Analysts in our analytics department expect to grow professionally and be given a lot of challenging tasks.
To deliver on that promise of professional growth, we had to confront eight key problems facing our team:
1. Inefficient task distribution for each position
At some point, everybody gets sucked into a routine and stops asking whether the current way of working is the most efficient one:
Our senior analysts had no time to teach and coach new employees, but they also had no time for managerial tasks because they were overloaded with client work.
Our mid-level analysts didn’t have enough time for R&D and improving their skills.
Our junior analysts were just studying all the time. We hadn’t passed them enough tasks to let them dive into real work.
Each of these realizations became clear after we visualized the gap between expectations and reality (detailed in the next section).
2. No measurement of efficiency for each team member
We all knew that the ideal workload above was just a model. But how far from this model were we? We didn’t know how much time a particular employee spent in meetings, worked on client tasks, or was busy with R&D.
We also didn’t know how efficiently each analyst performed a task compared to the rest of the team.
3. Incorrect task time estimates
We couldn’t estimate precisely the time needed for each task, so we sometimes upset our clients when we needed more time to finish things.
4. Repeating mistakes
Whenever a junior analyst had to solve a complicated task for the first time, they made the same predictable mistakes. Those mistakes, in turn, had to be identified and corrected by their mentor, a senior analyst, before the tasks could enter production.
Even if they didn’t make any mistakes, it took them longer to complete the task than a mid-level or senior analyst.
5. Unintentional negligence
Sometimes, client emails would get lost, and we exceeded the response time promised in our service-level agreement (SLA). (According to our SLA, our first response to a client email has to be within four hours.)
6. Speculative upsells
We knew how much time we spent on each task for the client. But this data wasn’t aligned with the billing information from our CRM and finance team, so our upselling was based only on gut feeling.
Sometimes it worked; sometimes it failed. We wanted to know for sure when we should try to upsell and when we shouldn’t.
7. Generic personal development plans
We had the same personal development plan for every analyst, regardless of strengths and weaknesses. But development plans can’t be universal and effective at the same time.
For our analysts, personalization of development plans was key to faster growth.
8. Lack of knowledge transfer
Our senior analysts were swamped with work and had no time to pass their skills and knowledge to their junior colleagues. The juniors grew slowly and made lots of mistakes, while seniors had nobody to pass tasks and responsibilities to.
It was clear we had plenty of room to improve, so we decided to bring all the necessary data together to measure the efficiency of our analysts. Let’s look through these steps in detail.
How we measured the performance of our analytics team
This process started by defining the problems and questions outlined above. To answer them, we knew that we would need to capture before-and-after metrics. (Top of mind were the words of Peter Drucker: “You can’t manage what you can’t measure.”)
Here are the four steps we took to gather the necessary data and create a real-time dashboard for our analytics team.
1. Identify the sources of the data.
Since most of our questions connected to analyst workloads, we gathered data from the tools they were using:
Google Calendar. This data helped us understand how much time was spent on internal meetings and client calls.
Targetprocess. Data from our task-management system helped us understand the workload and how each of the analysts managed their tasks.
Gmail. Email counts and response statuses gave us information about analysts, projects, and overall correspondence with clients and the internal team. It was significant for monitoring SLA obligations.
2. Pull the necessary data and define its structure.
We gathered all data from those sources into Google BigQuery using Google Apps Script. To translate data into insights, we created a view with the fields we needed.
Here’s a table showing the fields we pulled into the view:
Our key fields were analyst, date, and project name. These fields were necessary to merge all the data together with correct dependencies. Once the data was ready, we could move on to the dashboard.
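The merge on those three key fields can be sketched with pandas. The rows and column names below are illustrative stand-ins for the Calendar, Targetprocess, and Gmail pulls, not the team’s actual schema:

```python
import pandas as pd

# Illustrative rows standing in for the Calendar, Targetprocess, and Gmail pulls.
calendar = pd.DataFrame({
    "analyst": ["Anna", "Anna"], "date": ["2019-05-01", "2019-05-02"],
    "project": ["Acme", "Acme"], "meeting_hours": [1.5, 0.5]})
tasks = pd.DataFrame({
    "analyst": ["Anna", "Anna"], "date": ["2019-05-01", "2019-05-02"],
    "project": ["Acme", "Acme"], "task_hours": [4.0, 6.0]})
emails = pd.DataFrame({
    "analyst": ["Anna"], "date": ["2019-05-01"],
    "project": ["Acme"], "emails_answered_in_sla": [3]})

keys = ["analyst", "date", "project"]  # the three key fields from the text
view = (calendar.merge(tasks, on=keys, how="outer")
                .merge(emails, on=keys, how="outer")
                .fillna(0))
print(view)
```

An outer merge keeps days where only one source has activity (e.g., a day with tasks but no client emails), which matters when comparing workloads.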
3. Prototype the dashboard.
Don’t try to make a dashboard with all the metrics you can imagine. Focus on the essential metrics that will answer your questions—build an MVP, not a behemoth.
Typically, best practices of dashboard prototyping are to:
Define the essential metrics that will answer your questions.
Ensure that KPI calculation logic is extremely transparent and approved by the team.
Prototype on paper (or with the help of prototyping tools) to check the logic.
4. Build the dashboard.
We used Google Data Studio because it’s handy, is a free enterprise-level tool, and integrates easily with other Google products.
In Data Studio, you can find templates designed for specific aims and summaries, and you can filter data by project, analyst, date, and job type. To keep the operational data current, we updated it on a daily basis, at midnight, using Apps Script.
Let’s look closer at some pages of our dashboard.
Department workload page
We visually divided this page into several thematic parts:
Task distribution by role;
Time spent by task type.
With this dashboard, we could see how many projects we had at a given time in our analytics department. We could also see the status of these projects—active, on hold, in progress, etc.
Task distribution by role helped us understand the current workload of analysts at a glance. We could also see the average, maximum, and minimum time for each type of task (education, case studies, metrics, etc.) across the team.
Analyst workload page
This page told us what was happening inside the analytics team—time spent by analyst, by task, and by the whole team:
Time spent on tasks and meetings;
Percentage of emails answered according to the SLA;
Percentage of time spent on each task by a given analyst;
Time that a given analyst spent on tasks compared to the team average.
This was useful to understand how much time tasks usually took and whether a specialist could perform a task more efficiently than a junior-level analyst.
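That comparison against the team average boils down to a simple ratio per analyst. A sketch with hypothetical logged hours for one task type:

```python
from statistics import mean

# Hypothetical hours logged per completed task of one type, by analyst.
team_hours = {"Anna": [3.0, 2.5, 3.5], "Boris": [6.0, 5.5], "Dana": [3.0, 4.0]}

def vs_team_average(analyst, hours_by_analyst):
    """Ratio of one analyst's average task time to the team-wide average.
    > 1.0 means slower than the team; < 1.0 means faster."""
    all_hours = [h for hours in hours_by_analyst.values() for h in hours]
    return mean(hours_by_analyst[analyst]) / mean(all_hours)

print(round(vs_team_average("Boris", team_hours), 2))  # 1.46 (slower than team)
print(round(vs_team_average("Anna", team_hours), 2))   # 0.76 (faster than team)
```

A ratio well above 1.0 is a signal to dig in: the task may be harder than it looks, or the analyst may need coaching on it.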
Project workload page
This page analyzed the efforts of the whole team and individual analysts at the same time. Metrics included:
Tasks across all projects or filtered by project;
Time spent on meetings and tasks;
Share of emails answered according to the SLA;
Statistics for an individual project (with the help of filters);
Average, minimum, and maximum time for each type of task in a project.
It also included the analyst and backup analyst for each project, as well as the number of projects managed by a given analyst:
We can’t show you all of our dashboards and reports because some contain sensitive data. But with this dashboard in place, we:
Realized that the workload of an analyst is far from what we expected and that average values can hide our growth zones.
Proved that most of our analysts (~85%) answered emails on time.
Mapped the typical tasks we ran into, how long they usually take to accomplish, and how the time for each particular task can vary.
Found weaknesses and strengths for each analyst to customize their personal development plan.
Found areas for automation.
The number of dashboards matters less than the changes we made using them; those changes translated our measurements into strategies for team improvement.
Acting on our data to improve the analytics team
Let’s have a closer look at how we used the dashboard to begin to solve some of the problems we mentioned above.
Improving task distribution for each team member
When we compared the real task distribution with the ideal task distribution, we were, shall we say, disappointed. It was far from perfect.
Our senior analysts worked on client tasks 1.5 times more than planned, and our junior analysts were studying almost all the time without practicing their skills.
We started to improve the situation with a long process of task redistribution. And after some time, we saw improvement:
While everything looked better in the dashboard, we still had room to grow.
By aligning everything to average values, we fell into a typical stats trap: treating the average as the real-world scenario. The average is a mathematical abstraction, not a reflection of real life, and when managing people, there’s nothing more blinding than focusing on it.
When we drilled down to a particular role or analyst, the data looked quite different. Here, for example, we have data for Anastasiia, a senior analyst. On the left is the ideal, in the middle is the average, and on the right is her personal division:
The picture changed dramatically between the senior-analyst average and Anastasiia’s reality. The time she spent on client tasks was much higher than it should’ve been, and almost no time was spent coaching new employees.
That could be for multiple reasons:
Anastasiia is overloaded with client tasks. In this case, we need to take some of her tasks and pass them to another analyst.
Anastasiia didn’t fill out the task management system properly. If this is the case, we need to draw her attention to its importance.
Anastasiia might not be a fan of her managerial role. We need to talk and figure it out.
We redistributed some of Anastasiia’s tasks and discussed the bottlenecks that were eating the biggest part of her time. As a result, her workload became more balanced.
If we had only looked at the average stats for the department, we never would’ve solved the problem.
Automation and knowledge transfer to minimize mistakes
We had lots of atypical work in our department. That’s why it was hard to predict how long it would take to complete it (and which mistakes would appear).
We started improving our task estimation process by classifying and clustering tasks using tags in our task management system, such as R&D, Case Study, Metrics, Dashboards, and Free (for tasks we didn’t charge for).
When analysts created a new task, they had to define its type using tags. Tagging helped us measure which jobs we ran into most often and decrease repeated mistakes by automating typical reports.
Below, you can see a dashboard showing the minimum, maximum, and average time spent on different types of jobs, as well as their frequency:
This helped us estimate the time required for typical tasks and became a basis for estimating unusual tasks. An average is a useful estimate for a new client, and outliers helped us understand how much time extra features may take.
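The aggregation behind that dashboard view is a group-by over tagged time records. A sketch with hypothetical tags and hours:

```python
from collections import defaultdict

# Hypothetical (tag, hours) records exported from the task-management system.
records = [("Dashboards", 4.0), ("Dashboards", 6.5), ("Dashboards", 5.0),
           ("R&D", 12.0), ("Metrics", 2.0), ("Metrics", 3.0)]

def task_stats(records):
    """Frequency, min, max, and average hours per task tag."""
    by_tag = defaultdict(list)
    for tag, hours in records:
        by_tag[tag].append(hours)
    return {tag: {"count": len(h), "min": min(h), "max": max(h),
                  "avg": sum(h) / len(h)}
            for tag, h in by_tag.items()}

stats = task_stats(records)
print(stats["Dashboards"])
```

The average serves as the default estimate for a new task of that type; the min–max spread shows how wrong that estimate might be.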
We also looked closely at the most frequent tasks and those that had the maximum time spent. To eliminate mistakes in these tasks, our first step was to write detailed guides on how to perform each task.
For example, the guide for creating a report on cohort analysis spelled out what to pay attention to.
These guides helped pass along knowledge and avoid typical mistakes. But we also had to deal with unintentional mistakes.
Automation can help prevent recurring, minor errors. We built (and sell) our own tool to automate reports, like the example below for CPAs:
We got rid of hundreds of unintentional mistakes and the never-ending burden of fixing those mistakes; boosted our performance and total efficiency; and saved loads of time for creative tasks.
Decreasing unintentional negligence
Client tasks take approximately half our analysts’ time. Even so, sometimes something goes wrong, and answers to important emails from clients are delayed beyond the four-hour commitment in our SLA.
This dashboard helped us monitor analyst adherence to our SLA commitments:
When we recognized that the percentage of responses within four hours wasn’t perfect, we created notifications in Slack to serve as reminders.
To activate a reminder, an analyst sent a status (described below) to a separate email account without copying the client. Here’s the list of statuses we developed for the system of reminders:
Our analysts got notifications in Slack if the SLA time for a response was almost over, or if they had promised to write an email “tomorrow”:
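The reminder logic amounts to a periodic check over unanswered client emails. The four-hour SLA is from our agreement; the 30-minute warning buffer below is an assumption for illustration:

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=4)
WARN_BEFORE = timedelta(minutes=30)  # assumed buffer before the SLA expires

def needs_reminder(received_at, answered, now):
    """True if an unanswered client email is within WARN_BEFORE of
    breaching the four-hour SLA (or has already breached it)."""
    if answered:
        return False
    return now - received_at >= SLA - WARN_BEFORE

received = datetime(2019, 5, 1, 9, 0)
print(needs_reminder(received, False, datetime(2019, 5, 1, 12, 45)))  # True
print(needs_reminder(received, False, datetime(2019, 5, 1, 10, 0)))   # False
```

Run on a schedule, a check like this feeds the Slack notifications: any email returning True triggers a ping to the responsible analyst.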
Personal development plans
When an analyst created a task in Targetprocess, they estimated the time needed based on their previous experience (“effort”). Once they’d finished the task, they entered how much time was actually spent.
Comparing these two values helps us find growth zones and define the difficulty of execution:
For example, suppose an analyst spent much more time than average on a task with the Firebase tag. If that’s caused by low technical knowledge, we’ll add Firebase to their personal development plan.
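That effort-versus-actual check can be sketched as a simple flag over completed tasks. The tags, hours, and 1.5× threshold below are illustrative:

```python
# Hypothetical (tag, estimated_hours, actual_hours) rows from the
# task-management system.
completed_tasks = [
    ("Firebase", 4.0, 9.0),
    ("Dashboards", 5.0, 5.5),
    ("Firebase", 3.0, 7.0),
]

def growth_zones(tasks, threshold=1.5):
    """Tags where actual time exceeded the estimate by more than
    `threshold`x -- candidates for a personal development plan."""
    flagged = set()
    for tag, estimated, actual in tasks:
        if actual / estimated > threshold:
            flagged.add(tag)
    return flagged

print(growth_zones(completed_tasks))  # {'Firebase'}
```

A flagged tag isn’t automatically a skills gap (the estimate may simply have been wrong), so the output is a prompt for a conversation, not a verdict.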
By analyzing analysts’ efficiency on the individual level—while focusing on the educational opportunity—we solved our problem of tarring all analysts with the same brush for development plans.
Now, each specialist had a personally relevant, step-by-step guide for self-improvement, helping them grow faster.
We still have some questions to dig into in our department. Launching analytics for a real-life team is an iterative process.
Where will we go next? Fortunately, we have strong analytical instruments in our hands to help not only our clients but also ourselves. As you look at your situation, here are key takeaways:
The sooner, the better. Collecting, merging, and preparing data is about 75% of your effort. Make sure that you trust the quality of the data you’re collecting.
Start with an MVP dashboard. Focus on critical KPIs. Pick no more than 10 metrics.
Define what you’re going to do if a metric changes dramatically at 5 p.m. on Friday. You should have a plan if a metric rises or falls unexpectedly. If you can’t imagine why you’d need such a plan for a certain metric, reconsider whether you need to track it at all.
An average is just an average. Look at the extremes. Challenge the average when it comes to managing and developing people.
Use transparent and easily explained algorithms. Make sure your team understands the logic behind the algorithms and is okay with it, especially if KPIs influence compensation.
It’s easier to automate tracking than to make people log time. But you shouldn’t make it look like you’re spying on the people working for you. Discuss all your tools and steps for improvement with the team.