Lurking beneath every goal are dangerous assumptions. The longer those assumptions remain unexamined, the greater the risk.

– Jake Knapp, Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days

Imagine this scenario. You’re a marketer, and you’ve just launched a marketing campaign that you spent weeks or months building. You checked all your boxes:

  • You assigned roles and responsibilities.
  • You kept stakeholders informed along the way.
  • You activated all the right channels to reach your target segment.

But something is wrong. Hardly any prospects are opening your emails. Almost none are engaging with your ads. The only feedback you are getting is that certain elements on your landing page are broken and, worse, don’t load properly across devices and browsers. 

Your boss calls you into their office and asks: “What happened?”

The wrong answers would be:

  • “I just assumed prospects would open my emails.” 
  • “I assumed the team QA’ed the landing page.” 

Instead, the right answer is: “I’m going to find out where my assumptions led me wrong.”

In this post, I’ll walk you through a rigorous project management process to help you optimize your campaign strategy. Taking lessons from agile project management (specifically, sprints), I’ll show you how to build more effective, less assumptive marketing campaigns.

Adapting sprints for marketing (Image source)

Brands like 23andMe and Slack have adopted the Google Ventures design sprint because it works. For businesses aligned with the whole “lean startup” movement, the sprint offers a formula to quickly build, launch, and test products before committing too much time and effort to something that might not resonate in the market.

It can be easily applied to marketing because it’s built around making data-driven, research-backed decisions, which are critical to creating winning campaigns.

And even though Jake Knapp explicitly advises not to adapt the Sprint—I’ve done it anyway. Over the years, I’ve made small tweaks to suit the specific goals and needs of a marketing team and marketing campaigns. 

Here’s what my sprint looks like:

This slide comes straight from my upcoming course on marketing project management.

Sprints traditionally happen during a five-day timeframe, when product teams set aside everything else they’re working on. 

In my world, I don’t do that. Sometimes, our team will spend one week on a marketing sprint; other times it might take six. That’s the nature of shipping marketing campaigns—the sprint is a framework, not a mandate, for guiding our work.

Marketing sprint phases and goals:

Although I’ve adapted the framework and timeframe, the goals during each phase of the sprint remain true to the process:

  1. Map. Set your targets and objectives for what you want to accomplish based on feedback and research.
  2. Sketch. Ideate and pitch ideas for achieving your marketing goal.
  3. Decide. Vote and decide on your campaign content, channels, and tactics based on ideas pitched in Phase 2.
  4. Prototype. Build just enough of the campaign to get it ready for testing.
  5. Test. Get feedback on every aspect of the campaign so you can go back, make changes, and launch.

Now let’s look at what’s done in each phase to achieve those goals.

Phase 1: Map

The Map phase is all about research, collecting data, and understanding the problem so you can set clear, measurable marketing targets. 

The first thing you need to know is your objective. Is this a lead-gen campaign or a campaign to push a new affiliate program? Is it a nurture track to convert leads into paying customers or a brand play to increase awareness?

Once you understand the goal, you can move on to the research—how you’ll achieve it and the specific targets and metrics that indicate “success.”

There are so many ways you can collect data and do research. (In fact, CXL Agency has their own rigorous research process.) As a guide, I’d recommend a mix of primary and secondary research to inform how you set your targets.

For example, for a recent campaign to launch new pricing at CXL Institute, we conducted a series of industry interviews with pricing experts, ran an average revenue per user (ARPU) projection analysis for new plans, and went through a suite of usability tests on pricing mockup designs and copy.

One piece of feedback from usability tests was that users were confused by the “Pay once” option in our pricing. Users didn’t understand if it was an annual payment or if they would keep the product for life.

That feedback prompted us to add a small disclaimer to our pricing block making it clear that buyers keep the course forever.

Phase 2: Sketch

Once you’ve collected all your research and set your targets, you’re ready to jump into the Sketch phase. This is where you put all your creative marketing ideas to work.

In the first portion of this phase, each team member analyzes the research and comes prepped to an “ideation” pitch meeting with a fully baked campaign plan. Channels, content, messaging—it should all be there (or at least a skeleton of it).

A fully baked campaign prevents the meeting from turning into a mishmash of half-baked ideas that sound cool but might not make sense for the project. Often, asking for a full campaign plan leads team members to think of more complex, interesting ways of solving the problem.

It also helps marketers think more about connecting the dots across channels and assets since they’ve had to plan for it upfront. Here’s an example of what a campaign ideation pitch looks like, from a campaign I ran in my previous role at Unbounce:

Mocked up using Miro.

Each person explains their campaign to the team, and then everyone votes on individual ideas, concepts, or tactics from each campaign.

I usually give people three votes and one “super vote” (worth two votes). After voting, you’re ready to move into Phase 3.
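
To make the tallying concrete, here’s a minimal sketch (in Python) of how those weighted votes might be counted. The voters, campaign ideas, and vote counts are invented for illustration; nothing here is tied to any particular tool.

```python
from collections import Counter

# Hypothetical dot-voting tally: each person gets three regular votes
# (worth 1) and one "super vote" (worth 2). Names and ideas are made up.
regular_votes = {
    "Ana":   ["webinar series", "retargeting ads", "case study"],
    "Ben":   ["webinar series", "partner co-marketing", "case study"],
    "Carla": ["retargeting ads", "webinar series", "email nurture"],
}
super_votes = {
    "Ana": "webinar series",
    "Ben": "case study",
    "Carla": "retargeting ads",
}

tally = Counter()
for votes in regular_votes.values():
    tally.update(votes)        # each regular vote adds 1
for idea in super_votes.values():
    tally[idea] += 2           # a super vote adds 2

for idea, score in tally.most_common():
    print(f"{score:>2}  {idea}")
```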

Phase 3: Decide

In this phase, you collect all the top-voted ideas, organize them, and come up with your campaign plan.

Here’s where a decision-making framework like DACI comes in handy. In the DACI framework, you assign specific roles:

  • Driver. The person(s) responsible for leading the project and corralling all stakeholders.
  • Approver. The person who ultimately makes the decision.
  • Contributor. The person(s) with subject-matter expertise.
  • Informed. The person(s) who’ll be kept in the loop on how decisions are made.

In this phase, the DACI framework is especially handy because you need one person—the Approver—to decide how all the voted ideas come together into a plan. 

The Approver then comes back to the team and presents the plan to move forward, assigns roles to the Drivers, and pushes the campaign into the next phase.

Phase 4: Prototype

If you remember one thing about the Prototype phase, it should be this: Build just enough. Knapp outlines a four-step list of what he calls the “Prototype Mindset” in his book, and it goes as follows:

  1. You can prototype anything.
  2. Prototypes are disposable.
  3. Build just enough to learn, but not more.
  4. The prototype must appear real.

The prototype mindset, from Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days.

As a (self-aware) perfectionist, I can’t tell you how many times I’ve been in the Prototype phase and wanted to just spend a little extra time polishing a landing page, ad, or email. Resist the urge.

Prototype of a fake ad for launching Unbounce popups, created with stock imagery and quickly mocked up in Photoshop.

The whole point of this phase is to build only what you need to get an authentic answer from a potential user in the next phase: Test.

Your aim is to move through the Prototype phase quickly so that you can actually learn (and improve) based on real feedback. Plus, the more time you waste making something perfect, the more frustrating it’s going to be when you have to change it later.

Phase 5: Test

Congrats! You’ve made it to the final phase—where the real magic happens. During the test phase, you get user feedback on the prototypes you’ve built.

First, conduct a series of interviews (ideally with your customers). According to Knapp, conducting at least five interviews during a sprint is enough to get real insight. Any fewer and you might be operating on false information.

Real screenshot of prototype testing.

Ask all interviewees the same questions. You’re looking to discover:

  • Are they interacting with the prototype the way you intend? For example, if you want them to hover over a tool-tip on your landing page to discover more info, are they doing that?
  • Is their reaction positive or negative? For example, is your messaging resonating with them? If you added a joke to your email copy, did they get it? Did they laugh?
  • Are they motivated to complete the action? For example, are they finding and clicking the call to action? Is the offer something they seem enticed by?

After you conduct interviews, transform feedback into “How might we” statements. Originally developed at Procter & Gamble in the 1970s, the “How might we” technique rephrases every piece of feedback (positive, negative, or neutral) as a question that incites action.

For example, say you’re testing an email in a nurture campaign to convert leads into customers. A piece of feedback you might receive is: “Get to the point faster, I skim emails.”

Your role is to transform that feedback into a question: “How might we accommodate people who skim emails?”

The benefit of this technique is that it doesn’t immediately present a solution, empowering you and your team to come up with the best answer. For example, you could solve for skim readers in a few ways:

  • Reduce the amount of copy in the email. 
  • Use bolding and bulleting to break it up and call attention to the main points.
  • Reorder copy so the main call to action and thesis is at the top.

Once you’ve transformed your feedback into action items, you need to prioritize. Often, you’ll get a ton of feedback, and you need to decide which feedback to put into action. Sometimes, you might not have enough time to do it all, and that’s okay. 

Prioritizing feedback should be based on two factors (a small scoring sketch follows the example below):

  1. How important it is to the campaign’s success. If something’s broken, you need to fix it.
  2. How often that piece of feedback came up. If everyone said they didn’t understand the headline, you probably need to rewrite it.

An example feedback prioritization sheet from an Unbounce campaign.
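
If it helps to formalize that prioritization, here’s a minimal sketch that scores each piece of feedback on the two factors above. The items, severity scale, and weighting (severity counts double) are assumptions for illustration, not part of the sprint framework.

```python
# Hypothetical prioritization of tester feedback by (1) how critical it is
# to the campaign's success and (2) how often it came up.
# severity: 3 = broken/blocking, 2 = hurts comprehension, 1 = nice to fix
feedback = [
    {"item": "CTA button broken on mobile", "severity": 3, "mentions": 2},
    {"item": "Headline unclear",            "severity": 2, "mentions": 5},
    {"item": "Joke in email fell flat",     "severity": 1, "mentions": 1},
]

for entry in feedback:
    # Weighting is an assumption: severity matters more than frequency.
    entry["priority"] = entry["severity"] * 2 + entry["mentions"]

for entry in sorted(feedback, key=lambda e: e["priority"], reverse=True):
    print(f'{entry["priority"]:>2}  {entry["item"]}')
```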

From there, you’re armed and ready with a tested campaign that you can remix, fix, and—most importantly—launch! 

Conclusion

The sprint is an effective project management process that you can apply to any marketing campaign. It ensures your work is data-driven and research-backed.

Ideally, sprints aren’t a one-and-done experience, either. A sprint lets you observe a campaign in the wild and, if it’s not hitting your targets, make tweaks and changes until it does.

If you want to learn more project management tools, techniques, and processes, check out my course at CXL Institute on project management for marketers, launching August 5. I’ll be covering the sprint process further, as well as walking you through how to iterate from annual to quarterly, monthly, and weekly planning so that your marketing team is set up for success.

“Getting great results” and “creating great reports” are very different skill sets. If you’re like most marketers, you’d rather sharpen your subject-matter expertise than spend time in PowerPoint.

The result is that reporting becomes an afterthought rather than an opportunity—a “necessary evil” with imperfect solutions:

  • Manual reporting is too time-consuming, but it’s been the only way to report on the right platforms with the right analysis.
  • Automated dashboard reports save time but bring limited functionality and don’t help clients understand the story behind the scorecards.

Fortunately, Google Data Studio can automate the time-intensive tasks of data compilation and report building without sacrificing important context and insights.

While Data Studio gives you an ideal platform for report creation, there’s a final step to transform data into a story that drives your clients to decision and action (such as pivoting strategy, approving new resources, or simply choosing to retain your services). That step is not so easily automated.

So before you start building your Data Studio report, make sure you know what to include—and what to leave out—to create a compelling client report.

Clients need stories, not just data

In data-driven industries, it’s easy to imagine that we can “let the data decide,” but that’s actually not the function of data. It’s our job to help our clients interpret the data so they can approve recommendations and take action.

(Image source)

While dashboards and data snapshots bring value to marketers and analysts, they’re usually insufficient for clients. A Deloitte Canada study revealed that 82% of CMOs surveyed felt unqualified to interpret consumer analytics data.

As Google’s Digital Marketing Evangelist Avinash Kaushik explains:

People who are receiving the summarized snapshot top-lined have zero capacity to understand the complexity, will never actually do analysis and hence are in no position to know what to do with the summarized snapshot they see.

To build useful reports, we need to move beyond simply summarizing performance with quick charts. We need to help clients understand the story.

The benefits of data storytelling

If “storytelling with data” sounds both vague and intimidating, you’re not alone. Storytelling evokes ideas of creativity and even fiction, a sharp contrast to the left-brain data and analysis tools we’re accustomed to using.

Telling a story in a report doesn’t require a cast of characters, anecdotes, or plotlines. Essentially, you need to follow the same UX advice you’ve been giving your clients for decades: don’t make them think.

Your readers need more than features (facts and figures) to take action. Story provides context so that they understand where to focus their attention. Storytelling also heightens emotions, which is vital because decision-making is driven by emotions, not logic.

Make your data storytelling emotional

The words emotion and motivation are derived from similar Latin roots. The more your clients can feel something, the more motivated they’ll be to act.

Marketers may be tempted to highlight wins and gloss over losses in reports to nudge their clients to feel joy (or at least satisfaction). But this strategy can backfire. 

Your clients need to know about what’s not going well—even more than they need to know about what’s working. Due to what’s known as attentional bias, we’re wired to pay attention to perceived risk and to largely ignore the status quo.

People also respond differently to winning and losing: losses feel roughly twice as powerful as equivalent gains. When your clients can see and experience a loss, you place them in a highly motivated state to take necessary action and, if necessary, change course.

To illustrate, let’s say you were responsible for driving 4,000 net new email subscribers each month. You’re hitting the goal, but the steady increase in list size isn’t growing revenue—a fact that’s been overlooked and gone unreported.

By drawing attention to the discrepancy with a visualization, you can drive a discussion that wouldn’t be possible if you focused only on list growth. 

With this new (alarming) information, you can revisit targets, value per subscriber, or changes needed for lead nurturing and sales.

With client work, there’s always a temptation to default to “everything’s sunny all the time” reporting. But those reports do a disservice to the client and the agency, even if they are more comfortable to deliver.

Transparency about actual market conditions, threats, and challenges is a catalyst for real improvement. If problems aren’t examined, nothing changes.

3 key elements of data stories

Analytics evangelist Brent Dykes says that storytelling with data needs three elements to drive change: data, visuals, and narrative.

When all three elements work together in your report, you reduce the cognitive load placed on clients, helping them easily identify and process the story. Reports that showcase only raw data are insufficient but are still used surprisingly often.

Adding charts and graphs can help with comprehension, especially when they employ good design principles. Our brains process high-contrast images subconsciously (before we can make sense of the data). These visual properties, known as preattentive attributes, include:

  • Form;
  • Color;
  • Movement;
  • Spatial positioning.

When you apply preattentive attributes to chart creation, you help your reader find the story more clearly and quickly. Notice the impact of adjusting weight and color in these Data Studio line charts:

Narrative is the final key element for story, but it’s often missing from reports—making it difficult for clients to understand and engage with the data. Narrative provides context for your readers; it’s the answer to the question, “What am I looking at?”

Journalists begin their stories with fast facts: who, what, where, when, and why. This style, known as the inverted pyramid structure, puts the most important information first and gives supporting details further in the story.

Readers are accustomed to this style, and they assume that the earliest information is the most important.

  • On a macro level, the report should begin with account performance before diving into supporting details.
  • On a micro level, each section or chart should lead with priority metrics, or KPIs, followed by secondary metrics. 

Many reporting tools lead with secondary metrics, which measure “what must not be broken” (instead of “what needs to be fixed”). That focus can encourage clients to overweight metrics that you shouldn’t optimize. Always start with the big picture.

What your reports should contain

Before we explore what specific information “clients” need from report deliverables, we have to address the fact that businesses, roles, and people aren’t all the same. A small business owner has different priorities than a CMO. Some clients want to see all the data; others just want the phone to ring.

Keep your specific client in mind as you build out your report, because details that satisfy one person can overwhelm another. That said, there are three universal guidelines that will make all your reporting better, no matter the end user. 

Your report should include:

1. What happened

Your clients hired you to solve problems, so your report should address those problems, and the progress made in solving them. This starts with basic benchmarks:

  • What are the KPIs, and were targets met?
  • How are we performing compared to previous periods? 

As mentioned above, it’s not the job of a report to showcase only the wins. Be consistent with your key metrics (don’t cherry pick flattering stats), and make it as easy as possible for readers to interpret the data you’re sharing with them. 

2. Why it happened

Once your clients know what happened, they’ll want to know why. Sometimes, changes are due simply to natural variance, but you’ll want to document causal factors:

  • External changes. Your report can reveal changes to the competitive landscape, document the impact of seasonality and news cycles, and illustrate the effect of algorithm updates. 
  • Internal changes. Note if there were changes to marketing efforts (whether on- or offline), page content, site speed, availability of inventory, offers or promotions, or pricing. Also document if tracking changed or went down. 
  • Your team’s involvement. Show progress made on tasks, including implementation and production. Note how your team helped accomplish wins or mitigate losses.

Clients want to see the return on their investment in your team. And according to the labor illusion effect, they’re happier when they feel like you’re working hard for them—whether or not that work affects the outcome. 

3. What should happen next

Just as it can be hard for novices to tease out benefits and outcomes in product copy, it can be challenging to write recommendations in reporting (e.g. “Your tracking is broken. So as a next step…we recommend you fix it.”)

The purpose of next steps isn’t necessarily to introduce groundbreaking ideas or plans but to create a clear path forward. What may feel redundant or obvious to you can provide needed specificity to your client that increases the likelihood they’ll take action. If performance isn’t meeting expectations, it’s especially important to provide recommendations that address the shortcoming.

When writing next steps, use the active voice and assign responsibility wherever possible. “This discovery should be investigated further” does not help your clients know what to do, or who should do it. “Client to provide updated content roadmap by August 15” does.

Now that you know how to tell a story and what to include in your report, it’s time to create it in Google Data Studio.

Create your report in Google Data Studio

1. Start a new report > Choose a template

After logging in to Data Studio with a Google account, your first step will be to create a new report.

You can choose a blank report or avoid “blank page syndrome” by beginning with a Data Studio template. Templates are available within the platform or from the Report Gallery. (Many marketing teams have published their own.)  

As a reminder, don’t be fooled by the apparent convenience of templates; even the best ones are still tools, not client deliverables. You’ll need to spend time strategically customizing whichever template you choose to transform it from a one-size-fits-all dashboard into a report that provides value for your client.

2. Connect your data sources

Data Studio makes it easy to connect directly to your data source(s). You can currently select from 18 Google connectors built and supported by Data Studio, such as Google Analytics, Google Ads, and YouTube Analytics. You can also upload your data via CSV or access it through Google Sheets or BigQuery.

Here’s a quick walkthrough of how to add a data source:

If you run into limitations accessing data sets or fields through Google products, you can choose from 141 partner connectors and 22 open-source connectors, with more connections being regularly added. 

Because you can connect to multiple sources in a single report, you don’t need to prepare or curate your data sources before connecting. Individual charts in Data Studio each use a single data source by default, but you can use shared values (join keys) to create blended data of up to four other sources.
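
Blending itself is configured in Data Studio’s UI, but if it helps to think in code, the concept maps to an ordinary join on a shared key. Here’s a rough pandas analogy; the sources, column names, and numbers are invented for illustration.

```python
import pandas as pd

# Two "data sources" sharing a join key ("landing_page"), loosely analogous
# to blending a Google Analytics source with an ads source in Data Studio.
ga_sessions = pd.DataFrame({
    "landing_page": ["/pricing", "/blog/post-a", "/blog/post-b"],
    "sessions":     [1200, 800, 430],
})
ads_spend = pd.DataFrame({
    "landing_page": ["/pricing", "/blog/post-a"],
    "ad_spend":     [950.0, 310.0],
})

# The shared column acts as the join key; rows without a match keep NaN.
blended = ga_sessions.merge(ads_spend, on="landing_page", how="left")
print(blended)
```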

Once your data sources are connected, you can begin formatting the presentation of your report.

3. Create impactful visualizations

Visualizations can increase your reader’s understanding of the data on both conscious and subconscious levels. The more clarity your charts provide, the easier the story is to interpret.

Choose the best chart for your data

Adding a chart in Data Studio is an easy dropdown. Selecting the right chart visualization takes some thought.

Be sure that each chart adds meaning to your report; don’t compare metrics or create segments just because you can. Your clients will seek patterns and meaning even where they don’t exist, and it’s far easier to omit useless charts than to explain why a perceived trend is actually just noise.

That said, you can create multiple charts to increase comprehension. By grouping distinct charts (rather than relying on viewer-enabled date-range or data controls), you clarify relationships, composition, and trends without requiring clients to conduct discovery and draw their own conclusions.

Following the inverted pyramid guidelines, you can show high-level, aggregate performance with one chart, and break out performance with another. Or, use side-by-side time series charts to “zoom in” on recent performance and “zoom out” on trends over time, giving your client at-a-glance context. 

If you’re unsure of which visualization to use, chart selection tools (like the one below from chartlr) can help you choose the best chart types for your data and objectives. 

Enhance charts with preattentive attributes

When charts are busy or cluttered, add contrast to clarify the story. Edward Tufte’s Data-Ink ratio suggests minimizing the amount of non-essential “ink” used in data presentation.

Data Studio makes it easy to adjust color, weight, and scale (as well as grids and axes) to create contrast and emphasis. This is handled in Data Studio’s Style panel, where each metric series is individually controlled:

Data Studio also has some built-in visualizations to help with quick data interpretation, such as the red and green time comparison arrows found in scorecards and tables.

Be sure to review whether the colors correlate with positive or negative change for the metric. “Green is up” works great for site visits, but a CPC increase with a green arrow is confusing for readers. You can override the default settings in the Style panel.

4. Provide narrative and context

Narrative creates a sense of setting, time, and place for your reader and connects concepts, ideas, and events.

Help your readers make sense of data and visualizations by clarifying relationships and including background information they may not immediately know or remember. 

Establish hierarchy and organization

As with any document, a consistent layout and hierarchy in your report orients your readers. Data Studio is not a word processor, and it doesn’t apply style sheets or standardized formatting the same way. 

You can control layout and theme properties to provide a consistent look and feel. As you create (or duplicate) pages on the fly, pay attention to heading areas, positioning, text, and font size. 

You can apply elements to all pages by selecting them and making them report-level. 

Group thematically similar charts and data together. It helps to have one main idea per page (or slide) to reduce the amount of information your reader processes at one time.

Leverage headings and microcopy

Headings are good; better headings are better. Again, your goal with headings is more to orient your reader than to repeat what they’re about to read.

Microcopy gives your readers additional context about a page element, and it’s extremely easy to add to your Data Studio report.

You can use microcopy to spell out acronyms, provide definitions and annotations, cite targets and objectives, or otherwise reduce friction for your clients as they work to understand the data.  

In this Data Studio template screenshot, the heading, microcopy, and metric labels all repeat each other. This is fine for a template but would add little value for clients.

With just a bit of customization, each text element serves a purpose. (Note that the metrics have also been re-ordered to lead with KPIs.)

Add context with chart deep dives

Context and text are not synonyms; context doesn’t have to be lengthy sentences—and doesn’t even need to be words at all.

The “why” of what happened is..

  • Show original
  • .
  • Share
  • .
  • Favorite
  • .
  • Email
  • .
  • Add Tags 

Best practices are starting points: If you have no data, start with these. They are not what you should end up with, but they’re often where optimization begins. That’s an important distinction.

This post applies Jakob Nielsen’s 10 Usability Heuristics to B2B websites that focus on lead generation (as well as “high consideration” B2C sites that lack any transactional functionality).

Usability heuristics are “best practices” for user interface design. When applied to your site, these tenets help reduce friction and keep buyers focused on your message—rather than distracting or confusing them with a deficient or incomplete interface.

B2B websites often have to explain a lot to get buyers to convert. The higher the value of what you’re selling, the higher the inherent friction; therefore, the more questions you’ll need to answer throughout your site.

In these situations, information architecture is much more complex. Some B2B firms are notorious for ignoring this reality because, they argue, “we don’t actually sell anything on our website.” That argument rarely holds up.

When redesigning a website, especially if it’s a radical redesign, these heuristics are a north star—reliable criteria to decide between alternative page designs, functions, or ways to layer your information. For each of Nielsen’s 10 heuristics, we provide commentary and examples for B2B site designs.

1. Make your system status highly visible

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

Jakob Nielsen

Translation for B2B websites: Always tell your buyers where they are when navigating your site.  You can achieve this through the use of:

  • Breadcrumbs. Breadcrumb navigation works like a GPS, telling your buyers where they are on your website at all times. Plus, your buyers have a path laid out that tells them how they got there. Use breadcrumb navigation on your site, whether that’s location-, attribute-, or path-based.
  • Page headers. The page header should resemble the copy of the navigation items or links. This is a good practice not only for SEO but also for user experience. If the page header matches what the user clicked, the buyer will be reassured.
  • Highlighting selected menu options. When you click on a navigation item, keep it highlighted, bolded, or underlined, so that your buyer gets instant feedback about menu options.
  • Show progress bars. Include loading indicators during page load. If buyers are trying to load a calculator widget or process a request, a progress bar or notification of some sort lets them know what’s happening.
  • Thank you pages. Thank you pages are great indicators of current status.  If your buyer downloads an ebook or signs up for a webinar, the Thank You page confirms the action that was just taken.

If you skip these elements, your site will confuse buyers, who will wonder where they are—a completely unnecessary friction point.  Make your navigation clear.

The Berkshire company does it right, providing their buyers feedback about exactly where they are while navigating their site, and the breadcrumbs tell them how they got there:

In the case of MSC Industrial Supply Company, buyers who want to view their specialty brochures see a rotating progress wheel that indicates what percentage of the brochure is loaded:

Above all, remember that good websites must answer buyers’ questions before they think to ask them.  If buyers get distracted trying to navigate your website or are left wondering if something is happening after a click, they’ll get frustrated.

Take this homepage as an example:

All good there, right?

Well, now look at this interior page and try to guess which section of the site was clicked, or if you can tell where in the site you are:

Nope. You have no idea where you are.

Here’s an example of how a thank you page can act as a system status confirmation:

2. Match your system to the real world

The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

Jakob Nielsen

Translation for B2B websites: Use phrases and words that your buyers are already thinking about. Eliminate jargon. Your buyers must come across words, phrases, and concepts that are familiar to them.

The best language and tone of voice should come directly from your buyers’ mouths. Spend time talking to your customers and ask them:

  • What your product means to them;
  • What problem it solves;
  • What their life was like before using it.

Techport 13 does it right. Their value proposition explains what they do in very simple language:

Their buyers literally use terms like:

  • “roll out new software”;
  • “train the team”;
  • “do customizations and enhancements”;
  • “make IT run smoothly.”

This was intentional and based on research. In sharp contrast, look at the language from their homepage before their website redesign:

Their clients didn’t use terms such as:

  • “Our resources, products and proven strategies”;
  • “attain the most of”;
  • “…our employees have been expertly applying their wealth of experience to our clients”;
  • “a diverse set of client experiences…”

You get my point.

3. Allow user control and freedom

Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

Jakob Nielsen

Translation for B2B websites: Eliminate anything that takes control out of the user’s hands.  Here are three examples in which this heuristic is commonly violated in web design:

Pop-up offers

We’ve all visited websites where a pop-up window suddenly appeared and asked us to join an email list or take a survey. While intrusive pop-up windows like these are annoying (pop-ups do work when done right), they’re worse if your buyer can’t reject them.

If you want to collect feedback, give users 100% control. Let them reject your offer. This will actually increase the quality of your surveys: Those who opt in are more likely to be honest and genuine.

Grainger is an industrial supply company. On their website, buyers clearly see the helpful “No Thanks” microcopy right underneath the big, red “Join” call-to-action button. Buyers appreciate this thoughtful detail because it keeps them in control of the experience.

On the Design&Function blog, a slide-in call to action shows up only after users begin scrolling. The ebook offer is also collapsible, so the buyer can reduce its visibility or come back to it later:

Autoplay videos

Another pet peeve for buyers is a website that autoplays a video. This can be a nuisance, especially if it defaults to sound on. Don’t assume what your site users will want to do—let them decide when (if ever) to play your video. Video content should still be supplemental to text information.

Here’s how Square does it:

Automatic carousels

Another example of loss of control that causes anxiety and frustration is the automatic sliding banner. In addition to causing frustration, CXL research has demonstrated that automatic carousels don’t work.

Instead of using this distracting element, layer information in a way that makes it easy for buyers to discover and explore with full control over their experience.

Here’s an example of a better way. WSI franchise uses tabs to walk the user through related content:

4. Use consistency and standards

Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

Jakob Nielsen

Translation for B2B websites: The last thing you should subject your buyers to is a sense of confusion. They shouldn’t wonder if words, situations, or actions really mean the same thing. Websites are not puzzles. Create fluid experiences that eliminate guesswork.

Massey Ferguson, a leader in tractors and global harvesting equipment, has a website that exemplifies consistency. On every page, whether the homepage or a product page, buyers see the same use of white space, a clean layout, and a well-organized information hierarchy.

This keeps buyers calm and makes it easy to scan the site quickly for important information. Consistency and conventions make your website “learnable,” and that’s a good thing—it will appear easier to use.

Another example: Throughout the Sprint Business site, you see the same elements in the navigation bar that make it easy for buyers to know where they are. In addition, the same drop-down menu appears in the layered navigation on every page accessed via the navigation bar.

Compare that to Georgia-Pacific. They’re a huge corporation, yet the experience across related brand sites differs—navigation styles and standards change. This isn’t a unified experience and may cause confusion for some B2B buyers.

Now, one could argue that because the company is so large and the markets so varied, this is a lesser issue than if it occurred on the same site (or with the same pool of buyers).  In the worst-case scenario, these designs would:

  1. Force buyers to adapt to different interfaces on a different section or microsite;
  2. Cause some buyers to think that they had actually left your main site.

5. Prevent errors

Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

Jakob Nielsen

Translation for B2B websites: The best defense against errors is to avoid them in the first place. When you design carefully and mindfully for the user experience, errors don’t pop up much. This may require several iterations of usability testing and improvements to your site.

Here are five errors that are easily preventable:

Typing the wrong info in a web form

This is accentuated when, in an effort to make forms clean and sleek, field names are placed inside the fields themselves. Once the buyer clicks on the field, they need to remember what’s supposed to be there.

Make it easy on your buyers. Don’t stretch their short-term memory; put field labels outside of the fields.

Causing users to mistakenly perceive a “wrong” click

When a call-to-action button doesn’t resemble landing page language (has no “scent”), it causes doubt or frustration. Take this example:

The landing page has nothing to do with the expectation the call to action created; hence, it’s easy for the user to think they made an error.

Let’s look at Simply Business’ flow to invite users to request insurance quotes. First, this is the call to action on their homepage:

And this is where you land:

There are good things and bad things here. Two stand out:

  1. Good: There’s helpful explainer text for users if they have a question about the form.
  2. Bad: The headline doesn’t provide a message match to the call to action on the homepage. (It doesn’t provide any useful information at all.)

Failing to include autocomplete for site search

Search boxes are another place where users commonly make mistakes. The “auto recommendation” feature can work wonders for users. Take Google, for example. Every time you enter a short- or long-tail keyword, Autocomplete shows matches to..

Links to blog posts or long-form resources increase those pages’ search visibility and build awareness. They also help sites rank for bottom-of-funnel terms—a rising tide lifts all boats.

Some content marketers have it “easy,” working in highly visual industries (e.g. food, fashion) with wide appeal. That simplifies content creation and link building compared to, say, trying to promote niche B2B software.

Given the potential benefits and challenges in SaaS, who’s doing it well? To find out, I ran a study to benchmark content marketing performance for 500 SaaS companies.

While the initial research covered many elements—and focused mostly on numbers—this article reveals the strategies that led to successful link building.

Data and methodology

My initial research found that, on average, the top-performing articles by major SaaS companies generated backlinks from just 9 referring domains.

In this post, I focus on a subset of that data—55 articles that significantly outperformed the rest of the field. These articles generated three times the average number of backlinks (at least 27 referring domains).

I also filtered posts to include only those that contributed at least 5% of the site’s total links pointing to pages other than the homepage (this filtering is sketched in code after the list below).

  • This filtering helps control for site size—a post on NYTimes.com that earns 100 links isn’t noteworthy, whereas one on a personal blog that earns 20 may be exceptional.
  • Excluding the homepage is a quick way to remove a major, non-content outlier that would otherwise skew the total link count.
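
Here’s that filtering sketched in code. The thresholds mirror the criteria above (referring domains at least three times the 9-domain average, and at least 5% of the site’s non-homepage links); the sample rows are invented.

```python
import pandas as pd

# Hypothetical per-article data; column names and values are illustrative.
posts = pd.DataFrame({
    "url":                         ["/study", "/glossary", "/changelog"],
    "referring_domains":           [815, 31, 4],
    "share_of_non_homepage_links": [0.39, 0.03, 0.01],
})

MIN_RDS = 9 * 3  # three times the 9-referring-domain average = 27

outperformers = posts[
    (posts["referring_domains"] >= MIN_RDS)
    & (posts["share_of_non_homepage_links"] >= 0.05)
]
print(outperformers["url"].tolist())  # -> ['/study']
```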

By assessing the strategies behind those 55 articles, I’ve identified five shared features. If you’re creating content for your SaaS company, these are the themes and practical ideas to add to your content calendar.

5 ways that SaaS content earns links

1. Become a point of reference.

Eight of the articles in this list (15%) are original research. There’s a mix of infographics and in-depth studies, all of which generate lots of inbound links.

According to a 2017 study, research is the most efficient type of content for earning backlinks. A 2018 study found that 74% of marketers who conducted original research saw increased website traffic as a result (though only 49% reported generating backlinks).

Example 1

“What the Most Common Passwords of 2016 List Reveals [Research Study]” by Keeper

Word count: 483 | Backlinks: 3,100 | Referring domains: 815 (39% of all RDs) | Date of publication: 2017

This hybrid blog post and infographic presents the results of Keeper’s study that assessed the most common passwords in 2016. (They sourced their data from recent data breaches.) The post now accounts for 39% of all referring domains to pages other than the homepage.

Why did it succeed?
  • Smartly sourced, hard-to-find data. Passwords, of course, are usually private. The study reveals rarely unearthed data that people are naturally curious about. Using data from a data breach as a source was also a clever way to conduct research quickly and cheaply.
  • Ranked answers. Humans love rankings. A random list of “common passwords” wouldn’t have had the same impact as the rank-order version. Does it matter which is fourth versus sixth? Nope. But it’s a more engaging way to frame the content. 

What could they have done better?
  • The visual presentation isn’t impressive. The main report is a clickable link to a PDF, which doesn’t make much sense. For one, many links and visits may go to the PDF version (rather than the HTML version, which includes more company info, navigation, calls to action, etc.). Second, a PNG would’ve made the pseudo-infographic embeddable on other sites to help earn even more links and referral traffic.
  • There’s no segmentation of data. In all likelihood, the data breach contained more than just passwords—it probably contained emails and, perhaps, addresses, too. Additional variables would enable additional reports (i.e. more pages to link to) or more targeted versions to increase the content’s appeal (e.g. “What are the most common passwords in Canada?”).

Example 2

“New Study: 2018 PROMO Online Video Statistics and Trends” by PROMO

Word count: 1,792 | Backlinks: 263 | Referring domains: 75 (10% of all RDs) | Date of publication: 2018

This is another example of a hybrid blog post and infographic. It presents the results of PROMO’s study looking at the video habits of 500 people across all age ranges, and the post now accounts for 10% of all referring domains to pages other than the homepage.

Why did it succeed?
  • Visuals for each statistic. In addition to the large infographic at the start of the article, PROMO created visuals for each statistic that can easily be embedded on other sites. This means there are three different ways to link to this article: quoting statistics, embedding the infographic, or embedding the images for individual statistics.
  • Video is a hot topic. For marketers, video is having its moment. This Google Trends chart shows how searches for video marketing have increased over the past five years, and the arrow indicates when this article was published—right before peak interest.

What could they have done better?
  • There’s no segmentation of data. This article looked at video viewing habits of people globally, from teenagers to seniors. Breaking down this data into smaller subsets might reveal additional insights (e.g. how video habits vary by age, gender, or region).
  • No extra statistics. All of the stats in the article are covered in the infographic, which is shown at the start of the post. While the article goes into more detail about each statistic, everything’s been covered before you get to the bulk of the copy. For users, there’s little motivation to spend time reading the article.

Takeaway: You can scale your research project to your budget.

The research studies I looked at varied in scope: PROMO assessed 500 people’s video habits; Keeper analysed 10 million passwords. As another example, Gong used AI to analyse 519,000 discovery calls to understand what drives success. All of these projects generated backlinks.

You don’t need an expansive study to generate links. Even small studies can build credibility for your company as an authoritative source on a topic.

The main drawback of research studies? They age. Over time, statistics become less relevant; the links you earn in 2019 could go to another, more recent study in 2020. If you invest in original research, consider a topic that:

  • Has enduring interest;
  • Is feasible to update annually.

2. Share others’ research.

An additional seven articles (13%) share other people’s research as infographics, statistical roundups, or text commentary.

You might assume that most people would link directly to the original research, but this shows that, for the purpose of earning links, aggregating research from multiple sources may be just as effective as doing your own.

Example 1

“22 Mind-Blowing Mobile Payment Statistics” by BlueSnap

Word count: 904 | Backlinks: 52 | Referring domains: 35 (7% of all RDs) | Date of publication: 2017

This article collates 22 statistics about mobile payments from 13 sources, covering security, users, market share, and global statistics. It accounts for 7% of all referring domains pointing to pages other than the homepage.

Why did it succeed?
  • Built credibility by referencing multiple studies. By bringing together relevant statistics from a number of sites, BlueSnap delivered a more credible resource—you don’t need to scour the web to find out if a single research study is corroborated (or debunked) by other studies. 
  • Took advantage of human laziness. If you’re writing about this topic, it’s much easier to link straight to this list three or four times than click through to each original source. For example, articles by ConversionFanatics and Fourth Source both reference statistics collated by BlueSnap, linking to this article instead of the original sources.

What could they have done better?
  • Visual presentation is uninspiring. This article is all text. There are no visual elements to add interest. Illustrating statistics would make them more shareable and be an additional incentive to link to their article instead of the original sources.

Example 2

“10 Year-End Giving Statistics Every Fundraiser Should Know” by Neon

Word count: 798 | Backlinks: 196 | Referring domains: 111 (17% of all RDs) | Date of publication: 2016

Similar to the article above, this one collates 10 statistics about year-end donations from various sources. It accounts for 17% of all referring domains aside from the homepage.

Why did it succeed?
  • Visuals for each statistic. With a custom image for each statistic, this article looks more like a piece of original research than a list of stats. These visuals can be embedded on other sites, providing another way of linking to this article beyond simple text.
  • Looks like the original source. Many websites cite Neon not just with a link but in the anchor text as well. Some go so far as to credit Neon (erroneously) as the source of the research:

What could they have done better?
  • More data. Ten statistics is a good starting point—it delivered lots of links for Neon, but it’s still a small number, and the data is growing older by the minute. (It was originally published in 2016.) Updated, expanded statistics could justify another round of promotion and keep the post current.

Takeaway: No research? No problem.

If you don’t have the resources to conduct original research, aggregating a list of reputable industry statistics might be the next best thing. You can reap all the benefits of original research—without actually doing any.

Statistical roundups are valued resources that build credibility, sometimes at the expense of those who ran the original research studies.

3. Make the news—for better or worse.

My SaaS content marketing study found that PR-style content received twice as much organic traffic as blogs that focused on educational content. That statistic won’t hold true for mom-and-pop shops. But it does work for big companies whose fortunes qualify as newsworthy events.

Some events that earned links were intentional—acquisitions and funding announcements, for example. Others, like security incidents, were less desirable but nonetheless impactful.

Example 1

“Salesforce Signs Definitive Agreement to Acquire Datorama” by Datorama

Word count: 272 | Backlinks: 89 | Referring domains: 51 (21% of all RDs) | Date of publication: 2018

A couple of acquisitions showed up in this data set, attracting coverage in tech press and the wider business press. For example, this article is Datorama’s announcement of their acquisition by Salesforce, and it accounts for 21% of referring domains to their site.

Why did it succeed?
  • Big name acquirer. With Salesforce as your acquirer, it automatically becomes big news in the world of sales and marketing, at least for a while. It helped that Salesforce linked to Datorama’s page in their article announcing the deal, bringing this piece to the attention of their wider audience.
  • Direct message from the CEO. Make no mistake: This “article” is a press release. But it’s authored and signed by Ran Sarig, Datorama CEO and co-founder, which makes it more interesting and engaging than a dry, anonymous press release. It almost comes off as an “opinion” piece, embedding the CEO’s take on the acquisition within the page and, thus, turning it into a source for journalists reporting on the acquisition.

What could they have done better?
  • Logos! This is another all-text article, with no featured image in the post or header. A visual that featured both the Datorama and Salesforce logos could have provided a visual way of communicating the acquisition—ideal for sharing on social media or earning image links.

Example 2

“May 31, 2017 Security Incident (UPDATED June 8, 2017)” by OneLogin

Word count: 646 | Backlinks: 941 | Referring domains: 395 (36% of all RDs) | Date of publication: 2017

“There’s no such thing as bad press.” That’s probably not true in general. But as far as earning links goes, maybe it is.

This article details a major security incident, covering the scale of the incident and the impact on customers and their data. This was big news and widely linked to—it accounts for more than one third (36%) of all referring domains, other than those pointing to the homepage.

Even if OneLogin wished they didn’t have to publish such an article, doing so gave them some control over the narrative—and earned plenty of links. (It was a good day for their SEO team, at least.) 

Why did it “succeed”?
  • It was national news. This incident made it onto the BBC website. For many companies, a data breach (hopefully) won’t make it into national news.
  • Ongoing updates kept it relevant. This security incident was reported on May 31, 2017, with updates shared on June 1 and finally on June 8. This meant affected customers (and news outlets) had one page to refer to for up-to-date information while the incident was being resolved.

What could they have done better?
  • Customize the design of the page. The audience for this page is unique—customers and news readers concerned about the incident. If they wanted to make the most of a bad situation, they could’ve devoted more page space and copy to talking about the company (in a positive light) and offered a more relevant call to action than “Sign up to receive a newsletter.” 

Takeaway: Self-promotional content can work.

Not all links come from educational content. It’s okay to write about your company, and a self-promotional focus can bring in tons of links if your company qualifies as newsworthy—or is acquired or connected to someone who is.

It’s tough to argue that “bad news” is an opportunity. But it does earn links. Quite often, those links come from powerful news outlets.

Whether the news is good (funding, acquisitions) or bad (data breach), here are 10 sites with decent domain ratings (60+) that linked back to articles in this subcategory:

  • B&T Magazine
  • Ad Exchanger
  • CIO Dive
  • CMSWire
  • Diginomica
  • MarTech Series
  • Mobile Marketing
  • The Drum
  • TechCrunch
  • CFO.com

Put these sites on your outreach list if you’re making news.

4. Go bigger or better.

The skyscraper technique is a popular strategy for building links. You find popular content about an important (read: high-volume) subject and create a better version.

There are four key ways to improve upon existing content:

  1. Length. If someone shares a list of 20 must-use tools, make a list of 25.
  2. Newness. Has someone published a roundup of top tips for 2018? Update it for 2019.
  3. Design. Is great information languishing on an ugly blog? Share the same information with stronger visuals or make the post easier to skim/navigate, especially on mobile.
  4. Depth. Go into more detail than the original post—turn a brief paragraph into a full section, substantiate claims with expert opinions or stats, etc.

The examples below focus on two methods: length and depth.

Example 1

“100+ Best Software Testing Tools Reviewed (Research Done for You!)” by QA Symphony

Word count: 14,295 | Backlinks: 570 | Referring domains: 80 (14% of all RDs) | Date of publication: 2016

This article is a great example of how to generate backlinks with a longer article. Totaling over 14,000 words, QA Symphony collates a list of more than 100 software testing tools. It accounts for 14% of all referring domains to their site.

Why did it succeed?
  • Clear, valuable comprehensiveness. This article is by far the most comprehensive list on the first three pages of search results for “best software testing tools.” It covers 100+ tools—more than twice as many as the next-longest list. Longer doesn’t always mean better, but it works for this topic.

What could they have done better?
  • Keep it updated. This article occupies the third organic spot for “best software testing tools.” Since its 2016 publication, it’s been beaten out by two newer (or at least more recently updated) articles—both of which reference 2019 in their page titles. Updating the title and adding a few new tools could push it to the top of the rankings and generate even more links.

Example 2

“What Is Account-Based Marketing? An ABM Definition” by Terminus

Word count: 1,213 | Backlinks: 247 | Referring domains: 79 (25% of all RDs) | Date of publication: 2016

This article from Terminus was published in November 2016 and has generated 247 backlinks from 79 referring domains. Even though it’s a few years old, it still ranks on the first page of Google for “what is abm.”

Why did it succeed?
  • Subheadings with related keywords. The subheadings in this article all contain other relevant keywords, helping this piece rank for lots of ABM-related searches, as well as making it easier for readers to scan and navigate. That pleases users and earns more “passive” links—citations that accumulate gradually from top rankings, not outreach. 
  • More in-depth than similar articles. I looked at other articles ranking for the same search term that were published between 2014 and 2016. For example, this article by Salesforce is shorter than the Terminus article (762 words vs. 1,213) and focuses almost entirely on what you do in account-based marketing, rather than how and why you’d do it. 

What could they have done better?
  • Keep it updated. One of the newer articles out-ranking the Terminus post is this article by HubSpot. Published in 2019, it also goes more in-depth, covering the same ground while adding a closing section that includes steps to launch an ABM campaign. 

Takeaway: The skyscraper technique works—if you have a plan.

If you’re looking to “steal” links from existing content, it’s not enough simply to create a post that’s longer/newer/better. To generate links for your new content, you need to invest time and effort in the outreach. To quote Ahrefs:

The key to successful execution of the Skyscraper Technique is email outreach. But instead of spamming every blogger you know, you reach out to those who have already linked to the specific content you improved upon. The idea is this: since they’ve already linked to a similar article, they are more likely to link to one that is better.

Something else to keep in mind: The initial bump you might get from outdoing others will only continue if you keep your content updated.

Heat maps are a popular conversion optimization tool, but are they really that useful?

It’s easy to say that they help you see what users are doing on your site. Sure, of course—but lots of other methods do that too, and perhaps with greater accuracy.

So what can heat maps answer?

What is a heat map?

Heat maps are visual representations of data. They were developed by Cormac Kinney in the mid-1990s to help traders beat financial markets.

In our context, they let us record and quantify what people do with their mouse or trackpad, then they display it in a visually appealing way.

“Heat maps” are actually a broad category that may include:

  1. Hover maps (mouse-movement tracking);
  2. Click maps;
  3. Attention maps;
  4. Scroll maps.

To make accurate inferences for any of the above heat-map types, you should have enough of a sample size per page/screen before you act on results. A good rule of thumb is 2,000–3,000 pageviews per design screen, and also per device (i.e. look at mobile and desktop separately). If the heat map is based on, say, 50 users, don’t trust the data.
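
If you want a quick gut check before trusting a heat map, the rule of thumb above is easy to encode. Here’s a minimal Python sketch—the 2,000-pageview threshold is just the heuristic from this section, not a statistical guarantee:

    # Gut check: is there enough traffic per device to trust this heat map?
    # The 2,000-pageview threshold is the rule of thumb above, not a hard law.
    MIN_PAGEVIEWS = 2000

    def heatmap_sample_ok(pageviews_by_device):
        """Return, per device, whether the heat map has enough traffic to trust."""
        return {device: views >= MIN_PAGEVIEWS
                for device, views in pageviews_by_device.items()}

    print(heatmap_sample_ok({"desktop": 3400, "mobile": 50}))
    # {'desktop': True, 'mobile': False} -> trust desktop, ignore mobile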

Since there are a few different types of heat maps, let’s go over each and the value they offer.

1. Hover maps (Mouse-movement tracking)

When people say “heat map,” they often mean hover map. Hover maps show you areas where people have hovered over a page with their mouse cursor. The idea is that people look where they hover, and thus it shows how users read a web page.

Hover maps are modeled on a classic usability testing technique: eye tracking. While eye tracking is useful for understanding how a user navigates a site, mouse tracking tends to fall short because it relies on stretched inferences.

The accuracy of mouse-cursor tracking is questionable. People might look at things they never hover over, and hover over things they barely pay attention to; either way, the heat map would be inaccurate. Maybe it’s accurate, maybe it’s not. How do you know? You don’t.

In 2010, Dr. Anne Aula, a Senior User Experience Researcher at Google, presented some disappointing findings about mouse tracking:

  • Only 6% of people showed some vertical correlation between mouse movement and eye tracking.
  • 19% of people showed some horizontal correlation between mouse movement and eye tracking.
  • 10% hovered over a link and then continued to read around the page looking at other things.

We typically ignore these types of heat maps. Even if you do look at one to see whether it supports your suspicions, don’t put too much stock in it. Guy Redwood at Simple Usability is similarly skeptical about mouse tracking:

We’ve been running eye tracking studies for over 5 years now and can honestly say, from a user experience research perspective, there is no useful correlation between eye movements and mouse movements – apart from the obvious looking at where you are about to click.

If there was a correlation, we could immediately stop spending money on eye tracking equipment and just use our mouse tracking data from websites and usability sessions.

That’s why Peep calls these maps “a poor man’s eye-tracking tool.”

With so little overlap between what these maps show and what users actually do, it’s tough to draw real insights. You end up telling stories to explain the images rather than uncovering truths. This blog post criticizing heat maps of soccer players’ movements puts it well:

“What do heat maps do? They give a vague impression of where a player went during the match. Well, I can get a vague impression of where a player went during a match by watching the game over the top of a newspaper.”

While some studies indicate higher correlations between gaze and cursor position, ask yourself if the possible insights are worth the risk of misleading data or encouraging confirmation bias in the analysis.

What about algorithm-generated heat maps?

Similarly, there are heat map tools that use an algorithm to analyze your user interface and generate a resulting visual. They take into account a variety of attributes: colors, contrast, visual hierarchy, size, etc. Are they trustworthy? Maybe. Here’s how Aura.org put it:

Visual Attention algorithms, where computer software “calculates” the visibility of the different elements within the image, are often sold as a cheaper alternative. But the same study by PRS showed that the algorithms are not sensitive enough to detect differences between designs, and are particularly poor at predicting the visibility levels of on-pack claims and messaging.

(Note: PRS, which conducted the study cited above, sells eye-tracking research services.)

While you shouldn’t fully place your trust in algorithmically generated maps, they’re not any less trustworthy than hover maps.

And, if you have lower traffic, algorithmic tools can give you some visual data for usability research, including instant results, which is cool. Some tools to check out:

Just because it’s “instant” doesn’t mean it’s magic. It’s a picture based on an algorithm—not actual user behavior.

2. Click Maps

Click maps are heat maps built from aggregated click data. Blue means fewer clicks, warmer reds indicate more clicks, and the areas with the most clicks show up as bright white and yellow spots.

There’s a lot of communicative value in these maps. They help demonstrate the importance of optimization (especially to non-optimizers) and what is and isn’t working.

Does a big photo that isn’t a link get lots of clicks? You have two options:

  1. Make it into a link.
  2. Don’t make it look like a link.

It’s also easy to take in aggregate click data quickly and see broad trends. Just be careful to avoid convenient storytelling.

However, you can also see where people click in Google Analytics, which is generally preferable. If you’ve set up enhanced link attribution, the Google Analytics overlay is great. (Some people still prefer to see a visual click map.)

And, if you go to Behavior > Site Content > All pages, and click on a URL, you can open up the Navigation Summary for any URL—where people came from and where they went afterward. Highly useful stuff.

3. Attention maps

An attention map is a heat map that shows which areas of the page are viewed the most within users’ browsers, taking into account horizontal and vertical scrolling, how far users scroll, and how long they spend on the page.

Peep considers attention maps more useful than other mouse-movement or click-based heat maps. Why? Because you can see if key pieces of information—text and visuals—are visible to almost all users. That makes it easier to design pages with the user in mind.

Here’s how Peep put it:

“What makes this useful is that it takes into account different screen sizes and resolutions, and shows which part of the page has been viewed the most within the user’s browser. Understanding attention can help you assess the effectiveness of the page design, especially the above-the-fold area.”

4. Scroll Maps

Scroll maps are heat maps that show how far people scroll down on a page. They can show you where users tend to drop off.

(Image source)

While scroll maps work for any length of page, they’re especially pertinent when designing long-form sales pages or longer landing pages.

Generally, the longer the page, the fewer people will make it all the way to the bottom. This is normal and helps you prioritize content: What’s a must-have? What’s just nice to have? Prioritize what you want people to pay attention to and put it higher on the page.

Scroll maps can also help you tweak your design. If the scroll map shows abrupt color changes, visitors may not perceive a connection between two elements of your page and treat the break as a “logical end,” assuming the page is over. These sharp drop-off points are hard to see in Google Analytics.

On longer landing pages, you might need to add navigation cues (e.g. a downward arrow) where the scrolling stops.

Bonus: User session replays

Session replays aren’t a type of heat map per se, but they are one of the most valuable bits that heat mapping tools offer.

User session replays allow you to record video sessions of people going through your site. It’s like user testing but without a script or audio. Also unlike user testing—in a positive way—the people you watch are risking actual money, so replays can be more insightful.

Unlike heat maps, this is qualitative data. You’re trying to detect bottlenecks and usability issues. Where are people not able to complete actions? Where do they give up?

One of the best use cases for session replays is watching how people fill out forms. Though you could configure Event tracking in Google Analytics, it wouldn’t provide the same level of insight as user session replays.

Also, if you have a page that’s performing badly and you don’t know why, user session replays may identify problems. You can also see how fast users read, scroll, etc.

Analyzing them is, of course, time-consuming. We spend half a day watching videos for a new client site. And after looking at hundreds (thousands?) of heat maps and reviewing other studies, we’ve identified some recurring takeaways from heat maps of all kinds.

19 things we’ve learned from heat-map tests

We’ve looked at a lot of heat maps over the years. So have other researchers. And while every site is different (our perpetual caveat), there are some general takeaways.

You should test the validity of these learnings on your site, but, at the very least, these generalized “truths” should give you an idea of what you can expect to learn from a heat map.

1. The content that’s most important to your visitors’ goals should be at the top of the page.

People do scroll, but their attention span is short. This study found that a visitor’s viewing time of the page decreases sharply when they go below the fold. User viewing time was distributed as follows:

  • Above the fold: 80.3%
  • Below the fold: 19.7%

The material that’s most important to your business goals should be above the fold.

In the same study, viewing time increased significantly at the very bottom of the webpage, which means that a visitor’s attention goes up again at the bottom of the page. Inserting a good call to action there can drive up conversions.

You should also remember the recency effect, which states that the last thing a person sees will stay on their minds longer. Craft the end of your pages carefully.

2. When in a hurry, what sticks out gets chosen.

A Caltech neuroscience study showed that at “rapid decision speeds” (when in a rush or when distracted), visual impact influences choices more than consumer preferences do.

When visitors are in a hurry, they’ll think less about their preferences and make choices based on what they notice most. This bias gets stronger the more distracted a person is and is particularly strong when a person doesn’t have a strong preference to begin with.

If the visual impact of a product can override consumer preferences—especially in a time-sensitive and distracting environment like online shopping—then strategic changes to a website’s design can seriously shift visitor attention.

3. People spend more time looking at the left side of your page.

Several studies have found that the left side of the website gets a bigger part of your visitors’ attention. The left side is also looked at first. There are always exceptions, but keeping the left side in mind first is a good starting point. Display your most important information there, like your value proposition.

This study found that the left side of the website received 69% of the viewing time—people spent more than twice as much time looking at the left side of the page as the right.

4. People read your content in an F-shaped pattern.

This study found that people tend to read text content in an F-shaped pattern. What does that mean? It means that people skim, and that their main attention goes to the start of the text. They read the most important headlines and subheadlines, but read the rest of the text selectively.

Your first two paragraphs need to state the most important information. Use subheadings, bullet points, and paragraphs to make the rest of your content more readable.

Note that the F-pattern style does not hold true when browsing a picture-based web page, as is evident in this study. People tend to browse image-based web pages horizontally.

5. Don’t lose money through banner blindness.

Banner blindness happens when your visitor subconsciously or consciously ignores a part of your webpage because it looks like advertising. Visitors almost never pay attention to anything that looks like an advertisement.

This study found no fixations within advertisements. If people need to get information fast, they’ll ignore advertising—and vice versa. If they’re completely focused on a story, they won’t look away from the content.

There are several ways to avoid creating banner blindness on your website. Most problems can be prevented by using a web design company that’s experienced in online marketing.

6. When using an image of a person, it matters where they look.

It makes sense to use people in your design—it’s a design element that attracts attention. But it also matters where their eyes are looking.

Several heat map studies have shown that people follow the direction of a model’s eyes. If you need to get people to focus not only on the model but also on the content next to her, make sure she’s looking at that content.

It’s also important to convey emotion. Having a person convey emotion can have a big impact on conversion rates. This study found that a person conveying emotion can have a larger impact on conversions than a calm person looking at the call to action.

Your best option may be to combine these two approaches—use an emotion-conveying person who’s also looking at the desired spot on the page.

7. Men are visual; women seek information.

When asked to view profiles of people on a dating site, this study found a clear difference between men and women. Men were more visual when looking at a profile of a person, focusing on the images; women tended to read more of the info provided.

In another study, men spent 37% more time looking at the woman’s chest than women did, whereas women spent 27% more time looking at the ring finger. The study concluded that “men are pervs, women are gold-diggers.”

8. Abandon automatic image carousels and banners for better click-through rates.


This study concluded that, on two sites where users had a specific task on their mind, the main banners were completely ignored, including the animated version. Automatic image carousels and banners are generally not a good idea. They generate banner blindness and waste a lot of space.

The same study found an exception to this rule on one of the sites—a banner on ASOS’s homepage captured the attention of participants better than banners on the other sites. How was it different? It looked less like a banner and was better integrated into the page.

9. Use contrast wisely to guide your visitors.

After testing a landing page with heat maps, TechWyse found out just how important color contrast is. A non-clickable, informational element about pricing on the homepage won the most attention because of its color contrast with the surrounding area.

After a slight redesign, the scanning patterns of visitors aligned with what the company needed.

10. 60-year-olds make twice as many mistakes as 20-year-olds.

When your target audience is elderly, make your website as easy to use and clutter-free as possible. When testing 257 respondents in a remote user test, the failure rate for tasks was 1.9 times greater for older users than for younger ones.

Email is one of the few marketing channels that spans the full funnel. You use email to raise awareness pre-conversion. To stay connected with content subscribers. To nurture leads to customers. To encourage repeat purchases or combat churn. To upsell existing customers.

Getting the right email to the right person at the right time throughout the funnel is a massive undertaking that requires a lot of optimization and testing. Yet, even some mature email marketing programs remain fixated on questions like, “How can we increase the open rate?” Moar opens! Moar clicks!

What about the massive bottom-line impact email testing can have at every stage of the funnel? How do you create an email testing strategy for that? It starts by understanding where email testing is today.

The current state of email testing

According to the DMA, 99% of consumers check their email every single day. (Shocking, I know.)

In 2014, there were roughly 4.1 billion active email accounts worldwide. That number is expected to increase to nearly 5.6 billion before 2020. In 2019, email advertising spending is forecasted to reach $350 million in the United States alone.

Despite the fact that email continues to thrive over 40 years after its inception, marketers remain fixated on top-of-funnel engagement metrics. 

According to research from AWeber:

  • 434 is the average number of words in an email.
  • 43.9 is the average number of characters in an email subject line.
  • 6.9% of subject lines contain Emojis.
  • 60% of email marketers use sentence case in subject lines.

According to benchmarks from Mailchimp:

  • 0.29% is the average email unsubscribe rate in the architecture and construction industry.
  • 1.98% is the average email click rate in the computers and electronics industry.
  • 0.07% is the average hard bounce rate in the daily deals and e-coupons industry.
  • 20.7% is the average open rate for a company with 26–50 employees.

But why are these statistics the ones we collect? Why do blog posts and email marketing tools continue to prioritize surface-level testing, like subject lines (i.e. open rate) and button copy (i.e. click rate)?

Email testing tools offer testing of basic elements that, quite often, fails to connect to larger business goals.

Why email testing often falls flat

Those data points from AWeber and Mailchimp are perhaps interesting, but they have no real business value.

Knowing that the average click rate in the computers and electronics industry is 1.98% is not going to help you optimize your email marketing strategy, even if you’re in that industry. 

Similarly, knowing that 434 is the average number of words in an email is not going to help you optimize your copy. That number is based on only 1,000 emails from 100 marketers. And, of course, there’s no causal link. Who’s to say length impacts the success of the emails studied?

For the sake of argument, though, let’s say reading that 60% of email marketers use sentence case in their subject lines inspired you to run a sentence case vs. title case subject-line test.

Congrats! Sentence case did in fact increase your open rate. But why? And what will you do with this information? And what does an open rate bump mean for your click rate, product milestone completion rates, on-site conversion rates, revenue, etc.?

A test is a test is a test. Regardless of whether it’s a landing page test, an in-product test, or an email test, it requires time and resources. Tests are expensive—literally and figuratively—to design, build, and run. 

Focusing on top-of-funnel and engagement metrics (instead of performance metrics) is a costly mistake. Open rate to revenue is a mighty long causal chain. 

If you’re struggling to connect email testing and optimization to performance marketing goals, it’s a sign that something is broken. Fortunately, there’s a step-by-step process you can follow to realign your email marketing with your conversion rate optimization goals.

The step-by-step process to testing email journeys

Whether you’re using GetResponse or ActiveCampaign, HubSpot or Salesforce, what really matters is that your email marketing tool is collecting and passing data properly.

Whenever you’re auditing data, ask yourself two questions:

  1. Am I collecting all of the data I need to make informed decisions?
  2. Can I trust the data I’m seeing?

To answer the first question, have your optimization and email teams brainstorm a list of questions they have about email performance. After all, email testing should be a collaboration between those two teams, whether an experimentation team is enabling the email team or a conversion rate optimization team is fueling the test pipeline.

Can your data, in its current state, answer questions from both sides? (Don’t have a dedicated experimentation or conversion rate optimization team? Email marketers can learn how to run tests, too.)

With email specifically, it’s important to have post-click tracking. How do recipients behave on-site or in-product after engaging with each email? Post-click tracking methods vary based on your data structure, but there are five parameters you can add to the URLs in your emails to collect data in Google Analytics:

  1. utm_source;
  2. utm_medium;
  3. utm_campaign;
  4. utm_term;
  5. utm_content.

Learn how to use these parameters to track email to on-site or in-product behavior here.
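
As an illustration, here’s a minimal Python sketch of tagging an email CTA link with those five parameters. The parameter values are made-up examples, not a recommended naming convention:

    # Minimal sketch: append UTM parameters to an email CTA link so post-click
    # behavior shows up in Google Analytics. Values below are made-up examples.
    from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

    def tag_email_link(url, campaign, term="", content=""):
        utm = {
            "utm_source": "newsletter",   # where the traffic comes from
            "utm_medium": "email",        # the channel
            "utm_campaign": campaign,     # the journey or send
            "utm_term": term,             # optional: segment or keyword
            "utm_content": content,       # optional: which CTA in the email
        }
        parts = urlparse(url)
        query = dict(parse_qsl(parts.query))
        query.update({k: v for k, v in utm.items() if v})
        return urlunparse(parts._replace(query=urlencode(query)))

    print(tag_email_link("https://example.com/pricing",
                         campaign="onboarding-day-3", content="primary-cta"))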

UTM parameters connect in-email behavior to on-site behavior. (Image source)

The second issue—data integrity—is more complex and beyond the scope of this post. (Thankfully, we have another post that dives deep into that topic.)

Once you’re confident that you have the data you need and that the data is accurate, you can get started.

1. Mapping the current state

To move away from open rate and click rate as core metrics is to move toward journey-specific metrics, like:

  • Gross customer adds;
  • Marketing-qualified leads;
  • Revenue;
  • Time-to-close.

By focusing on the customer journey instead of an individual email, you can make more meaningful optimizations and run more impactful tests.

The goal at this stage is to document and visualize as much as you can about the current state of the email journey in question. Note any gaps in your data as well. What do you not know that you wish you did know?

It all starts with a deep understanding of the current state of the email journey in question. You can use a tool like Whimsical to map it visually.

An example from Whimsical of how to map a user flow. While their example maps on-site behavior, a similar diagram works for email, too. (Image source)

 Be sure to include:

  • Audience data;
  • Subject line and preview text for each email;
  • Days between each email;
  • Data dependencies;
  • Automation rules;
  • Personalization points, and alternate creative and copy (if applicable);
  • Click rates for each call to action (CTA);
  • On-site destinations and their conversion rates (for email, specifically).

Really, anything that helps you achieve a deep understanding of who is receiving each email, what you’re asking them to do, and what they’re actually doing.
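
One lightweight way to capture that map alongside your diagram is as structured records you can later diff against the ideal state. The field names and numbers below are purely illustrative, not a standard schema:

    # Illustrative only: the current-state map captured as data so it can be
    # compared against the ideal state later. Field names and numbers are made up.
    from dataclasses import dataclass, field

    @dataclass
    class EmailStep:
        subject_line: str
        preview_text: str
        days_after_previous: int
        audience_segment: str
        automation_rule: str = ""
        cta_click_rates: dict = field(default_factory=dict)        # CTA label -> click rate
        destination_conv_rates: dict = field(default_factory=dict) # URL -> conversion rate

    journey = [
        EmailStep(
            subject_line="Welcome to your trial",
            preview_text="Three ways to get started",
            days_after_previous=0,
            audience_segment="new-trial-signups",
            automation_rule="send on signup",
            cta_click_rates={"Launch a Virtual Machine": 0.042},
            destination_conv_rates={"/getting-started": 0.18},
        ),
        # ...one record per email in the sequence
    ]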

Take this email from Amazon Web Services (AWS), for example:

There are a ton of different asks within this email. Tutorials, a resource center, three different product options, training and certification, a partner network—the list goes on.

Your current state map should show how recipients engage with each of those CTAs, where each CTA leads, how recipients behave on-site or in-product, etc. Does the next email in the sequence change if a recipient chooses “Launch a Virtual Machine” instead of “Host a Static Website” or “Start a Development Project,” for example?

Your current state map will help answer questions like:

  • How does the email creative and copy differ between segments?
  • Who receives each email and how is that decision made?
  • Which actions are recipients being asked to take?
  • Which actions do they take most often?
  • Which actions yield the highest business value?
  • How frequently are they asked to take each action and how quickly do they take it on average?
  • What other emails are these recipients likely receiving?
  • What on-site and in-product destinations are email recipients being funneled to?
  • What gaps exist between email messaging and the on-site or in-product messaging?
  • Where are the on-site holes in the funnel?
  • Can post-email, on-site, or in-product behavior tell us anything about our email strategy?

2. Mapping the ideal state

Once you know what’s true now, it’s time to find optimization opportunities, whether that’s an obvious fix (e.g. an email isn’t displaying properly on the iPhone 6) or a test idea (e.g. Would reducing the number of CTAs in the AWS email improve product milestone completion rates?).

There are two methods to find those optimization opportunities:

  1. Quantitatively. Where are recipients falling out of the funnel, and which conversion paths are resulting in the highest customer lifetime value (CLTV)?
  2. Qualitatively. Who are the recipients? What motivates them? What are their pain points? How do they perceive the value you provide? What objections and hesitations do they present?

The first method is fairly straightforward. Your current state map should present you with all of the data you need to identify holes and high-value conversion paths.
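
For the quantitative side, here’s a rough sketch of what that analysis might look like, assuming you’ve exported per-recipient journey data. The column names (“recipient_id,” “stage,” “entry_path,” “cltv”) are assumptions about your export, not a standard schema:

    # Rough sketch of the quantitative method: where do recipients fall out of
    # the funnel, and which entry paths produce the highest CLTV?
    import pandas as pd

    df = pd.read_csv("email_journey_export.csv")  # hypothetical per-recipient export

    # Drop-off by stage: how many unique recipients reach each step of the journey.
    stage_order = ["delivered", "opened", "clicked", "visited_site", "converted"]
    reached = df.groupby("stage")["recipient_id"].nunique().reindex(stage_order)
    step_conversion = reached / reached.shift(1)  # share surviving each step
    print(reached)
    print(step_conversion)

    # Highest-value paths: average CLTV of converters, by the path that brought them in.
    cltv_by_path = (df[df["stage"] == "converted"]
                    .groupby("entry_path")["cltv"]
                    .agg(["count", "mean"])
                    .sort_values("mean", ascending=False))
    print(cltv_by_path.head())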

The second method requires additional conversion research. (Read our comprehensive guide to conducting qualitative conversion research.)

Combined, these two methods will give you a clear idea of your ideal state of the email journey. As best you can, map that out visually as well.

How does your current state map compare to your ideal state map? They should be very different. It’s up to you to identify and sort those differences:

  • Insights. What did you learn during this entire journey mapping process that other marketers and teams will find useful?
  • Quick fixes. What needs to be fixed or implemented right away? This is a no-brainer that doesn’t require testing.
  • Test ideas. What needs to be tested before implementation? This could be in the form of a full hypothesis or simply a question.
  • Data gaps. What gaps exist in your measurement strategy? What’s not being tracked?

3. Designing, analyzing, and iterating

Now it’s time to design the tests, analyze the results, and iterate based on said results. Luckily, you’re reading this on the CXL blog, so there’s no shortage of in-depth resources to help you do just that:

Short on time? Read through our start-to-finish post on A/B testing.

Common pitfalls in email testing—and how to avoid them

1. Testing the email vs. the journey

It’s easier to test the email than the journey. There’s less research required. The test is easier to implement. The analysis is more straightforward—especially when you consider that there’s no universal customer journey.

Sure, there’s the nice, neat funnel you wax poetic about during stakeholder meetings: session to subscriber, subscriber to lead, lead to customer; session to add to cart, cart to checkout, checkout to repeat purchase. But we know that linear, one-size-fits-all funnels are a simplified reality.

The best customer journeys are segmented and personalized, whether based on activation channel, landing page, or onboarding inputs. Campaign Monitor found that marketers who use segmented campaigns report as much as a 760% increase in revenue.

When presented with the choice of running a simple subject line A/B test in your email marketing tool or optimizing potentially thousands of personalized customer journeys, it’s unsurprising many marketers opt for the former. 

But remember that email is just a channel. It’s easy to get sucked into optimizing for channel-level metrics and successes, and to lose sight of that channel’s role in the overall customer journey.

Now, let’s say top-of-funnel engagement metrics are the only email metrics you can accurately measure (right now). You certainly wouldn’t be alone in that struggle. As marketing technology stacks expand, data becomes siloed, and it can be difficult to measure the end-to-end customer journey.

Is email testing still worth it, in that case?

It’s a question you have to ask yourself (and your data). Is there an inherent disadvantage to improving your open rate or click rate? No, of course not (unless you’re using dark patterns to game the metrics). 

The question is: is the advantage big enough? Unless you have an excess of resources or are running out of conversion points to optimize (highly unlikely), your time will almost certainly be better spent elsewhere.

2. Optimizing for the wrong metrics

Optimization is only as useful as the metric you choose. Read that again.

All of the research and experimentation in the world won’t help you if you focus on the wrong metrics. That’s why it’s so important to go beyond boosting your open rate or click rate, for example. 

It’s not that those metrics are worthless and won’t impact the bigger picture at all. It’s that they won’t impact the bigger picture enough to make the time and effort you invest worth it. (The exception being select large, mature programs.)

Val Geisler of Fix My Churn elaborates on how top-of-funnel email metrics are problematic:

Most people look at open rates, but those are notoriously inaccurate with image display settings and programs like Unroll.me affecting those numbers. So I always look at the goal of the individual email. 

Is it to get them to watch a video? Great. Let’s make sure that video is hosted somewhere we can track views once the click happens. Is it to complete a task in the app? I want to set up action tracking in-app to see if that happens. 

It’s one thing to get an email opened and even to see a click through, but the clicks only matter if the end goal was met.

You get the point. So, what’s a better way to approach email marketing metrics and optimization? By defining your overall evaluation criterion (OEC).

To start, ask yourself three questions:

  1. What is the tangible business goal I’m trying to achieve with this email journey?
  2. What is the most effective, accurate way to measure progress toward that goal?
  3. What other metric will act as a “check and balance” for the metric from question two? (For example, a focus on gross customer adds without an understanding of net customer adds could lead to metric gaming and irresponsible optimization.)

In “Advanced Topics in Experimentation,” Ronny Kohavi of Microsoft explains how an experience at Amazon taught him that engagement metrics are easy to game:

The question is what OEC should be used for these programs? The initial OEC, or “fitness function,” as it was called at Amazon, gave credit to a program based on the revenue it generated from users clicking-through the e-mail.

There is a fundamental problem here: the metric is easy to game, as the metric is monotonically increasing: spam users more, and at least some will click through, so overall revenue will increase. This is likely true even if the revenue from the treatment of users who receive the e-mail is compared to a control group that doesn’t receive the e-mail.

Eventually, a focus on CLTV prevailed:

The key insight is that the click-through revenue OEC is optimizing for short-term revenue instead of customer lifetime value. Users that are annoyed will unsubscribe, and Amazon then loses the opportunity to target them in the future. A simple model was used to construct a lower bound on the lifetime opportunity loss when a user unsubscribes. The OEC was thus 

Where 𝑖 ranges over e-mail recipients in Treatment, 𝑗 ranges over e-mail recipients in Control, and 𝑠 is the number of incremental unsubscribes, i.e., unsubscribes in Treatment minus Control (one could debate whether it should have a floor of zero, or whether it’s possible that the Treatment actually reduced unsubscribes), and unsubscribe_lifetime_loss was the estimated loss of not being able to e-mail a person for “life.”
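
Pulling that together, the OEC likely takes roughly this form (a reconstruction of the idea, not Kohavi’s exact notation):

    \text{OEC} = \sum_{i \in \text{Treatment}} \text{revenue}_i - \sum_{j \in \text{Control}} \text{revenue}_j - s \cdot \text{unsubscribe\_lifetime\_loss}

In words: incremental click-through revenue in Treatment versus Control, penalized by the estimated lifetime loss from incremental unsubscribes.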

Using the new OEC, Ronny and his team discovered that more than 50% of their email marketing programs were negative. All of the open-rate and click-rate experiments in the world wouldn’t have addressed the root issue in this case.

Instead, they experimented with a new unsubscribe page, which defaulted to unsubscribing recipients from a specific email program vs. all email communication, drastically reducing the cost of an unsubscribe.

Amazon learned that creating multiple lists (rather than a single “unsubscribe”) was key to increasing CLTV.

3. Skimping on rigor

Email marketing tools make it easy to think you’re running a proper test when you’re not.

Built-in email testing functions are the equivalent of on-site testing tools flashing a green “significant” icon next to a test to signal it’s done. (We know that’s not necessarily true.)

Email tests require the same amount of rigor and scientific integrity as any other test, if not more. Why? Because there are many little-known nuances to email as a channel that don’t exist on-site, for example.
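
Before trusting a tool’s green “significant” badge, it’s worth running the numbers yourself. Here’s a minimal sketch of a standard two-proportion z-test for an open-rate split; this is generic statistics, not any particular email tool’s method, and the numbers are hypothetical:

    # Minimal two-proportion z-test for an email A/B split (e.g., open rates).
    # A standard statistical check, not any particular email tool's algorithm.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_p_value(opens_a, sends_a, opens_b, sends_b):
        """Two-sided p-value for the difference in open rates between variants."""
        p_a, p_b = opens_a / sends_a, opens_b / sends_b
        p_pool = (opens_a + opens_b) / (sends_a + sends_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
        z = (p_a - p_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Hypothetical numbers: 10,000 sends per variant.
    print(two_proportion_p_value(opens_a=2150, sends_a=10000, opens_b=2040, sends_b=10000))
    # ~0.056 -> an apparent lift, but not conclusive at the conventional 0.05 threshold.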

Val sees companies calling (and acting upon) email tests too soon and allowing external validity threats to seep in:

Too many people jump to make changes too soon. Email should be tested for a while (every case varies, of course), and no other changes should be made during that test period.

I have people tell me they changed their pricing model or took away the free trial or did some other huge change in the midst of testing email campaigns. Well, that changes everything! Test email by itself to know if it works before changing anything.

Has your company’s customer retention rate increased, decreased, or maintained the status quo over the past five years?

Are you actively working on retention? Have you outlined and initiated a formal customer retention strategy?

A study by Harvard Business School found that increasing customer retention by even 5% can increase profits by 25–95%. And yet, the 2019 CMO Survey found that nearly half of CMOs don’t expect to improve retention this year.

Compare that to the more than two-thirds of CMOs who expect increased customer acquisition, increased purchase volume, and more effective cross-selling:

That’s too bad. Because a Manta report found that 61% of small businesses surveyed indicated that more than half of their revenue came from repeat customers. Furthermore, the study found that repeat customers spend 67% more than new customers.

I can safely say that about three-quarters of my clients did not have a formal strategy to retain and cultivate their current customers prior to hiring us.

Many also felt that they understood their industry, target markets, and trends. However, when we conducted our research, we found plenty of areas that had evolved or changed.

Opinions don’t get people back. Understanding the data about what they need to return does. To develop your customer retention strategy, follow this four-phase process:

  1. Research your customers to find out what they need most.
  2. Develop the product, site, and offers based on existing customer feedback.
  3. Evaluate whether a loyalty or rewards program will drive repeat business.
  4. Make your retention strategy personal.
1. Research your customers to find out what they need most.

A HubSpot survey found that companies that put data at the core of their marketing/sales decisions improved marketing ROI by 15–20%.

The same article noted that companies spending 30% more time analyzing marketing performance data earned 3X higher open rates and 2X higher click-through rates for email. (Email marketing should be about far more than opens and clicks, though.)

So what kind of marketing data should you analyze if you want to improve customer retention?

  • UX data. If the shopping experience is full of friction, why would anyone return?
  • Email performance. What happens (or doesn’t happen) in post-purchase emails to convert first-time buyers into repeat buyers?
  • Customer service. Poor customer service is why 82% of U.S. customers leave a business, so customer service scores might be where you start.

Improving each of these areas might boost your customer retention. But only scrutinizing your customer-facing metrics and direct customer feedback will provide you with your top priorities.

Of course, you could always work backward to find the source(s) of the problem, too.

Do you understand why your customers are leaving?

Some 68% of customers stop doing business with a company due to feeling that the company was indifferent toward them:

Forget about your company for a second and think about the companies you do business with:

  • How many do you feel actually care about you?
  • What attempts do they make to collect your feedback or offer incentives for you to come back?
  • Do you think they even notice when you leave?

Alex Turnbull of GrooveHQ shares the simple three-line email they send to newly lost customers, asking: “Why did you cancel?”

“As a founder, one of the most painful things in the world to hear is criticism of your baby. Especially sharp, stinging criticism from a customer that you’ve now let down […]

There’s no way around it, it still sucks […] But actively collecting and leveraging that feedback has become one of the most important drivers for continuous improvement at Groove.”

As much as it sets you up for negative feedback, an exit survey can provide extra insight into how to improve your product, your service, or your overall offer.

Groove’s open-ended survey provided insight into:

  • Specific issues active customers weren’t telling them about;
  • Hangups in the user experience;
  • Workflow inefficiencies for use cases they hadn’t considered.

What’s more, an A/B test of the message—changing “Why did you cancel?” to “What made you cancel?”—drove the response rate to nearly 19%.

“Since we’ve started doing open-ended exit surveys eight months ago, we’ve been able to make a lot of positive changes and fixes to Groove. Retention, along with many of our usage metrics, have improved as a result of some of these changes.

We’ve even started testing recovery campaigns for former customers whose issues we’ve fixed; I’ll write about that in a future post, but the early results are very promising.”

This is just one of the many reasons we’ve emphasized creating feedback loops. A system that provides insight automatically can help prioritize major issues to reduce churn, increase customer lifetime value, and support customer retention.

Canceling customers, of course, are far from the only group that will provide insight.

2. Develop the product, site, and offers based on existing customer feedback.

Your existing customer base will tell you a lot about what they need and want to keep coming back.

For instance, HubSpot Ideas is a forum for feature requests—users can submit and upvote ideas, helping HubSpot understand which development projects may have the highest existing demand.

Without these methods of collecting feedback, future improvements would be mostly guesswork, severely reducing the chances of solving the critical problems that keep users engaged and coming back (not to mention that forums like HubSpot’s yield qualitative insights for free).

It’s not just SaaS sites that can take advantage of customer feedback forums and in-line customer support either.

Case study: How Terminix used customer feedback to recover $20 million in lost revenue

Terminix is the world’s largest pest control company, with more than 2.8 million customers spread across 47 U.S. states and 11 countries.

Over the years, their acquisition campaigns have succeeded by combining humor with a serious tone that lets you know that they take pest control seriously.

But as successful as their acquisition strategies were, too many customers cancelled—they were losing a third of their clients (or approximately $60 million) annually.

They hired Chief Outsiders to analyze data from exiting customers along with other customer satisfaction information. They discovered issues stemming from three areas:

  • Service quality;
  • Communication;
  • Customer expectations.

In response, the company initiated a new training program for employees that focused on retaining customers and overcoming easy objections. They also incorporated a satisfaction survey program to gain a fresh perspective on new customers’ needs and desires.

This also led to a change in Terminix’s product offering—offering quarterly and annual programs instead of their previous “monthly only” service.

The result? Customer turnover dropped by one third, which translated into approximately $20 million recovered in annual revenue.

3. Evaluate whether a loyalty or rewards program will drive repeat business.

Beyond delivering a better customer experience based on feedback, what are other ways to increase customer retention?

A loyalty program might seem like a no-brainer, and, increasingly, companies are adopting them—some 86% of small businesses had them in 2018, up from just 66% a few years ago.

For many, the benefits of a loyalty program might include:

But it’s not as simple as tacking on a loyalty program and expecting customers to start “living” at your store. (Indeed, according to the 2018 study above, only one in four companies with a loyalty program enrolls at least half their customers.)

For industries with thin profit margins, offering an incentive like 2% off isn’t very enticing, and in many verticals such an offer might require a significant lift in sales to break even.

The major issue with many loyalty and rewards programs is that there’s no real differentiation—nothing there to make the customer feel special. As a result, it’s easy to take it or leave it.

Perhaps that’s why Amazon Prime has been so successful.

The benefits to members continue to increase (the image above is ample evidence), and buyers have rewarded Amazon with their loyalty. The gap in spending between a Prime and non-Prime member is remarkable:

Starbucks might also be onto something with Starbucks Rewards. By using their loyalty card to make purchases on their website or at a store, you earn “Star Points.”

The more points you earn, the better perks you get:

From Starbucks’ perspective, I imagine this is a pretty significant win, because many of the rewards are offered only after the company has made a good profit off the customer.

Further, they get tons of data about what you buy and when you buy it. The “custom” offers you get can easily be personalized prompts to encourage you to go back when, algorithmically, your loyalty appears to waver.

If you’re looking to start a loyalty program, you’d better run the numbers—in granular detail—first. Then, when you roll out the program, start by targeting your most frequent buyers. Listen to their feedback and develop the program from there.

After all, the Starbucks Rewards program came out of the company’s own forum:

Greg Ciotti has written an excellent article on creating sticky loyalty programs, and HubSpot has a great overview of various reward programs.

4. Make your retention strategy personal.

Appealing to your customers’ emotions and making every customer feel like they truly matter goes a long, long way. According to research by Peppers & Rogers Group, most customer behavior is emotionally based:

  • 60% of all customers stop dealing with a company because of what they perceive as indifference on the part of salespeople.
  • 70% of customers leave a company because of poor service.
  • 80% of defecting customers describe themselves as “satisfied” or “very satisfied” just before they leave. (Surveys alone won’t tell you everything.)

In your retention efforts, focus your communications with existing customers around how they would like to be viewed.

Greg Ciotti (again!) talks about the concept of implicit egoism. As it relates to consumerism, it’s the idea that brand choices are tied to personal identity.

Purchasing a luxury vehicle like a Mercedes-Benz, for example, is a status symbol and makes the customer feel more elite.

Their customers are so elite that if they want to get into a new Mercedes when their lease is expiring, the company will simply waive four payments, and the customer will get the new car right away.

Urban Outfitters, on the other hand, uses typography and design to communicate a real hipster vibe.

Its newsletter doesn’t just deliver sales and promotions; it also includes videos and music from obscure bands.

That level of emotional design aims to build a sense of community, belonging, and attachment to the brand. The emotional connection, in turn, makes it easier and more enjoyable to buy from those stores instead of competitors.

And if you’re wondering how a company like Urban Outfitters builds a retention strategy based on brand and emotional connection, just read how they describe their customers.

Conclusion

As with so many successful marketing strategies, extensive research into your customers’ behaviors and demands can help identify the best retention strategies for your business.

One key difference for retention efforts is to put more emphasis on committed, existing customers and those who recently left.

While it may take extra effort, low turnover will save you time and money in the long run. Remember, making the sale is only the first step.

Every company dreams about creating high-performing teams. For us at OWOX, that dream centered on our analytics department, which included 12 specialists—junior analysts, mid-level analysts, senior analysts, and QA specialists.

Collectively, our analysts were responsible for consulting clients on data analysis and improving our marketing analytics tool. While our company focuses on analytics, our challenge was not unique—many in-house marketing departments and agencies struggle to measure and improve the efficiency of their teams.

In theory, our analytics department would work seamlessly to make the whole business profitable. In real life, it often struggled for constant improvement—new people, new business goals, and new technologies disrupted steady progress.

How could we get our analytics team to spend more time doing the things they were best at, as well as those that were most valuable to the company? Here’s the start-to-finish process we implemented to benchmark team performance and then use that data to increase efficiency.

Our baseline: Mapping the ideal analyst workload

What is the most effective mix of responsibilities for each analyst position under perfect conditions? Our first step was to diagram the ideal division of work for our analytics team:

So, for example, in our company, we expected senior analysts to spend:

  • 45% of their time on tasks from clients; 
  • 30% of their time on management and coaching;
  • 10% of their time on tech and business education;
  • 10% of their time on process development;
  • 5% of their time on internal tasks.

This ideal task distribution, as we later learned, was far from reality. That gap resulted from eight key challenges faced by our team. 

8 ways our analytics team was struggling 

A dream team can’t be gathered at once; it can only be grown. Analysts in our analytics department expect to grow professionally and be given a lot of challenging tasks.

To deliver on that promise of professional growth, we had to confront eight key problems facing our team:

1. Inefficient task distribution for each position

At some point, everybody gets sucked into a routine and doesn’t ask if the current way is the only way to do their work efficiently:

  • Our senior analysts had no time to teach and coach new employees, but they also had no time for managerial tasks because they were overloaded with client work.
  • Our mid-level analysts didn’t have enough time for R&D and improving their skills.
  • Our junior analysts were just studying all the time. We weren’t passing them real tasks that would give them hands-on work experience.

Each of these realizations became clear after we visualized the gap between expectations and reality (detailed in the next section).

2. No measurement of efficiency for each team member

We all knew that the ideal workload above was just a model. But how far from this model were we? We didn’t know how much time a particular employee spent in meetings, worked on client tasks, or was busy with R&D.

We also didn’t know how efficiently each analyst performed a task compared to the rest of the team.

3. Incorrect task time estimates

We couldn’t estimate precisely the time needed for each task, so we sometimes upset our clients when we needed more time to finish things.

4. Repeating mistakes

Whenever a junior analyst had to solve a complicated task for the first time, they made the same predictable mistakes. Those mistakes, in turn, had to be identified and corrected by their mentor, a senior analyst, before the tasks could enter production.

Even if they didn’t make any mistakes, it took them longer to complete the task than it would take a mid-level or senior analyst.

5. Unintentional negligence

Sometimes, client emails would get lost, and we exceeded the response time promised in our service-level agreement (SLA).  (According to our SLA, our first response to a client email has to be within four hours.)

6. Speculative upsells

We knew how much time we spent on each task for the client. But this data wasn’t aligned with the billing information from our CRM and finance team, so our upselling was based only on gut feeling.

Sometimes it worked; sometimes it failed. We wanted to know for sure when we should try to upsell and when we shouldn’t.

7. Generic personal development plans

We had the same personal development plan for every analyst, regardless of strengths and weaknesses. But development plans can’t be universal and effective at the same time. 

For our analysts, personalization of development plans was key to faster growth. 

8. Lack of knowledge transfer

Our senior analysts were swamped with work and had no time to pass their skills and knowledge to their junior colleagues. The juniors grew slowly and made lots of mistakes, while seniors had nobody to pass tasks and responsibilities to.

It was clear we had plenty of room to improve, so we decided to bring all the necessary data together to measure the efficiency of our analysts. Let’s look through these steps in detail.

How we measured the performance of our analytics team

This process started by defining the problems and questions outlined above. To answer them, we knew that we would need to capture before-and-after metrics. (Top of mind were the words of Peter Drucker: “You can’t manage what you can’t measure.”)

Here are the four steps we took to gather the necessary data and create a real-time dashboard for our analytics team.

1. Identify the sources of the data.

Since most of our questions connected to analyst workloads, we gathered data from the tools they were using:

  1. Google Calendar. This data helped us understand how much time was spent on internal meetings and client calls.
  2. Targetprocess. Data from our task-management system helped us understand the workload and how each of the analysts managed their tasks.
  3. Gmail. Email counts and response statuses gave us information about analysts, projects, and overall correspondence with clients and the internal team. It was significant for monitoring SLA obligations. 

2. Pull the necessary data and define its structure.

We gathered all data from those sources into Google BigQuery using Google Apps Script. To translate data into insights, we created a view with the fields we needed.

Here’s a table showing the fields we pulled into the view:

Our key fields were analyst, date, and project name. These fields were necessary to merge all the data together with correct dependencies. Once the data was ready, we could move on to the dashboard.
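
For a sense of what this looks like in practice, here’s a simplified sketch of querying such a merged view from Python once the Calendar, Targetprocess, and Gmail data has landed in BigQuery. The dataset, table, and column names are assumptions, not our actual schema:

    # Simplified sketch: querying a merged workload view in BigQuery.
    # Dataset, table, and column names are assumptions, not our actual schema.
    from google.cloud import bigquery

    client = bigquery.Client()

    query = """
        SELECT
          analyst,
          project_name,
          task_type,
          SUM(hours_spent)   AS hours_spent,
          SUM(meeting_hours) AS meeting_hours,
          AVG(sla_met)       AS sla_share   -- share of emails answered within SLA
        FROM `analytics_ops.analyst_workload_view`
        WHERE date BETWEEN @start AND @end
        GROUP BY analyst, project_name, task_type
    """
    job = client.query(query, job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start", "DATE", "2019-01-01"),
            bigquery.ScalarQueryParameter("end", "DATE", "2019-03-31"),
        ]
    ))
    for row in job.result():
        print(row.analyst, row.project_name, row.task_type, row.hours_spent)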

3. Prototype the dashboard.

Don’t try to make a dashboard with all the metrics you can imagine. Focus on the essential metrics that will answer your questions—build an MVP, not a behemoth.

Typically, best practices of dashboard prototyping are to:

  • Define the essential metrics that will answer your questions.
  • Ensure that KPI calculation logic is extremely transparent and approved by the team. 
  • Prototype on paper (or with the help of prototyping tools) to check the logic.

4. Build the dashboard.

We used Google Data Studio because it’s handy, is a free enterprise-level tool, and integrates easily with other Google products.

In Data Studio, you can find templates designed for specific aims and summaries, and you can filter data by project, analyst, date, and job type. To keep the operational data current, we updated it on a daily basis, at midnight, using Apps Script. 

Let’s look closer at some pages of our dashboard.

Department workload page

We visually divided this page into several thematic parts:

  • Project;
  • Task distribution by role;
  • Time spent by task type.

With this dashboard, we could see how many projects we had at a given time in our analytics department. We could also see the status of these projects—active, on hold, in progress, etc.

Task distribution by role helped us understand the current workload of analysts at a glance. We could also see the average, maximum, and minimum time for each type of task (education, case studies, metrics, etc.) across the team.

Analyst workload page

This page told us what was happening inside the analytics team—time spent by analyst, by task, and by the whole team:

  • Time spent on tasks and meetings;
  • Percentage of emails answered according to the SLA;
  • Percentage of time spent on each task by a given analyst;
  • Time that a given analyst spent on tasks compared to the team average.

This was useful to understand how much time tasks usually took and whether a specialist could perform a task more efficiently than a junior-level analyst.

Project workload page

This page analyzed the efforts of the whole team and individual analysts at the same time. Metrics included:

  • Tasks across all projects or filtered by project;
  • Time spent on meetings and tasks;
  • Share of emails answered according to the SLA;
  • Statistics for an individual project (with the help of filters);
  • Average, minimum, and maximum time for each type of task in a project.

It also included the analyst and backup analyst for each project, as well as the number of projects managed by a given analyst:

We can’t show you all of our dashboards and reports because some contain sensitive data. But with this dashboard in place, we:

  1. Realized that the workload of an analyst is far from what we expected and that average values can hide our growth zones.
  2. Proved that most of our analysts (~85%) answered emails on time.
  3. Mapped the typical tasks that we ran into, how long it usually takes to accomplish them, and how the time for each particular task can vary.
  4. Found weaknesses and strengths for each analyst to customize their personal development plan.
  5. Found areas for automation.

The number of dashboards isn’t as important as the changes we made using them; those changes translated our measurements into strategies for team improvement.

Acting on our data to improve the analytics team

Let’s have a closer look at how we used the dashboard to begin to solve some of the problems we mentioned above.

Improving task distribution for each team member

When we compared the real task distribution with the ideal task distribution, we were, shall we say, disappointed. It was far from perfect.

Our senior analysts worked on client tasks 1.5 times more than planned, and our junior analysts were studying almost all the time without practicing their skills.

We started to improve the situation with a long process of task redistribution. And after some time, we saw improvement:

While everything looked better in the dashboard, we still had room to grow.

By aligning everything to the average values, we were trapped in a typical stats problem: treating the average as the real-world scenario. The average is a mathematical entity, not a reflection of real life. In real life, there’s nothing more blinding than focusing on the average.

When we drilled down to a particular role or analyst, the data looked quite different. Here, for example, we have data for Anastasiia, a senior analyst. On the left is the ideal, in the middle is the average, and on the right is her personal distribution:

The picture changed dramatically between the senior-analyst average and the reality for Anastasiia. Her time spent on client tasks was much higher than it should have been, and almost no time was spent coaching new employees.

That could be for multiple reasons:

  • Anastasiia is overloaded with client tasks. In this case, we need to take some of her tasks and pass them to another analyst.
  • Anastasiia didn’t fill out the task management system properly. If this is the case, we need to draw her attention to its importance.
  • Anastasiia might not be a fan of her managerial role. If that's the case, we need to talk and figure it out.

We redistributed some of Anastasiia’s tasks and discussed the bottlenecks that were eating the biggest part of her time. As a result, her workload became more balanced.

If we had only looked at the average stats for the department, we never would’ve solved the problem.

Automation and knowledge transfer to minimize mistakes

We had lots of atypical work in our department, which made it hard to predict how long tasks would take to complete (and which mistakes would appear).

We started improving our task estimation process by classifying and clustering tasks using tags in our task management system, such as R&D, Case Study, Metrics, Dashboards, and Free (for tasks we didn’t charge for).

When analysts created a new task, they had to define its type using tags. Tagging helped us measure which jobs we ran into most often and decrease repeated mistakes by automating typical reports.

Below, you can see a dashboard showing the minimum, maximum, and average time spent on different types of jobs, as well as their frequency:

This helped us estimate the time required for typical tasks and became a basis for estimating unusual ones. The average served as a useful starting estimate for a new client, and the outliers helped us understand how much extra time additional features might take.
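
If your task tracker can export tagged tasks along with the time logged against them, the underlying numbers are easy to compute yourself. Here's a minimal sketch in TypeScript; the record shape is illustrative, not our actual Targetprocess schema:

```typescript
// Minimal sketch. Each record is a finished task with its tag and the hours
// logged against it; field names are illustrative.

interface TaskRecord {
  tag: string;        // e.g. 'R&D', 'Case Study', 'Metrics', 'Dashboards', 'Free'
  hoursSpent: number;
}

interface TagStats {
  tag: string;
  count: number;
  minHours: number;
  maxHours: number;
  avgHours: number;
}

function statsByTag(tasks: TaskRecord[]): TagStats[] {
  const groups = new Map<string, number[]>();
  for (const task of tasks) {
    const bucket = groups.get(task.tag) ?? [];
    bucket.push(task.hoursSpent);
    groups.set(task.tag, bucket);
  }
  return [...groups.entries()]
    .map(([tag, hours]) => ({
      tag,
      count: hours.length,
      minHours: Math.min(...hours),
      maxHours: Math.max(...hours),
      avgHours: hours.reduce((sum, h) => sum + h, 0) / hours.length,
    }))
    // Most frequent task types first, since those are the first candidates
    // for guides and automation.
    .sort((a, b) => b.count - a.count);
}
```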

We also looked closely at the most frequent tasks and those that had the maximum time spent. To eliminate mistakes in these tasks, our first step was to write detailed guides on how to perform each task.

For example, the guide for creating a cohort analysis report included:

  • Initial data;
  • Business objectives;
  • Limitations;
  • Patterns;
  • Self-checks;
  • What to pay attention to.

These guides helped pass along knowledge and avoid typical mistakes. But we also had to deal with unintentional mistakes.

Automation can help prevent recurring, minor errors. We built (and sell) our own tool to automate reports, like the example below for CPAs:

We got rid of hundreds of unintentional mistakes and the never-ending burden of fixing those mistakes; boosted our performance and total efficiency; and saved loads of time for creative tasks.

Decreasing unintentional negligence

Client tasks take approximately half our analysts’ time. Even so, sometimes something goes wrong, and answers to important emails from clients are delayed beyond the four-hour commitment in our SLA.

This dashboard helped us monitor analyst adherence to our SLA commitments:

When we recognized that the percentage of responses within four hours wasn’t perfect, we created notifications in Slack to serve as reminders.

To activate a reminder, an analyst sent a status (described below) to a separate email account without copying the client. Here’s the list of statuses we developed for the system of reminders:

Our analysts got notifications in Slack if the SLA time for a response was almost over, or if they had promised to write an email “tomorrow”:
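
Under the hood, a reminder like this is just a deadline check plus a Slack incoming webhook. Here's a minimal sketch in TypeScript; the webhook URL, the 30-minute warning window, and the data shape are illustrative, and the parsing of status emails that feeds the list of open client emails is omitted:

```typescript
// Minimal sketch. SLACK_WEBHOOK_URL, WARN_AT_MINUTES_LEFT, and OpenEmail are
// illustrative; populating `openEmails` from the status emails is omitted.

const SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/...'; // hypothetical
const SLA_HOURS = 4;
const WARN_AT_MINUTES_LEFT = 30; // nudge analysts half an hour before the deadline

interface OpenEmail {
  analyst: string;   // who owes the reply
  client: string;
  receivedAt: Date;
}

async function remindAboutExpiringSla(openEmails: OpenEmail[], now = new Date()): Promise<void> {
  for (const email of openEmails) {
    const deadline = new Date(email.receivedAt.getTime() + SLA_HOURS * 60 * 60 * 1000);
    const minutesLeft = (deadline.getTime() - now.getTime()) / 60_000;
    if (minutesLeft > 0 && minutesLeft <= WARN_AT_MINUTES_LEFT) {
      await fetch(SLACK_WEBHOOK_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          text: `${email.analyst}: about ${Math.round(minutesLeft)} minutes left to answer ${email.client} within the ${SLA_HOURS}-hour SLA.`,
        }),
      });
    }
  }
}
```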

Personal development plans

When an analyst created a task in Targetprocess, they estimated the time needed based on their previous experience (“effort”). Once they’d finished the task, they entered how much time was actually spent.

Comparing these two values helps us find growth zones and define the difficulty of execution:

For example, suppose an analyst spent much more time than average on a task with the Firebase tag. If that’s caused by low technical knowledge, we’ll add Firebase to their personal development plan.
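
Mechanically, finding those growth zones boils down to a spent-to-estimate ratio per analyst and tag. A minimal sketch, with an illustrative 1.5x threshold:

```typescript
// Minimal sketch. Assumes tasks exported from the task management system with
// the analyst's own estimate ("effort") and the actual time spent; the 1.5x
// threshold is illustrative.

interface EstimatedTask {
  analyst: string;
  tag: string;          // e.g. 'Firebase'
  effortHours: number;  // estimate entered when the task was created
  spentHours: number;   // actual time logged on completion
}

interface GrowthZone {
  analyst: string;
  tag: string;
  ratio: number;        // actual time / estimated time
}

// Flag analyst/tag pairs where actual time is at least `threshold` times the
// estimate - candidates for a personal development plan topic.
function findGrowthZones(tasks: EstimatedTask[], threshold = 1.5): GrowthZone[] {
  return tasks
    .map(t => ({ analyst: t.analyst, tag: t.tag, ratio: t.spentHours / t.effortHours }))
    .filter(zone => zone.ratio >= threshold)
    .sort((a, b) => b.ratio - a.ratio);
}
```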

By analyzing analysts’ efficiency on the individual level—while focusing on the educational opportunity—we solved our problem of tarring all analysts with the same brush for development plans.

Now, each specialist had a genuinely relevant, step-by-step guide for self-improvement to help them grow faster.

Conclusion

We still have some questions to dig into in our department. Launching analytics for a real-life team is an iterative process.

Where will we go next? Fortunately, we have strong analytical instruments in our hands to help not only our clients but also ourselves. As you look at your situation, here are key takeaways:

  • The sooner, the better. Collecting, merging, and preparing data is about 75% of your efforts. Make sure that you trust the quality of the data you’re collecting.
  • Start with an MVP dashboard. Focus on critical KPIs. Pick no more than 10 metrics.
  • Define what you’re going to do if a metric changes dramatically at 5 p.m. on Friday. You should have a plan if a metric rises or falls unexpectedly. If you have no idea why you should have such a plan for a certain metric, think over whether you need to track it at all. 
  • An average is just an average. Look at the extremes. Challenge the average when it comes to managing and developing people.
  • Use transparent and easily explained algorithms. Make sure your team understands the logic behind the algorithms and is okay with it, especially if KPIs influence compensation.
  • It’s easier to automate tracking than to make people log time. But you shouldn’t make it look like you’re spying on the people working for you. Discuss all your tools and steps for improvement with the team.

The post How to Create a High-Performing Analytics Team appeared first on CXL.


Every nonprofit that accepts online donations has a donation page. But there’s a big difference between having a donation page and having an effective donation page.

Your donation page may follow purported “best practices,” but you could still be losing donors and revenue. In fact, our experience running over 1,500 online fundraising A/B tests has shown that traditional “best practices” are rarely the most effective way to increase donations.

In light of this, I want to share strategies—based on our research and experimentation, not just assumptions—that have proven to increase conversions, donations, and revenue. Often, these tactics go beyond or, in some cases, contradict popular “best practices.”

1. Choose the right type of donation page.

One of the most common mistakes that new online fundraisers make is assuming that a single donation page is sufficient. In reality, donors come to your donation page with a huge variety of motivations. 

If you send all of your traffic to a single donation page, you’ll likely see poor results. But if you utilize three types of donation pages—general, campaign, and instant—you’ll be able to align your pages with the motivations of your donors. Let’s cover each type in more detail.

General donation page

The general donation page on your website is your primary donation page. Every organization that accepts donations online has one. But to optimize this page, you must understand that visitors to your general page will always have a wide variety of reasons for giving.

To make this page more effective, keep these ideas in mind:

  1. Use copy to communicate why someone should give using broad reasons, rather than focusing on a specific project or fund designation.
  2. Keep your message clear and concise, using bullets.
  3. Offer a free gift for a specific giving level to drive up conversions.

A general donation page case study

In the experiment below, the organization began with a donation page (left) that was virtually devoid of copy. They had one small line of text in red that said: “Together, we’re writing the next chapter of Illinois’ comeback story.”

Many fundraisers assume that general donation page visitors are already motivated to give, and so they neglect to add much copy that explains why giving is important.

But in reality, even highly motivated donors have the potential to abandon your page. In fact, according to M+R’s 2018 benchmark report, 83% of all donation page visitors leave without giving.

The organization below tested a new version of their donation page that included a lot more copy. The updated page:

  • Explained what the organization did in broad terms.
  • Used bold text and headers to make it easily scannable.
  • Included a call-to-action headline with a specific donation ask.

The result? The new version of the donation page led to a 150% increase in donations.

Campaign donation page

It’s not enough to send all of your traffic (whether via email, advertising, etc.) to your general donation page. You need to create a dedicated campaign donation page for specific donation appeals.

Dedicated campaign pages work because your donation ask is made in a particular context. For example, if you’re raising money to build a new building and you send your potential donors an email about it, your campaign donation page copy needs to focus on that specific project. 

If you focus on the broad reasons why your organization is great (as you would on your general donation page), donors won’t be confident that their money is going to the right place. They also won’t have a full understanding of the impact their gift can make.

In the example below, this organization converted their general donation page into a dedicated campaign donation page by making five distinct changes. This new page resulted in a 50% increase in revenue.

When creating a campaign page, keep these key ideas and page elements in mind:

  • Write copy that’s specific to your campaign, not broad generalizations about your organization as a whole.
  • Add a progress bar to show how close you are to reaching your campaign goal.
  • Add a countdown clock to visualize your campaign deadline and create urgency.
  • Avoid using videos. (If you don’t believe me, check out this experiment. Or this one. Or this one.)

Even small changes make a big impact on campaign pages

Optimizing your campaign donation pages can often come down to small copy variations and nuanced language. What may seem like an insignificant change to you may significantly impact the impression that your page makes on your potential donor.

In the experiment below, an organization tested a new headline on their campaign donation page. The change wasn’t drastic, but the impact was.

The original headline read, “You Make Kelly’s Website Possible,” emphasizing the organization’s broader cause—providing websites to keep family and friends connected for people going through a health crisis.

A new version of the page used a slightly different headline: “This Website Helps Kelly Stay Connected to Family and Friends.” The shift was subtle, but significant. For the new version, the emphasis shifted to the impact the donation had on the individual goal (human connection) rather than the organizational goal (websites).

The result? The more specific headline led to a 21.1% increase in donations.

Instant donation pages

The instant donation page is the least common donation page. In fact, it flies in the face of traditional thinking about online donor acquisition. 

Rather than trying to acquire a subscriber and then waiting months and months to cultivate them, the instant donation page focuses on converting new subscribers into donors right away.

Here’s how it works:

  1. Use a free offer (ebook, course, petition, etc.) to acquire an email address.
  2. Make a donation ask right away on your confirmation page.
  3. Make your donation ask in the context of the free offer they’ve just received.
  4. Include a donation form right on the confirmation page.

For example, one organization running Facebook Ads targeted likely supporters with a call-to-action of “Donate Now.” They conducted an A/B test with a version of the ad that offered a free online course. After enrolling in the course, the student was presented with an instant donation page.

The “Donate Now” ads saw an abysmal 0.46% click-through rate and brought in zero donations. On the other hand, the instant donation page model increased clicks by 209% and started converting new donors right away—at a 1.18% conversion rate with an average gift size of $58.33.

The conversion rate is low. But many organizations using this instant donation page model make back all of their advertising costs, plus some. While most organizations plan to spend money on acquisition, this model can help you recoup some of your costs, or even make money on acquisition.

2. Friction can be your biggest donation killer.

It’s impossible to remove every element of friction from your donation page. Friction may include:

  • Filling out form fields;
  • Errors on your page;
  • Confusing page layouts;
  • Unnecessary required fields.

Some elements of friction are always present. For instance, you can’t make an online donation without requiring payment info. 

But there are some elements of friction that you can reduce to create a better giving experience and increase the likelihood of someone making an online donation.

Field number friction

Field number friction is one of the most common barriers—too many fields, asking for unnecessary information, etc.

In the example below, you can see how too many fields make a donation form feel overwhelming and can cause a potential donor to abandon the process altogether.

A few common fields that we see on many donation pages are unnecessary to complete a donation:

  • Gift designations;
  • “Make this gift in memory of…”;
  • Titles (like Mr., Mrs., Ms., Dr., etc.).

Field number friction all comes down to perception. In many cases, you can keep the same number of fields but group them together in a logical fashion to make the page appear shorter.

A shorter form (usually) makes someone perceive donating as less work, even if it has the same fields.

Decision friction

Decision friction occurs when you ask a donor a question that they’re not informed enough to answer. Or, in some cases, decision friction can be caused by simply giving too many options for someone to choose from.

In the example below, you see one of the most common ways that decision friction shows up on the donation page: gift designation.

While there are many reasons why an organization may want each donor to designate how to spend their gift, most donors aren’t informed enough to know how to answer this question.

Easy solutions are to:

  • Not require a gift designation;
  • Default the gift designation field to “Where most needed”;
  • Remove the field on campaign donation pages.

Registration friction

Another common way we often make things more difficult is through registration friction. Registration friction occurs when you ask a donor to create an account or log in just to make a donation.

Logging in might make things easier for the organization in terms of data tracking and gift processing, but it makes the donation experience much more difficult and frustrating for the donor—and can lead them to abandon their donation.

3. But making it “easy” to donate doesn’t guarantee you’ll get more donations.

A common refrain in non-profit meeting rooms is, ”We just need to make it as easy as possible for someone to donate.”

While there’s an element of truth to that, removing all friction from the donation process can cause more harm than good. The common practice to “make things easier” can dilute the impact of the most important element on your donation page: your value proposition

If your only goal is to get people to the donation form faster, you won’t ask the donor to read about your organization before entering their payment info. But if you remove elements of your page that strengthen the reasons why someone should give to you, you risk losing donors.

An experiment in copy length

Fundraisers and nonprofit marketers often ask, “How long should my donation page copy be?” After running hundreds of A/B tests, we’ve learned that the length of your copy isn’t nearly as important as how effectively your copy communicates your value proposition.

In the experiment below, this organization had a short amount of copy on their original donation page. One might think this makes it “easier” for the donor because there’s less to read.

They created a new version of the page that added a considerable amount of copy. But the primary change was that the length of copy gave them more opportunity to explain why someone should donate.

The result? Adding more value-focused copy led to a 134% increase in donations, despite making the page significantly longer.

An experiment with “Donate” buttons

The donation shortcut button usually sits in the header on your donation page, anchored to the donation form at the bottom of the page. Functionally, when you click the button, it jumps you past all the copy and right to the form.

The argument for these shortcut buttons seems sound: “If someone is ready to give, why slow them down by making them read a bunch of copy?”

In the experiment below, this organization added the button in hopes that it would lead to greater donations. The result? Allowing donors to bypass the copy by clicking the button led to a 28% decrease in donations.

Although the shortcut button made it “easier” to get right to the transaction, it made it harder for donors to understand the impact their donation would have.

Those results may not hold true for every site, but it’s a cautionary tale about blindly following “best practices” or focusing solely on making donating “easier.”

Conclusion

All of these tactics come down to a single skill, which is the one that all successful online fundraisers must develop: empathy.

If you can’t put yourself in the shoes of your donors and potential donors, you’re going to make decisions based on your personal preferences or your organization’s preferences. But in most cases, what fundraisers want and what donors want are very different.

Thankfully, testing and experimentation allows us to listen to donors and see exactly what works to inspire greater generosity—and leads to greater donations and revenue.

To increase conversions on your donation pages, consider:

  • Creating multiple donation pages to serve specific audiences;
  • Removing form fields that aren’t essential to complete a donation;
  • Expanding copy, if necessary, to communicate your value proposition more effectively.

The post Donation Pages: 3 Essential Ways to Improve Conversions appeared first on CXL.


Many SaaS companies launch a product-led growth model—but never update it. When the executive team calls me and asks why they aren’t converting users into customers, I tell them to buy a plant. Seriously.

If they don’t water the plant, it’s going to wither and die. If they water it and give it sunlight, it’ll grow. Everyone knows how the system works. Yet, even though we know what to do, millions of plants still die. Why? Nobody takes ownership.

The first step to SaaS growth is to appoint a person or team to take ownership, then to give them the resources or time it takes to thrive. While this is true for all SaaS companies, it’s especially critical for those that use their product—not traditional marketing or sales—as their growth engine.

Once there’s internal ownership, you can establish and iterate on a process to execute your product-led strategy. How do you optimize that process? Time and again, I’ve seen the “Triple A” sprint framework drive exponential SaaS growth. 

What is the “Triple A” sprint framework?

The “Triple A” sprint focuses on rapidly identifying problems, building solutions, and measuring impact. The process follows a one-month sprint cycle to identify and deliver an improvement to your SaaS product.

The Triple A framework consists of three “A’s”:

  1. Analyze; 
  2. Ask; 
  3. Act. 

The Triple A sprint gives you a way to build a sustainable SaaS growth process and can be used by any team in your business. 

Still, if you have a bad product, no optimization will deliver rocketship growth. On the other hand, if you have a good product that customers love, you’ll see a monumental shift if you go through a Triple A sprint each month.

I’ve seen companies apply this same framework and go from $500,000 in annual recurring revenue (ARR) to $1 million ARR in less than 12 months. It works. Best of all, it’s not hard to implement. Start by analyzing your business.

The first “A”: Analyze

As Romain Lapeyre, CEO of Gorgias, states, “In order to build a growth machine for your business, you need to analyze your inputs and outputs.”

Until you know which inputs (e.g. trade shows, advertising) drive your desired outputs (e.g. ARR, customers, MRR), you won't build a sustainable business.

If you’re not sure which inputs drive the outputs you want, start analyzing your business.

Create a recurring calendar notification to remind yourself to analyze your previous month’s results on the first workday of each new month. Block off one or two hours so that you’ll have the time to do a thorough job. You’ll get into a rhythm of analysis. 

Start by measuring your outputs. Outputs are a reliable indicator of whether you’re doing the right thing—they don’t lie. Let’s dive into the right outputs to track.

Which outputs should you track?

One of the beautiful things about a SaaS business is that you can analyze almost anything. This amount of insight is incredible—until it isn't. With access to countless metrics, it's easy to obsess over vanity numbers like bounce rates. Although these metrics can be tracked, they don't tell you much.

Did your high bounce rate lead customers to churn? Or did it hurt signups? Although a high bounce rate can absolutely contribute to those problems, we still don’t know the root cause.

By looking at outputs, we can quickly analyze the area of our business that most requires our attention. That way, we know which areas to troubleshoot.

In a product-led business, these are the macro outputs you need to track:

  • Number of signups;
  • Number of upgrades;
  • Average Revenue Per User (ARPU);
  • Customer Churn;
  • ARR;
  • Monthly recurring revenue (MRR).

These outputs don’t lie, and they’re easy to find. If you compare these outputs over the course of the last 12 months, you’ll quickly identify the area of your business that’s hurting the most. 

Once we know the outputs, we can ask questions to identify the inputs that get us closer to our dream business.

The second “A”: Ask

To optimize any business, you need to ask three questions: 

1. Where do you want to go?

Some businesses use a North Star Metric to symbolize this focus, while others pick a revenue number. How you break down your business goals is not what this post is about.

If you really have no idea what your organization’s goals are, you should read Measure What Matters by John Doerr. It lays the foundation for how to prioritize the metrics that matter in your business and hit them across your entire team.

As an example, let’s say we’re a $10 million ARR SaaS business that has a live-chat solution. Our numeric objective is to hit $15 million ARR in the next 12 months. I’m all for setting ambitious goals, but please do not just “wing it” when it comes to figuring out what to do next. You need to know which levers to pull.

2. Which levers can you pull to get there?

I’m taking a motorbike course. As a newbie, I’m constantly making mistakes. I’ll shift down a gear when going fast, and my bike will screech and hiss in anger. I’ll use the front brake while slowing around a corner, toppling my bike onto me—an anti-climatic end that risks embarrassment more than injury.

Knowing which levers to pull matters for SaaS businesses and motorbikes alike. What's also true is that there are multiple ways to get the same output. To stop a motorbike, you can use the front brake, the back brake, or the engine brake. Or just drive into the nearest lake. Each of these braking systems achieves the desired output.

It’s the same when it comes to your business. According to Jay Abraham’s multiplier theory, there are three levers you can pull for SaaS growth:

  1. Churn;
  2. ARPU;
  3. Number of customers.

When I talk to executives at product-led SaaS businesses, most focus almost exclusively on increasing the number of customers; however, when it comes to increasing ARPU or decreasing churn, I hear crickets.

This is a huge missed opportunity, according to Tomasz Tunguz:

(Image source)

Drew Sanocki, former CMO at Teamwork, found that decreasing his churn rate by 30%, increasing ARPU by 30%, and increasing total customers by only 30% increased lifetime value (LTV) by over 100%.

Breaking down your business by these three levers lets you quickly identify which ones will help your business grow fastest. Unless you’re just starting out, reducing churn and increasing ARPU will almost always have the biggest impact. Once you nail your churn and ARPU, you can start multiplying your business with each additional customer.

Want to see how it works? Create a chart like the one below in a spreadsheet to see which lever will have the biggest impact on your business. 

Metric            | Scenario A              | Scenario B | Difference
Customer Count    | Current (e.g. 1,000)    |            | 0%
ARPU              | Current (e.g. 100)      |            | 0%
Annual Churn Rate | Current (e.g. 20%)      |            | 0%
ARR               | Current (e.g. $80,000)  |            | 0%
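
If you'd rather script it than build the spreadsheet, here's a minimal sketch in TypeScript using the simplified model the example numbers imply (ARR = customer count × ARPU × (1 - annual churn rate)). Swap in your own Scenario B values to test each lever:

```typescript
// Minimal sketch. Assumes the simplified model implied by the example numbers:
// ARR = customer count * ARPU * (1 - annual churn rate).

interface Scenario {
  customers: number;
  arpu: number;            // annual revenue per user, e.g. 100
  annualChurnRate: number; // e.g. 0.20
}

const arr = (s: Scenario): number => s.customers * s.arpu * (1 - s.annualChurnRate);

// Scenario A is where you are today; Scenario B pulls one lever at a time.
function compare(a: Scenario, b: Scenario): void {
  const diff = (x: number, y: number) => `${(((y - x) / x) * 100).toFixed(1)}%`;
  console.log(`Customer count:    ${a.customers} -> ${b.customers} (${diff(a.customers, b.customers)})`);
  console.log(`ARPU:              ${a.arpu} -> ${b.arpu} (${diff(a.arpu, b.arpu)})`);
  console.log(`Annual churn rate: ${a.annualChurnRate} -> ${b.annualChurnRate} (${diff(a.annualChurnRate, b.annualChurnRate)})`);
  console.log(`ARR:               $${arr(a)} -> $${arr(b)} (${diff(arr(a), arr(b))})`);
}

compare(
  { customers: 1000, arpu: 100, annualChurnRate: 0.2 }, // Scenario A: current
  { customers: 1000, arpu: 130, annualChurnRate: 0.2 }, // Scenario B: +30% ARPU
);
// ARR: $80000 -> $104000 (30.0%)
```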

Once you’ve identified the top lever, it’s time to brainstorm which inputs will kick your business into high gear. 

3. Which inputs should you invest in?

Once you’ve identified the lever to focus on for your Triple A sprint, figure out which inputs will affect it. To help you find the right ones, look at the three most common reasons why businesses fail:

  1. You don’t understand your value.
  2. You aren’t communicating your value well enough.
  3. You aren’t delivering on your value fast enough.

Ask yourself: Which part of your business is underperforming? Brainstorm potential inputs to run experiments. This is easier said than done, but don’t overthink it. If you’re struggling with low signups, do customer research to understand the value your buyer perceives. Then, communicate that value to them. 

If you’re struggling with low upgrade rates, work on delivering your value. Cut out every piece of onboarding that doesn’t deliver value. As Samuel Hulick cautioned, “People don’t use software simply because they have tons of spare time and find clicking buttons enjoyable.”

One way to find opportunities to improve the buying experience is to buy your product once a month. You’ll quickly spot easy improvements. Too often, we set up our onboarding and assume it works without a hitch. (It doesn’t.)

I’ve done countless user onboarding audits and found embarrassing bugs that were cratering free-to-paid conversion rates. Anyone could’ve spotted these bugs. Compile a list of items that could improve your product experience. Filter these ideas. How you do it doesn’t matter as much as having a defined process.

How to prioritize inputs

As Scott Williamson, VP of Product Management at GitLab, implored, “Have a consistent prioritization system, so you can compare the value of very different projects, force priority decisions out into the light, and pressure test assumptions.”

I use an Input Log as a prioritization system. It helps you track and prioritize every idea that could help your business grow. Then, I use the ICE prioritization method, developed by Sean Ellis, to score each input on three elements:

  1. Impact. How big of an impact could this input have on an output I want to improve?
  2. Confidence. How confident am I that this input will improve my output metrics?
  3. Ease. How easy is it to implement?

Here’s an example of what this could look like:

Inputs | Impact | Confidence | Ease | ICE Score
Because we noticed quite a few customers having problems upgrading, we expect that adding an "Upgrade Now" button to the header of our in-app experience will make it easier for users to upgrade their account. We'll measure this by monitoring if the signup-to-paid conversion rate improves. | 5 | 5 | 3 | 13
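
In code, the Input Log is just a list of scored ideas. Here's a minimal sketch in TypeScript that mirrors the example row above, where the ICE score is the simple sum of the three ratings; the scoring scale and the hypothesis text are illustrative:

```typescript
// Minimal sketch of an Input Log with ICE scoring. The scale and hypothesis
// text are illustrative; the ICE score here is the sum of the three ratings,
// matching the example row above.

interface InputIdea {
  hypothesis: string;  // what you'll change and how you'll measure it
  impact: number;      // how big a lift this could have on the output you care about
  confidence: number;  // how sure you are that it will work
  ease: number;        // how easy it is to implement
}

const iceScore = (idea: InputIdea): number => idea.impact + idea.confidence + idea.ease;

const inputLog: InputIdea[] = [
  {
    hypothesis:
      'Add an "Upgrade Now" button to the in-app header; measure the signup-to-paid conversion rate.',
    impact: 5,
    confidence: 5,
    ease: 3,
  },
  // ...log every idea from the brainstorm here.
];

// Highest score first; the top one or two ideas become this month's sprint work.
const prioritized = [...inputLog].sort((a, b) => iceScore(b) - iceScore(a));
prioritized.forEach(idea => console.log(`${iceScore(idea)}  ${idea.hypothesis}`));
```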

You can use any framework you want; however, if you don’t have an existing prioritization system, start with the ICE score framework. It’s easy to understand and implement.

Once you’ve run through the ICE method to filter your ideas, find the one or two opportunities to implement that will have the biggest impact on your business. Now, it’s time to act.

The third “A”: Act 

Ideas are easy. Execution is everything. As Henry David Thoreau said, “It’s not enough to be busy, so are the ants. The question is, are we busy doing the right things?” 

Once you’ve chosen the one or two ideas you’re going to implement this month, launch the idea. Depending on the ease of each project, this could take you and your team a few hours or a few weeks. 

If this is your first time going through a Triple A sprint, start small. Get some quick wins under your belt. Typically, this means choosing an input that is easy to implement and has a moderate-to-high estimated impact. Later, you can take bigger swings that require more resources and time.

Kieran Flanagan, VP of Marketing at HubSpot, took a similar approach when helping HubSpot transition from a sales-led to a product-led business:

Here’s the high-level process that worked for our growth team:

Get wins on the board to build trust with leadership and other teams, such as product and engineering.

Prioritize growth experiments you can execute quickly to demonstrate results. 

Once you start to see a high-level of test failures or non-results, move on to tackle more complex growth opportunities (take big swings).

Eventually, tell your CEO you want to test pricing ;-) (take even bigger swings)

If you already work in growth, this process of getting quick wins and laddering up should be familiar.

In aggregate, even small wins can become big wins. If initial growth is incremental, the pattern of success can earn buy-in for your more ambitious ideas. 

Conclusion

Process beats tactics. Following the Triple A sprint framework puts you on track to grow your SaaS business consistently:

  • Analyze your business and key metrics.
  • Ask where you want to go and how you can get there.
  • Act on those insights, starting with small wins.

In a market where, over the last five years, customer acquisition costs have increased more than 50% while willingness to pay is down 30%, we need to instill a culture of optimization.

If we can, we’ll be able to pull the right levers and put our business in high gear.

 

This post was adapted from a chapter in Product-Led Growth. Wes Bush also teaches our course on product-led growth.

The post SaaS Growth: The “Triple A” Sprint Framework that Gets Results appeared first on CXL.
