Which popular beauty and cosmetics website has the best user experience?

This is a conversion-focused benchmark analysis of four competing beauty and cosmetics websites: Sephora, Clinique, Lush, and Fresh.

We selected sites from among the top 25 in their category, according to Alexa rankings. Our analyses measure the overall user experience of each site as a composite of 5 separate UX dimensions.

This post details our process, the results, and the takeaways for businesses in this and similar verticals.

What is competitive UX benchmarking and why does it matter?

Generally speaking, benchmarking is a way to discover the best performance one can achieve. Benchmarking is widely used to test and compare organizations and products in a particular industry.

Competitive UX benchmarking tests aspects of a website and compares them to competitors. The testing is done manually, by real people, who are specially recruited for this purpose.

If you’re optimizing a website to improve user experience, affect product or brand positioning, or generally change user behavior, competitive UX benchmarking helps you understand how your website is perceived among the sea of options available to consumers.

5 dimensions of UX benchmarking
  1. Appearance. How does the site’s look-and-feel compare to its competitors?
  2. Clarity. How do users perceive the site’s value and benefits compared to its competitors?
  3. Credibility. Do users trust the site more or less than its competitors?
  4. Loyalty. Are users more likely to return to a site or to its competitors?
  5. Usability. Do users think the site is more or less usable?

Benefits of UX benchmarking

When your revenue stream depends on your website, you need to be able to answer two questions:

  • Is your UX putting you behind your competition? If so, where?
  • How and where should you prioritize your website testing?

UX benchmarking helps to identify the areas where the user experience could be improved:

  • Is the message clear enough?
  • How do users perceive clarity compared to general usability?
  • Are users able to find what they are looking for?
  • Is credibility an issue?

(CXL Agency regularly performs these analyses for clients, helping to prioritize issues and develop testing hypotheses.)


Our UX Benchmarking Method

The CXL benchmarking methodology has been developed during a series of studies conducted by our CXL Institute UX Research Team in collaboration with the CXL Agency.

We first teamed up with Jeff Sauro and his team at MeasuringU to benchmark 5 bicycle websites. We later modified the methods to include conversion-focused metrics to benchmark 5 nutrition websites, plus many more as the basis for our ecommerce best-practice guidelines.

This study is in the same vein of work published by TryMyUI and Userzoom. The CXL methodology is described in more detail on our Competitive UX benchmarking page.

Competitive UX benchmarking process

CXL benchmarking uses a standardized process to calculate a conversion-focused, validated survey metric.

We ask participants to perform concrete tasks on a target website and collect feedback about their experience immediately after task completion.

Tests are remote and unmoderated; participants perform tasks on their own mobile device. Each person is asked to test all the websites under comparison (within-subjects study).

For this study, we recruited and surveyed 108 people (55 men, 53 women).

The participants were asked to browse the test website as if they were shopping for, in this case, beauty items.

Testing the UX

Here is one of the tasks the participants were asked to perform:

  1. Find a lipstick for $25 or less.
  2. Once found, compare it to similar ones and choose the one you would like to buy. Add it to the shopping cart.
  3. Imagine you want to buy something as a gift for your friend. Find an item you think they would like and add it to the shopping cart.
  4. Go to your cart and complete the purchase.
    Credit Card#: 1111 2222 3333 4444 CVV: 111 Expiration Date: 06/01/2017.
  5. Your task will end when you see an error message after submitting the proxy credit card information.

Task completion was confirmed via URL validation.
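As an illustration, URL validation can be as simple as checking the final URL a participant reached against a per-site pattern for the end-of-task (payment error) page. A minimal sketch in Python; the site keys and URL patterns here are hypothetical:

```python
import re

# Hypothetical patterns for the page shown after submitting the proxy
# credit card (the "error" page that ends the task on each site).
COMPLETION_PATTERNS = {
    "sephora": re.compile(r"sephora\.com/.*checkout.*(error|declined)", re.I),
    "lush": re.compile(r"lush\.com/.*checkout.*(error|declined)", re.I),
}

def task_completed(site: str, final_url: str) -> bool:
    """Return True if the participant's final URL matches the site's
    expected end-of-task page."""
    pattern = COMPLETION_PATTERNS.get(site)
    return bool(pattern and pattern.search(final_url))
```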

After testing, we administered a survey to each participant.

Collecting user feedback

In our standard benchmarking process, the survey is composed of statements and questions covering the following aspects:

  1. Appearance
    • I found the website to be attractive.
    • The website has a clean and simple presentation.
  2. Clarity (our additional question, not part of SUPR-Q)
    • I clearly understand why I should buy from this website instead of its competitors.
  3. Credibility (Trust)
    • I feel comfortable purchasing from this website.
    • I feel confident conducting business with this website.
  4. Loyalty
    • How likely are you to recommend this website to a friend or colleague?
    • I will likely visit this website in the future.
  5. Usability
    • The website is easy to use.
    • It is easy to navigate within the website.

Participants evaluated each statement or question on a 10-point Likert scale.
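To make the scoring concrete, here is a minimal sketch of how responses like these could be rolled up into per-dimension scores and a single global UX score. The item keys and the equal-weight averaging are assumptions for illustration, not CXL's exact formula:

```python
from statistics import mean

# One participant's 1-10 ratings, keyed by survey item (names assumed).
responses = {
    "attractive": 8, "clean_simple": 9,          # Appearance
    "clear_value": 7,                            # Clarity
    "comfortable": 9, "confident": 8,            # Credibility
    "recommend": 9, "return": 8,                 # Loyalty
    "easy_to_use": 9, "easy_to_navigate": 8,     # Usability
}

DIMENSIONS = {
    "appearance": ["attractive", "clean_simple"],
    "clarity": ["clear_value"],
    "credibility": ["comfortable", "confident"],
    "loyalty": ["recommend", "return"],
    "usability": ["easy_to_use", "easy_to_navigate"],
}

# Average the items within each dimension, then average dimensions
# into one global UX score (equal weighting assumed).
dimension_scores = {d: mean(responses[i] for i in items)
                    for d, items in DIMENSIONS.items()}
global_score = mean(dimension_scores.values())
```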

Additionally, the survey contained two open-ended questions:

  • Was there anything that frustrated you about your experience on the website you performed the task on?
  • What did you like about the website experience?

From the open-ended questions, we extracted specific feedback to formulate hypotheses on how to improve the websites.

Why we chose this process
  • Conversion-focused: Developed with a UX dimension quantifying users’ perception of a website’s value proposition, or why they should consider a site compared to its competition;
  • Quantitative: based on 100+ user data points for each site;
  • Generalizable and transferable: applicable to any website—ideal for relative context and understanding how scores relate to each other when measuring before-and-after design changes, or comparisons to competitors;
  • Multidimensional: includes the main factors for measuring website user experience and the general quality of a website;
  • Standardized, normalized, and validated: developed through extensive testing on a massive user-testing database (see the peer-reviewed paper on its foundation);
  • Repeatable: ideal for quantifying a baseline for comparison against later design changes.
Understanding the data

With the data from the survey questions across participants, we calculate percentile ranking for each website on each UX dimension subcomponent and for a global metric.

Note: Percentile rankings are not only relative to each other, but also to the 84 sites currently in our database. The figures below represent sample visualizations of the data.
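A percentile ranking simply locates a site's score within the distribution of scores already in the database. A sketch, assuming the database is just a list of global UX scores:

```python
def percentile_rank(score: float, database: list[float]) -> float:
    """Percent of sites in the database scoring at or below `score`."""
    return 100 * sum(s <= score for s in database) / len(database)

# Example: a site with a mean global UX score of 7.9, ranked against
# purely illustrative scores for sites already in the database.
database_scores = [6.2, 7.1, 7.4, 7.9, 8.3, 8.8]
print(percentile_rank(7.9, database_scores))  # ~66.7
```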

The battle results

1. Appearance: How does the site’s look and feel compare to its competitors?

The statements we used to evaluate Appearance were:

  • I found this website attractive.
  • The website had a clean and simple presentation.
Takeaways

All sites scored above average compared to other sites in our database. This suggests a higher barrier-to-entry or increased expectations among users in this space.

Sub-dimensions and insights

Based on the evaluative statements for each category, we can infer sub-dimensions in the form of questions. Qualitative feedback offers common descriptions that help answer each:

1.1. What makes a beauty or cosmetics website attractive?

Bold, high-contrast colors and large, high-resolution product images:

I loved their overlay and their packaging. The items looked gorgeous. (Sephora)

The website was easy to navigate and visually appealing. Looked high-end and was presented nicely. (Fresh)

1.2. What makes a beauty or cosmetics website unattractive?

Cluttered, noisy, distracting design:

Too many adds, pop-ups are terrible, too much marketing while I was trying to search for something else. (Lush)

It seemed cluttered. (Lush)

Wasn’t as responsive and didn’t seem as professional and polished. (Lush)

1.3. What makes a beauty or cosmetics website clean and simple?

Relevant filters, intuitive categories:

There’s a lot more selection and you can filter the price range and sort by price in a meaningful way. (Sephora)

Easy and simple to figure out, clear categories. (Sephora)

1.4. What makes it cluttered and confusing?

Poor UX design:

I thought the graphics on the front page were a little overwhelming. The filter by price was not out in front. (Clinique)

It was not scaled to a page. You had to scroll. There was too much going on during checkout. (Clinique)

2. Clarity: How do users perceive the site’s value & benefits compared to its competitors?

The statement we used to evaluate Clarity was:

  • I clearly understand why I should buy from this website instead of its competitors.
Takeaways

All websites are highly visual, featuring stunning images and very little text. They are perceived as exceptionally clear.

Sub-dimensions and insights

2.1. How does a website show that it is the best place to buy beauty products?

Simplicity, value, quality:

The website is beautifully designed. You can see exactly what the colors are of any makeup that you’re interested in purchasing. It’s fun to look at the various products. (Sephora)

I loved their overlay and their packaging. The items looked gorgeous. (Sephora)

Website looked nice and I felt like the products were top quality. (Sephora)

3. Usability: Do users think the site is more or less usable compared to competitors?

The statements we used to evaluate Usability were:

  • The website is easy to use.
  • It is easy to navigate within the website.
Takeaways

Getting usability right appears to be a major challenge. Three of the four tested mobile sites achieved their lowest score on this dimension. While Sephora and Fresh perform above average on Usability, Lush is clearly not on par.

Sub-dimensions and insights

3.1. What makes a beauty and cosmetics website easy to use?

Ease of use, speed, good filtering/sorting:

It was very easy to search and was also very well organized. (Fresh)

It was easy to search and check out. (Fresh)

Very easy to navigate. (Fresh)

The website was very fast and easy to use. (Sephora)

3.2. What makes it hard to use?

Slow to load, poor navigation, filtering, or sorting:

Search bar was hard to find, checkout didn’t let me use my quick fill-in feature. (Lush)

It was pretty hard to search around. It kept taking me back to the home page. (Lush)

The whole website didn’t fit on my phone’s screen. I had to scroll over to the right to see the button for my cart. (Lush)

3.3. What makes a beauty and cosmetics website easy to navigate?

Helpful categories, filters, and an easy checkout:

I liked that when I pushed ‘add to bag’ it automatically took me to my bag and I didn’t need to navigate to find it to check out. (Clinique)

The navigation was great. It was set up in broad categories that you could explore. Very nice looking and functional website. (Lush)

3.4. What makes navigation difficult?

Misleading, not intuitive:

I didn’t like it that the first thing was an email signup when I thought it was a registration sign up. I also didn’t like all the various offers coming all over the place. I would prefer a separate tab for “your offers.” where I could get samples or other points and things like that. The offers got in the way of a clean shopping and checkout process and made it a slight hassle. (Clinique)

4. Credibility: Do users trust the site more or less than its competitors?

The statements we used to evaluate Credibility were:

  • I feel comfortable purchasing from this website.
  • I feel confident conducting business with this website.
Takeaways

Three of the four websites are perceived as highly credible. Lush lags behind, scoring slightly below average.

Sub-dimensions and insights

4.1. What makes cosmetics shoppers feel comfortable buying from this site?

Fast, professional, easy to navigate, attractive:

It was very well designed. I like the addition of adding pictures and making it feel more homely. It was also pretty easy to navigate. (Fresh)

4.2. What keeps them from feeling comfortable?

Slow speed, too many popups, cluttered design, unclear options in cart or at checkout:

Most of the website was very cluttered and felt very chaotic. It was very hard to navigate compared to the other websites. (Clinique)

There were pop up ads that were annoying. (Clinique)

I didn’t like that the lipsticks didn’t show the packaging but only the lip smears. (Lush)

I don’t like the colors and they asked more information like my phone number. (Lush)

4.3. What gives cosmetics shoppers the confidence to buy from this site?

Ease of use, transparency on prices:

There’s a lot more selection and you can use the filter for price range and sorting for price in a meaningful way. (Sephora)

4.4. What undermines their confidence?

Marketing messages, interruptions:

Too many adds, pop-ups are terrible, too much marketing while I was trying to search for something else. (Lush)

5. Loyalty: Are users more likely to return to the site or to its competitors? (NPS included)

The statements we used to evaluate Loyalty were:

  • How likely are you to recommend this website to a friend or colleague?
  • I will likely visit this website in the future.
Takeaways

Loyalty is the UX dimension that separated these sites the most. The four websites’ loyalty results spread across the upper half of the bell curve. Lush and Fresh trail their competitors on this UX dimension.

Sub-dimensions and insights

5.1. Why would cosmetics shoppers recommend a beauty site to a friend?

Ease of use and product discoverability:

The pull-down menus from the top was elegant looking but not fussy. Loaded quick and to where I wanted. (Sephora)

I liked the ease of locating everything and quickly limiting it to just my needs. (Sephora)

5.2. What would keep them from recommending it?

Hard-to-find products, slow, confusing:

Search bar was hard to find, checkout didn’t let me use my quick fill out feature. (Lush)

While they had a sort ability, they didn’t have a true filter option. (Lush)

5.3. Which aspects of a beauty site bring users back again and again?

Ease of use and product discoverability:

There was nothing that I found to be frustrating. I’m a long time Lush customer so I regularly buy products from them online. I like their website.  (Lush)

5.4. Which aspects keep them from coming back?

Low-end design, hard-to-find products:

Wasn’t as responsive and didn’t seem as professional and polished. (Lush)

The categories weren’t as nice as the very first site. It felt like I had less choice here and the most obvious products were only the bestsellers.  (Clinique)

GLOBAL UX – Who Won?

After this analysis, we can declare a clear winner among the 4 websites we tested: Sephora. Sephora stood out on all five dimensions we tested. Clinique and Fresh have room for improvement, while Lush trails its three competitors.

Download the Full BeautyUX Report here.

Conclusion

Comparing two different “objects” at a glance is difficult. Whether these objects are companies, websites, products, or services, UX benchmarking offers a rigorous comparison and quantitative results.

This benchmarking study compared four stunning websites—all scored high on Appearance and Clarity. Yet, despite their looks, we could separate them based on usability dimensions, and a clear winner emerged.

Benchmarking is a starting point to understand what you are doing.


U.S. companies spend billions on training each year. What about marketing departments? How much do they spend? What are they getting out of it? And what are they struggling to solve?

We surveyed 462 marketing leaders—CMOs, VPs of Marketing, Marketing Directors—to find out.

Respondents completed a 10-question survey that covered:

  • The perceived skill level of marketing teams.
  • Processes and accountability for training marketers.
  • Budgets for marketing training.
  • Primary challenges when upskilling marketing teams.

Here are seven key takeaways. If you want the full report, download the PDF.

7 things we learned

1. Bigger companies feel better about their skills.

Do larger organizations have greater in-house marketing skills? They certainly think so. Companies with 1,000–5,000 employees have the highest average estimate of marketing team skills.

Overall, the larger the organization, the higher the perceived skill:

It’s easy to speculate why this may be the case:

  • Larger organizations are more likely to have employees with specialized training and experience compared to smaller organizations, which may feel the lack of deep expertise acutely for some marketing challenges.
  • Larger companies have deeper pockets to afford top talent.
2. Training budgets aren’t big, but they’re getting bigger.

Each year, the average company spends $994 per employee on training. More than half, however, spend significantly less, if anything:

  • 61.9% spend $500 or less.
  • Nearly 1 in 5 spends nothing at all.
Note: We asked respondents for the training budget on a per-employee basis; some responses clearly provided a total training budget (e.g. $1 million). We omitted responses we believed did not reflect a per-employee budget.

Larger organizations tend to spend more—62.5% more than small businesses and 16.1% more than medium-sized companies:
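The cleaning step described in the note above is easy to script. A rough sketch, assuming raw dollar responses and a simple cutoff to flag answers that look like total rather than per-employee budgets:

```python
raw_responses = [0, 250, 400, 500, 994, 1500, 1_000_000]  # illustrative, in $

# Assumption: anything above $50,000 per employee almost certainly
# reflects a total training budget, so we omit it from the stats.
per_employee = [r for r in raw_responses if r <= 50_000]

share_500_or_less = sum(r <= 500 for r in per_employee) / len(per_employee)
share_zero = sum(r == 0 for r in per_employee) / len(per_employee)
average_spend = sum(per_employee) / len(per_employee)
```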

Our current career development budget for our 60-person agency is $1,000/person/year. We find this is the right budget to get them to a solid conference in the area, or do an online course if they prefer that.

Ross Hudgens, Founder and CEO of Siege Media

For almost half of all respondents, budgets are getting bigger. Only 1 in 16 companies is reducing its upskilling budget in 2019:

Most of that added investment is coming from large enterprises. Nearly two-thirds (66%) of enterprise organizations plan to spend more on training in 2019 than they did last year:

Does more money translate into a higher perception of skill? Not really. There was only a weak correlation (0.20) between the amount spent on training and the perceived skill of the marketing team. Why?

  • Small companies may have urgent needs that siphon off resources from long-term investments in training. Individual practitioners at those organizations may spend more time learning through experience.
  • Teams that overestimate their skills—those, perhaps, that are convinced every member is a “10”—may not think there’s anything left to learn.
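For the curious, the weak correlation cited above is presumably a standard Pearson coefficient. A quick illustration of the computation, with made-up numbers:

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

spend = [0, 250, 500, 1000, 2000, 5000]  # per-employee training spend ($)
skill = [6, 7, 5, 7, 6, 8]               # perceived team skill (1-10)
print(round(pearson(spend, skill), 2))
```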

Being a successful marketer [. . .] demands a solid grasp of a multitude of different disciplines. Acquiring all these skills can seem overwhelming, unless you get the proper training. If you ask me, upskilling your marketing team is one of the best investments your company will ever make. 

Michael Aagaard

So what does correlate with strong marketing skills? The amount of business you do online.

3. Companies that live online have better marketing skills.

SaaS, B2B, and ecommerce organizations had the highest perceived marketing skills:

Government and non-profit sectors trailed for-profit businesses. (Both listed a “limited budget” as their primary training challenge.)

It’s easy and common for even the most experienced marketers to gloss over the basics and lose touch with the fundamentals of good marketing. A good training course is rooted in these principles, even if the topic is more tactical or execution in nature.

Hana Abaza, Director of Marketing at Shopify Plus

Earning more resources often requires demonstrating ROI, something far easier for online companies that can tie marketing efforts and revenue tightly together.

The three industries with the highest skill levels were also near the top when it came to having a “clear, structured process” for training. Surprisingly, agencies trailed all industries: Only 2 in every 5 have defined a process:

That means that roughly…

  • 3 of every 5 agencies
  • 1 of every 3 B2B, SaaS, or travel companies
  • And half of all ecommerce, media, non-profits, and governmental organizations

…are struggling to create a structured training program. So who’s responsible for those programs?

4. Direct managers own marketing training.

In 2 of every 5 companies surveyed, direct managers are accountable for training their teams. In nearly three-quarters (74.7%) of all businesses, either the direct manager or a marketing leader—who, in some instances, is also the direct manager—is accountable for the skills of their team:

Only 5% of organizations reported having no accountability, although an additional 14.3% reported autonomy (anarchy?) when it came to training.

Accountability was especially lacking in small organizations.

5. Autonomy at small companies applies to training, too.

Of the 89 respondents who stated that there was no formal oversight of training, a disproportionately large share came from small businesses:

“The best thing you can possibly do for your career is also one of the easiest things [. . .] which is spend the time to learn something. If you spend an hour every single day—or 30 minutes or maybe even 25 minutes—you’re going to be leaps and bounds ahead of your competition in two years.”

Chad Sanderson, Microsoft

Still, about two-thirds of small-business respondents (67.8%) had some oversight of skill development programs. The management challenges, however, extend beyond top-level accountability.

6. A structured process doesn’t guarantee training success.

Having a “structured process” for training programs isn’t the only organizational challenge, as marketers made clear in their open-ended responses:

  • By the time everyone gets trained, the knowledge may be outdated. Marketing leaders highlighted the challenge of identifying what their teams needed to learn next, or how to find a program that could help future-proof their department.
  • CMOs and VPs of marketing also struggled to measure whether training knowledge translated into more profitable marketing strategies—or to test retention months down the road.

We have our own internal academy with our own execution recipes run by our Director of Training. The most important part of their job isn’t to add new content [. . .] but to actually audit the students and make sure they know everything we teach as second nature. You’d be shocked at the lack of knowledge retention unless you make sure you audit, and audit repeatedly—we do every three months.

Johnathan Dane, Founder and CEO of KlientBoost

The other major challenge? Finding the best employees and getting them to stick around.

7. Finding talent that wants to learn—and keeping them—is hard.

See if this sounds familiar:

  • You struggle to find capable employees who are willing to learn new skills. Highly trained employees are often out of your price range, and undertrained employees need a structured, well-funded program to progress.
  • You worry that highly trained employees will jump ship after you invest in their development. Training makes your employees more productive—and more attractive to companies trying to lure them away.

When you invest in your teams, you don’t just build loyalty and engagement—you build a force that grows with your business, stays ahead of the market, and seizes opportunities that keep you ahead of your competition.

Ryan Engley, VP of Product Marketing at Unbounce

Other HR challenges, many respondents told us, ranged from long-tenured employees unwilling to adapt to fresh college grads without technical skills.

What else is there to learn? Download the PDF with all the details from the 2019 State of Marketing Training survey.

The post The State of Marketing Training in 2019 [Original Research] appeared first on CXL.


In my daily work with ecommerce brands, I see two types of companies:

  1. The first type focuses on acquisition and conversion.
  2. The second relies on retention.

The second type is winning. Why?

Overall acquisition costs for both B2C and B2B have gone up by 50% in the past five years. Sooner or later, relying on new customers will break you. To offset these costs, you need to earn more repeat purchases from existing customers.

Repeat purchases are often cheaper because people already know the brand. As a result:

  • They’re converted via email.
  • They’re converted via organic or direct traffic.

Thus, with the same ad budget, you get more orders—three times more, according to research.  That means better margins, more profitability, and cost-efficient scaling.

This post gives you a data-backed approach to win more repeat sales. The key is to identify and persuade your most valuable cohorts.

Why to monitor the post-purchase experience by cohort

In Google Analytics, the Cohort Analysis Report is time-based, though you can apply a segment to a cohort.

Cohorts vs. segments: What’s the difference?

In marketing, cohorts are groups of customers who exhibit similar behaviors during a certain time span (e.g. buyers during a holiday promotion). Segments include any “subset of your Analytics data” (e.g. mobile purchasers).

As Alistair Croll and Benjamin Yoskovitz detail in their book, Lean Analytics, cohort analysis has special relevance for the customer lifecycle, enabling marketers

to see patterns clearly against the lifecycle of a customer, rather than slicing across all customers blindly without accounting for the natural cycle a customer undergoes. Cohort analysis can be done for revenue, churn, viral word of mouth, support costs, or any other metric you care about.

This post uses the broader definition of “cohort”—“a group of persons sharing a particular statistical or demographic characteristic”—to avoid toggling between “cohort” and “segment,” even though some “cohorts” listed below are not explicitly time-bound.

Benefits of cohort analysis

When it comes to increasing repeat purchases, cohort analysis provides three key insights:

  1. The post-purchase behavior is directly influenced by the initial experience, so the motivation for the first order—incentive, timing, product, exposure—is a strong unifying factor for the cohort.
  2. Also, cohort behavior often reflects the use of the product over time. For replenishable products, if most people place their next order on the fifth week, that’s when they run out of it. Or for products like clothing, shopping every three months suggests when people get ready for the new season. Such details help identify when it makes sense to push marketing and when there’s no point.
  3. Lastly, cohorts provide buying behavior insights to adjust your marketing so every customer feels the communication is personal. Since one customer inevitably falls into more than one cohort, the analysis gives meaning to their lifecycle behavior from different perspectives—the product they bought, the campaign that converted them, when they ordered, and so on.

So which are most important?

Which cohorts to monitor

You can probably come up with dozens of characteristics by which to segment your customer base, but start with these five:

  1. By first product bought. The first item the customer purchases shapes all subsequent interactions with your brand.
  2. By month of first order. The temporal tipping point signals a customer’s motivation for buying.
  3. By campaign of first order. The most telling cohort, perhaps, is which promotion locked in the sale.
  4. By coupon used at first order. The exact coupon code that converted people speaks volumes about the kind of promotions that influence their decision to buy.
  5. By traffic source. Where customers come from to shop for the first time may influence their behavior.

The behavior of each cohort helps you identify which customers have the greatest influence on profitability.
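As a sketch of how these five cohort keys might be derived from a raw orders table with pandas (the file and column names are assumptions):

```python
import pandas as pd

# Assumed columns: customer_id, order_date, product, campaign, coupon, source
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Each customer's first order defines their cohort memberships.
first = (orders.sort_values("order_date")
               .groupby("customer_id", as_index=False)
               .first()
               .rename(columns={"product": "first_product",
                                "campaign": "first_campaign",
                                "coupon": "first_coupon",
                                "source": "first_source"}))
first["first_order_month"] = first["order_date"].dt.to_period("M")

# Attach the cohort keys to every order for downstream analysis.
orders = orders.merge(first.drop(columns=["order_date"]), on="customer_id")
```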

How repeat purchases influence profitability

Without cohort analysis, it’s hard to make connections between customers and profitability, especially at scale. Cohort analysis consolidates the data from individual customers into meaningful bundles from which you can draw conclusions.

Each cohort has a distinct financial performance. Usually, there are one or two strong cohorts with lots of high-value customers that drive profitability (while dragging less-profitable customers behind). The good cohorts repay their Customer Acquisition Costs (CAC) quickly and bring higher margins over a long period, accumulating the lion’s share of returns.

Cohort metrics can help drive more repeat customers.

Three characteristics help identify the most valuable cohorts:

Average order value (AOV). A larger AOV can make up for a shorter customer lifespan. If you’re looking to boost short-term results to placate investors or expand into a new market—and paid acquisition is a must—a focus on AOV can earn more revenue for the same CAC (and, often, the same shipping fees). Increasing the AOV boosts the margin on individual orders, especially if one of the products is more profitable than others.

Number of orders per customer. This metric reveals the longer-term relationship. Some product categories don’t drive many repeat orders. For those that do, each consecutive order comes at a lower CAC (or even free). Even if the first sale is at no margin—as often happens in competitive niches—the next ones offset the loss. With a strategy in place to boost repeat purchases, occasional spikes in CACs will be more manageable.

Lifetime value (LTV). LTV reflects both the average order value and the number of orders per customer. High LTV numbers mean your brand enjoys true customer loyalty (and VCs value high LTVs if you seek funding). Companies with a high LTV have a strong brand image, earn word-of-mouth and organic referrals, and enjoy “search monopolies”—conversions by brand-name search.
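Continuing the assumed orders table from the sketch above, all three metrics can be computed per cohort in a few lines:

```python
import pandas as pd

# `orders_with_cohorts.csv` is assumed to hold one row per order with
# customer_id, first_order_month, and order_total columns.
orders = pd.read_csv("orders_with_cohorts.csv")

g = orders.groupby("first_order_month")
cohort_metrics = pd.DataFrame({
    "aov": g["order_total"].mean(),                     # average order value
    "orders_per_customer": g["order_total"].size()
                           / g["customer_id"].nunique(),
    "ltv": g["order_total"].sum() / g["customer_id"].nunique(),
})
```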

Once you gain visibility into the LTV of each cohort, you’ll begin to learn a lot about your customers.

Profitability is a balance between CAC and LTV. (Image source)
What you’ll learn about your customers

Before you start increasing repeat purchases, you have to find the gaps and opportunities in the data. Along the way, you’ll learn a lot about the post-purchase experience consumers have with your brand.

1. Get a complete view of the customer lifecycle

To improve the customer experience and drive more repeat sales, you want to see how repeat purchases are already happening naturally:

  • When do the second, third, or fourth orders happen in your store?
  • When do most people in the cohort reach the end of their customer lifecycle?

Mapping the entire customer lifecycle reveals connections between marketing, sales, and the customer. You gain a better understanding of how often customers need your products, which is a good starting point for more personalized and better-timed email marketing, rather than the popular blanket approach that constantly bombards customers with promo emails.
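One way to answer the timing questions above is to look at the gaps between consecutive orders. A sketch, again assuming a simple orders table:

```python
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
orders = orders.sort_values(["customer_id", "order_date"])

# Days between each customer's consecutive orders.
orders["days_since_prev"] = (orders.groupby("customer_id")["order_date"]
                                   .diff().dt.days)
orders["order_number"] = orders.groupby("customer_id").cumcount() + 1

# Median gap before the 2nd, 3rd, ... order shows when repeat purchases
# naturally happen, and when a reminder email makes sense.
gap_by_order = orders.groupby("order_number")["days_since_prev"].median()
```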

2. Understand purchasing habits

It’s not just about what happens over time, but also how:

  • How much do people spend at each consecutive order?
  • Does this amount change over time?
  • Do they buy a lot in a few orders or place numerous small-amount orders?

Some cohorts will accumulate huge lifetime values over just six weeks; others will spread out equal-order values over long, long periods. Knowing such details about your customers’ spending allows you to tailor your communication to stimulate the desired behaviors—like rewarding repurchases or prolonging the customer lifetime.

The key is to take what they already do and gently nudge them to do it more often. Such marketing is non-intrusive and better fits consumers’ needs.

3. Find out what drives customer loyalty

As mentioned above, the first order—which product a consumer purchased and the motivations that led to that purchase—shapes the rest of the journey.

Products

Unfortunately, not all products bring the same level of satisfaction, and some drive customers away. In categories like food, clothing, and beauty, look for products with mostly one-time buyers and low repeat rates.

Of course, this won’t work in other categories—like bikes or baby gear—that people typically buy once. To measure the performance of those products, I suggest throwing some add-ons in your mix (e.g. maintenance, upgrades) to drive repeat purchases and assess satisfaction with the initial purchase.

Comparing product cohorts can help identify the most profitable segments of your customer base.

Detecting unsatisfactory items as soon as possible might save your reputation, marketing budget, and customers. Drop sub-par products and concentrate on the ones that consistently bring repeat purchases and good reviews.

We had a client selling personalized jewelry who discovered through a retention analysis that 40% of his products didn’t drive any repeat purchases. He scrapped them from his store, giving more visibility to the others that worked well.

When you know exactly which products people use, the lifecycle will show you how often they need complementary products (e.g. filters, cleaning supplies), how long before they’ll replace it, or when they’ll stop using it (i.e. when the purchases of complementary products stop).

Some products have clearly defined periods of use. If you sell diapers, for example, you can estimate quite accurately the needs of first-time buyers: They’ll probably stay with your shop for three years, at best. So when you notice outliers—with a longer lifespan, for example—you can test a bundle offer as they may have kids at different ages.

Marketing strategies

Some changes you make will influence conversions at the specific time you implement them. So time-based cohorts (by time of first order) are closely connected to what was happening in your store and on your site at that time.

Maybe last month you changed how you share social media content and attracted new followers. Or you lowered shipping costs in a country and won new business. Or, perhaps, you added a few colors to your product line. Maybe your search rankings improved, making it easier for potential buyers to find your site.

Any of these changes can bring an influx of better-fit customers with a higher repurchase rate. Monitoring how people who bought in the same month behave afterward sheds light on the long-term effect of those adjustments.

Marketing campaigns and promotions

Knowing how each promotion works in the long run solves many digital marketing mysteries:

  • Where you should put your money;
  • How to formulate a campaign to attract the desired people and behavior;
  • Which channels are home to long-term customers, and so on.

Where loyal customers come from is important for social media management, ad placement, partnerships, affiliate links, media features, etc. For more post-purchase conversions, double down on the channels that bring strong cohorts with high retention rates.

A retention analysis of coupon codes reveals a lot about your customers: not just the revenue a coupon brings, but which messaging and discounts work, where a coupon should be placed, whom to target, which products to include, etc. People in the “Last chance, 70% discount” cohort will behave completely differently from those in the “New Collection, Early Bird” cohort.

Now it’s time to apply your knowledge—and win more repeat sales.

How to drive repeat purchases by cohort

Our client, Barrington Coffee Roasting Co., had a problem: tons of traffic but no conversions. They analyzed their traffic and retention and realized that channels like Google Ads brought low-quality traffic and little brand loyalty. They shifted their efforts to review sites that did bring quality traffic, and ROI increased with unchanged budgets thanks to repeat purchases.

Data-driven marketing tailors marketing campaigns to consumer behavior. Cohorts offer valuable context and suggest which behaviors to target. You’re able to time your marketing messaging more closely to the purchasing cycle, drive engagement in a way that feels natural and useful (instead of salesy and pushy), and focus your marketing money in the channels that matter.

Here’s how to do it for four key cohorts.

By first product bought

This segmentation gives plenty of opportunities for ongoing engagement.

  1. Ask for feedback after a sensible period of time (so the customer has time to test the product). You can automate this step, but try to have customer service reps reply personally to positive and negative reviews (and, in the case of the latter, fix the problem).
  2. Give usage tips to encourage more frequent use. The more people use the product, the more involved they are with your brand.
  3. Create a separate social media group for users of various products and invite them to join a relevant one. You’ll be able to craft specific content to share with each group, and user-generated content will help foster a real community.
  4. Offer add-ons when you see a drop in the cohort repurchasing rate. If fewer and fewer people are placing a third order, for example, that’s when you should offer an accessory, an upgrade, or a replacement of the old item at a special price.
  5. Send just-in-time emails for replenishable products like cosmetics, food, or anything else that needs to be replaced. Time communications to arrive before consumers run out. Use the average time between orders as your guide the first few times. Then, you can sub-segment each cohort even deeper and send almost one-on-one emails to match individual repurchase times. This is convenient for consumers and keeps you top-of-mind at the perfect moment.
  6. Offer recycling or replacement service at the end of the estimated product lifecycle. The Department of Planning, Transport and Infrastructure in South Australia has had sweeping success the past two years with their OLD4NEW campaign to replace life jackets on boats. They issue vouchers for new and safer replacements to be redeemed at participating stores—bringing extra sales to stores while doing public good.

The Australian government timed a voucher campaign to incentivize boat companies to upgrade life jackets—and boost sales at retailers. (Image source)

By month of first order

The time of the first purchase can give you clues on how to engage afterward.

  1. For holiday shoppers, one of nearly every store’s biggest cohorts, when are the next orders placed? Look for changing seasons and major holidays. Those shoppers may, more often than not, be shopping for gifts and incentivized by special offers. Sending those offers proactively (e.g. two weeks before Mother’s Day) will be well received.
  2. Look at the timing of campaigns. What was the main initiative during the months when you attracted your best cohorts? Keep the cohorts engaged with content related to the initial campaign.
By campaign of first order

The campaign that brought in a new group of customers says a lot about their motivations and shopping habits.

  1. Which campaigns brought in the most loyal customers? What was the message? Replicate those to try to attract as many similar customers as possible. Give your most loyal customers early bird offers, limited edition items, and premium service.
  2. Maintain the leitmotif for niche campaigns—keep the post-purchase experience consistent to preserve the connection. This may even mean creating a different style of communication for niche cohorts.
  3. Don’t expect high customer retention in cohorts from deep-discount campaigns. Those customers seek deals and are not brand loyal, so they can be stimulated to buy again only with more discounts. The good thing is that you can offer those discounts to the deal-hunting cohorts alone and not eat your margins with the rest.
  4. Products included in a promotion also signal customers’ preferences. Some want only the newest models; others shop for clearance items. Effective marketing gives each their own and doesn’t waste budget or effort on trying to change behaviors.
By traffic source

Where your loyal customers came from informs the place for post-purchase engagement, too. If they trust social networks, blogs, magazines, etc., enough to buy from links or ads in those sources, they’re more likely to do so again.

  1. If long-tail keyword searches attract high-value cohorts, optimize your site for them.
  2. If direct traffic doesn’t increase customer loyalty, work on your reputation and brand image. People obviously type in your address with the intention to buy but, for some reason, are disappointed.
  3. For cohorts coming from social media, tailor the content to their product preferences.
  4. For cohorts from referral sources, like an affiliate link, measure their lifetime ROI against the cost. Then, monitor behavior for drops in sales and work with the referrer to increase engagement.
What to do with cohorts that don’t buy again

Some cohorts will not become repeat purchasers. Their lifecycle map is a dead end. Here are a few tactics to turn that around:

  1. Remove products leading to one-time purchases. By now, you should know which products fail to stimulate loyalty (or stimulate below-average loyalty). Consider dropping products if they don’t lead customers down the path you want them to take.
  2. If a cohort looks dead but AOV or LTV is quite good—accumulated in a few big orders, perhaps—ask for feedback. You’ll learn what it may take to reactivate a potentially lucrative cohort.
  3. Try to replicate the customer engagement of more successful cohorts. Proactively reach out via email around the time for reorder (average time between orders) to activate the inactive cohort.
Ecommerce analytics tools to help with cohort analysis

While some level of cohort analysis is possible in Google Analytics or with (clunky) spreadsheet calculations, ecommerce analytics tools provide cohort data that expedites and deepens analysis.

G2 Crowd lists more than two dozen providers in the ecommerce analytics category. For cohort analysis, there are five primary providers, listed alphabetically below:

  1. Custora
  2. Glew.io
  3. Kissmetrics
  4. Metorik
  5. Metrilo
Conclusion

Cohort or segment analysis is one way to track and analyze buyer behavior and its effects on your ecommerce business over time. This kind of segmentation sheds light on how a common first-order trait influences the customer journey and deepens understanding of buyers’ needs.

When you routinely perform cohort analysis, more opportunities to drive repeat purchases appear. You’ll be able to ditch the blanket approach and build a more meaningful relationship with each customer. Relevant offers and adequate timing turn marketing from an intrusion into a useful interaction.

Best of all, in my opinion, is that the retention-heavy strategy comes at practically no cost—it continually brings in more sales without pouring more money into marketing.

The post Increase Repeat Purchases with Cohort Analysis appeared first on CXL.


The classic graph for the product lifecycle is a sales curve that progresses through stages:

  • a sharp rise from the x-axis as a product transitions from Introduction to the Growth phase;
  • a sustained, rounded peak in Maturity;
  • and a gradual Decline that portends its withdrawal from the market.

Each stage of the product lifecycle has implications for marketing. But an MBA-friendly curve rarely translates to reality. The goal of product lifecycle marketing is not to match the curve but to outline what may work best now and plan for the future.

Skilled product marketers shape the curve: speeding through the Introduction, increasing the slope of the Growth phase, extending the length of Maturity, and easing the pace of the Decline.

What is product lifecycle marketing?

Product lifecycle marketing aligns marketing efforts with a product’s lifecycle stage. Product lifecycle management includes all activities related to the product lifecycle, not just marketing. There are four stages:

  1. Introduction
  2. Growth
  3. Maturity
  4. Decline

Some iterations of the product lifecycle tweak names or add stages. The purpose of the product lifecycle illustration is not to debate semantics but, in a single view, to understand how product sales typically change over time. Your product lifecycle will not match it, nor should it.

Indeed, as Theodore Levitt noted decades ago, many products never make it beyond the first phase:


[M]ost new products don’t have any sort of classical life cycle curve at all. They have instead from the very outset an infinitely descending curve. The product not only doesn’t get off the ground; it goes quickly under ground—six feet under.

The benefits of looking at marketing through the lens of the product lifecycle are multifold:

  • Provides a broad understanding of how your product fits into lifecycles for the product class, form, or brand (e.g. petrol-engined cars, people-carrier, and Ford, respectively).
  • Identifies which stage you’re in to help you understand what you need to communicate and how to do so persuasively.
  • Spots the early signs of a pending transition to a new stage and suggests how your marketing campaigns should accommodate that change.

Of course, none of that matters if you can’t push your product past the Introduction phase.

1. Introduction

The Introduction stage is when the product first comes to market. It may include or—for those who want a more granular division of lifecycle stages—be preceded by a Development stage.

How to identify if you’re in the Introduction stage

If your product hasn’t yet launched, it’s obvious that you’re in this initial phase of the product lifecycle.

For recently launched products, marketers in the Introduction stage focus on creating awareness and motivating potential buyers to consider the product—to be in the conversation when potential buyers weigh their options.

What marketing needs to accomplish during the Introduction
  • Create product awareness and trial

The introductory stage is rarely profitable:

profits are negative or low because of the low sales and high distribution and promotion expenses [. . .] Promotion spending is relatively high to inform consumers of the new product and get them to try it. Because the market is not generally ready for product refinements at this stage, the company and its few competitors produce basic versions of the product.

Introduction phases of the product lifecycle are rarely profitable. (Image source)

Unlike successive stages, there’s no benefit to prolonging the Introduction phase. As a result, the primary marketing goal is to roll out campaigns that move quickly past Introduction to Growth: To build enough awareness that consumers know of the solution (if it’s a new market) or know of your company when considering the solution (if it’s an established market).

Research needs

Without a wealth of post-purchase consumer feedback, marketers in the Introduction stage may rely on user research conducted by others in the company, like the product management team. This may include data from test marketing or initial market research.

Questions marketers need to answer include:

  • Why did they create this product?
  • What problem does it solve?
  • How big is the market?
  • Who are the intended purchasers?
  • Which potential users or companies are influential in this space?

The answers help define which marketing messages and channels will be most influential.

As Lucas Weber notes in his product marketing course, it’s also essential to get marketing and sales teams’ buy-in for the product before they begin working on campaigns. Teams need to believe in what they’re selling before they can pitch it persuasively.

Questions marketing needs to answer

What is your go-to-market strategy? A go-to-market strategy identifies how the company plans to move through the Introduction stage and catalyze Growth. There are three main options, with hybrid options also possible:

  • Sales led. Customer acquisition costs can be high, and growth may require large sales teams. In the Introduction period, a sales-led strategy may work if the goal is to gain adoption by a small number of influential users or companies.
  • Marketing led. Marketing-led strategies often scale more efficiently than sales-led strategies, but rising PPC costs—especially in established markets—can price-out startups in some channels. A marketing-led strategy requires strong communication and data-sharing between marketing and sales teams—true demand generation.
  • Product led. For SaaS companies, product-led strategies have the potential to bypass the shortcomings of the other two. Free-trial or freemium offerings can scale awareness while limiting marketing outlays.

For marketers, the go-to-market strategy helps determine the primary call to action: A sales-led strategy may be a demo request, whereas a marketing-led strategy may focus on email signups, and a product-led strategy asks prospects to start a free trial.

For product-led companies like Ahrefs, a free-trial signup is the primary call to action.

How long is the expected product lifecycle? Not every company is preparing for a full product lifecycle. VC-backed companies that focus on user adoption—at the expense of profitability or even revenue—may never plan for phases beyond Introduction and Growth.

The latter phases become someone else’s problem, as they did with WhatsApp. The company focused on driving user adoption, and Facebook bought the company in 2014 without knowing how to monetize users. Zuckerberg felt that any platform with a half-billion users had potential, even if that required a $19 billion bet. (By 2018, WhatsApp had 1.5 billion users—but still no monetization.)

Additionally, products that require more market development may linger longer in the Introduction phase:

A proved cancer cure would require virtually no market development; it would get immediate massive support. An alleged superior substitute for the lost-wax process of sculpture casting would take lots longer.

Other products that qualify as fads or fashions expect shorter lifecycles—and sharper slopes between each phase:

How should you price your product? Products in the Introduction phase operate along the spectrum between two ends of a pricing strategy. Each has strong implications for the target audience and which go-to-market strategy may be most effective:

  • Skim the cream. Seek out the early adopters who are price insensitive. The strategy is to recoup product investment costs quickly at the expense of larger but more gradual profits in later years. A high price point also invites competition, unless a product enjoys patent protection.
In Wall Street, Gordon Gekko is the quintessential price-insensitive early adopter for a product that was then part of an emerging market. (Image source)
  • Rapid market penetration. A lower price point may increase adoption and keep competition at bay for longer, but also takes longer to achieve profitability. Low pricing has other risks, and faster market penetration isn’t always better:

Some products that are priced too low at the outset [. . .] may catch on so quickly that they become short-lived fads. A slower rate of consumer acceptance might often extend their life cycles and raise the total profits they yield. The speed of market penetration depends on the availability of marketing resources—a higher potential speed should increase the rate of adoption.

Signs you’re headed to the Growth stage

When marketing conversations focus more on competitors—rather than creating a market or earning consideration among potential buyers—you’re headed toward the Growth phase of product lifecycle marketing.

Another sign: You’ve moved past the initial product launch and must consider how to promote product updates to an existing consumer base or use them as a means to acquire new buyers.

2. Growth

The Growth stage is the period with the sharpest increase in sales. It includes a significant boost in market presence, the addition of new product features, and a greater emphasis on positioning relative to the competition.

How to identify if you’re in the Growth stage

Levitt explains one way to determine if your company is in the Growth period: “Instead of seeking ways of getting consumers to try the product, the originator now faces the more compelling problem of getting them to prefer his brand.”

If you’re in an emerging market, you may lure new competitors into the field. If you’re in an established market, you may go head-to-head with industry stalwarts more often.

Importantly, Levitt goes on to note, competitors shape marketing campaigns during the Growth stage:

The presence of competitors both dictates and limits what can easily be tried—such as, for example, testing what is the best price level or the best channel of distribution.

There is one benefit of increased competition: Your competitors’ campaigns may help increase overall demand for your product class, which allows marketing teams to shift resources back toward promoting your product.

What marketing needs to accomplish during the Growth stage
  • Maximize market share

During the Growth period, the marketing team works to widen their audience and build brand preference.  

Research needs

What have you learned about your consumers? By the Growth stage, you should have enough consumer data to begin building marketing strategies based on consumer experiences with your product, rather than consumer attitudes or behaviors related to the problem your product solves or competitor offerings.

Qualitative user research has many sources. That research, in turn, affects marketing plans.

How does your company and product compare to competitors? Also during the Growth phase, you’ll dive deeper into competitor analyses to determine how your value proposition compares and which aspects of your product or brand will help differentiate.

Questions marketing needs to answer

How are you positioning your product? If your current product is more advanced than competitors, you may be best positioned to earn a dominant market share—which means pushing more resources into current campaigns (at the expense of near-term profits).

If your product development lags behind, you may have more success competing on value, or by developing a brand—even a brand-owned term—that makes an emotional appeal to consumers.

Drift built a brand identity—and sustained growth—on “conversational marketing.”  

How will you pitch new product features? To deploy the right marketing strategy, you need to know the rate of product development—how (and how often) the product team rolls out updates. Big updates usually require a big marketing push. If your product team pushes big releases infrequently, marketing teams need to plan in advance:

  • Is a product update big enough to justify its own ad campaign? Will you need a dedicated budget, content, or landing pages?
  • Is the update best suited to attract new customers or retain existing ones?
  • Does a particular update target a subset (or new cohort) of your target audience? Do you need to source tailored quotes or case studies to make a persuasive case to that audience?

In contrast, for smaller updates, marketing may focus on in-product messaging. In SaaS platforms—especially product-led ones—that often includes tooltips, notifications, or tutorials that appear within the product’s UI.

Does the sales staff have what it needs? Many companies in the Growth stage are adding sales staff to reach new markets or increase penetration in existing ones.

Marketing is often in the middle: learning about product development from product managers and translating that knowledge into persuasive collateral for sales teams.

Signs you’re headed to the Maturity stage

Growth transitions to Maturity as the rise in sales (not profits) levels out. By the end of the Growth stage, most people who want your product have it (or a competitor’s), and the focus shifts toward winning customers from competitors and making marketing more efficient.

3. Maturity

The Maturity phase represents the height of a product’s adoption and profitability. The height of the apex depends on past achievements during the Growth period; the length of the Maturity phase depends on how long marketing can sustain the product’s dominant market position.

How to identify if you’re in the Maturity stage

“The first sign of its advent,” Levitt argues, “is evidence of market saturation. This means that most consumer companies or households that are sales prospects will be owning or using the product.”

In practical terms, it means that most “new” customers are not new to your product class but, perhaps, new to your brand—they’ve switched from a competitor. Conversely, many of your losses are wins for your competitors.

A clear sign of market saturation? Every company is pitching a switch.

What marketing needs to accomplish during the Maturity stage
  • Maximize profit while defending market share

Levers that worked during the Introduction and Growth periods have a lesser impact during the Maturity phase. More competitors and an established market likely mean that differentiation in product features declines, and you may have competitors at both ends of the spectrum—those who offer the product more cheaply and those who offer a higher-end version.

One lever doesn’t weaken: brand. Companies like Drift have risen to the top of a crowded market, in part, because of a strong brand identity—in this case, one centered on “conversational marketing.” For other organizations, like HubSpot, which owns “inbound marketing,” the brand has helped maintain its status as a market leader.

In short, as differentiation based on product features becomes more difficult, brand becomes a powerful and stable differentiator—essential for companies that seek a long Maturity period.

Research needs

What does market and consumer research suggest? In a mature market, the key to extending the (profitable) lifespan of a product is to understand the catalysts that lead to Decline.

Market and consumer research, which, in a mature market, may be conducted by third parties like Gartner or Forrester, can help suggest where the market is headed. That research may include which demographics are likely to drop off first (or stay the longest), helping to steer marketing campaigns toward higher-value consumers.

What is the sales staff learning? A mature product is more likely to have a large sales staff with deep consumer knowledge. (The research methods outlined in the Growth phase are still valuable.) Their learnings can help you understand which aspects of your product are stickiest or which decision points consumers use to choose your product or a competitor’s.

Questions marketing needs to answer

How are you building or sustaining the brand? While early phases may focus on differentiation via product features (or, if you were the only company, not differentiating at all), the Maturity phase rewards companies that can differentiate their product on brand.

Part of that effort, Weber details, comes down to the balance between persuasive vs. descriptive marketing. A greater focus on brand will also yield a greater focus on persuasive marketing—campaigns that make an emotional rather than rational appeal to consumers.

What is your most profitable demographic? Prolonging the Maturity period also requires an increase in marketing efficiency—greater efficiency sustains or grows the profitability of a product, even as total sales are flat.

Efficiency could come from shifting marketing resources toward the most valuable demographic, or via promotion of the most profitable versions of a product (e.g. basic vs. enterprise-level packages).

Clever marketing and a strong brand can extend the maturity phase. (Image source)

How can you prolong the Maturity phase? Marketers can extend the Maturity period in several ways:

  • Identifying new demographics for the product. Is there a subset of potential users that, if marketing..
Read Full Article

Years ago, when I first started split-testing, I thought every test was worth running. It didn’t matter if it was changing a button color or a headline—I wanted to run that test.

My enthusiastic, yet misguided, belief was that I simply needed to find aspects to optimize, set up the tool, and start the test. After that, I thought, it was just a matter of awaiting the infamous 95% statistical significance.

I was wrong.

After implementing “statistically significant” variations, I experienced no lift in sales because there was no true lift—the improvement was imaginary. Many of those tests were doomed at inception. I was committing common statistical errors, like not testing for a full business cycle or neglecting to take the effect size into consideration.

I also failed to consider another possibility: That an “underpowered” test could cause me to miss changes that would generate a “true lift.”

Understanding statistical power, or the “sensitivity” of a test, is an essential part of pre-test planning and will help you implement more revenue-generating changes to your site.

What is statistical power?

Statistical power is the probability of observing a statistically significant result at level alpha (α) if a true effect of a certain magnitude is present. It’s your ability to detect a difference between test variations when a difference actually exists.

Statistical power is what makes the hard work you put into conversion research and properly prioritized treatments pay off. That’s why power is so important—it increases your ability to find and measure differences when they’re actually there.

Statistical power (1 – β) holds an inverse relationship with the probability of a Type II error (β); it’s how you control for false negatives. We want to lower the risk of Type I errors to an acceptable level while retaining sufficient power to detect improvements if test treatments are actually better.

Finding the right balance, as detailed later, is both art and science. If one of your variations is better, a properly powered test makes it likely that the improvement is detected. If your test is underpowered, you have an unacceptably high risk of failing to reject a false null.

Before we go into the components of statistical power, let’s review the errors we’re trying to account for.  

Type I and Type II errors

Type I errors

A Type I error, or false positive, rejects a null hypothesis that is actually true. Your test measures a difference between variations that, in reality, does not exist. The observed difference—that the test treatment outperformed the control—is illusory and due to chance or error.

The probability of a Type I error, denoted by the Greek alpha (α), is the level of significance for your A/B test. If you test with a 95% confidence level, it means you have a 5% probability of a Type I error (1.0 – 0.95 = 0.05).

If 5% is too high, you can lower your probability of a false positive by increasing your confidence level from 95% to 99%—or even higher. This, in turn, would drop your alpha from 5% to 1%. But that reduction in the probability of a false positive comes at a cost.

By increasing your confidence level, the risk of a false negative (Type II error) increases. This is due to the inverse relationship between alpha and beta—lowering one increases the other.

Lowering your alpha (e.g. from 5% to 1%) reduces the statistical power of your test. As you lower your alpha, the critical region becomes smaller, and a smaller critical region means a lower probability of rejecting the null—hence a lower power level. Conversely, if you need more power, one option is to increase your alpha (e.g. from 5% to 10%).
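
To see that tradeoff in numbers, here’s a minimal Python sketch (using scipy) that approximates the power of a two-sided, two-proportion z-test at several alpha levels. The inputs are illustrative assumptions, not figures from this article: 20,000 visitors per variation, a 5% control conversion rate, and a true variant rate of 5.5%.

from math import sqrt
from scipy.stats import norm

def power_two_proportions(p_control, p_variant, n_per_group, alpha):
    """Approximate power of a two-sided, two-proportion z-test."""
    se = sqrt(p_control * (1 - p_control) / n_per_group
              + p_variant * (1 - p_variant) / n_per_group)
    z_crit = norm.ppf(1 - alpha / 2)         # critical value shrinks as alpha grows
    z_effect = (p_variant - p_control) / se  # standardized true difference
    # Probability the test statistic lands beyond the critical value
    # (the lower tail is negligible when the true effect is positive).
    return norm.cdf(z_effect - z_crit)

for alpha in (0.10, 0.05, 0.01):
    power = power_two_proportions(0.05, 0.055, 20_000, alpha)
    print(f"alpha = {alpha:.2f} -> power = {power:.2f}")

For these inputs, power falls from roughly 0.72 at a 10% alpha to 0.61 at 5% and 0.37 at 1%, the same inverse relationship described above.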

Type II errors

A Type II error, or false negative, is a failure to reject a null hypothesis that is actually false. A Type II error occurs when your test does not find a significant improvement in your variation that does, in fact, exist.

Beta (β) is the probability of making a Type II error and has an inverse relationship with statistical power (1 – β). If 20% is the risk of committing a Type II error (β), then your power level is 80% (1.0 – 0.2 = 0.8). You can lower your risk of a false negative to 10% or 5%—for power levels of 90% or 95%, respectively.

Type II errors are controlled by your chosen power level: the higher the power level, the lower the probability of a Type II error. Because alpha and beta have an inverse relationship, running extremely low alphas (e.g. 0.001%) will, if all else is equal, vastly increase the risk of a Type II error.

Statistical power is a balancing act with trade-offs for each test. As Paul D. Ellis says, “A well thought out research design is one that assesses the relative risk of making each type of error, then strikes an appropriate balance between them.”

When it comes to statistical power, which variables affect that balance? Let’s take a look.

The variables that affect statistical power

When considering each variable that affects statistical power, remember: The primary goal is to control error rates. There are four levers you can pull:

  1. Sample size
  2. Minimum Effect of Interest (MEI, or Minimum Detectable Effect)
  3. Significance level (α)
  4. Desired power level (implied Type II error rate)
1. Sample Size

The 800-pound gorilla of statistical power is sample size. You can get a lot of things right by having a large enough sample size. The trick is to calculate a sample size that can adequately power your test, but not so large as to make the test run longer than necessary. (A longer test costs more and slows the rate of testing.)

You need enough visitors to each variation as well as to each segment you want to analyze.  Pre-test planning for sample size helps avoid underpowered tests; otherwise, you may not realize that you’re running too many variants or segments until it’s too late, leaving you with post-test groups that have low visitor counts.

Plan for a statistically significant result to take a reasonable amount of time—usually at least one full week or business cycle. A general guideline is to run tests for a minimum of two weeks but no more than four to avoid problems due to sample pollution and cookie deletion.

Establishing a minimum sample size and a pre-set time horizon avoids the common error of simply running a test until it generates a statistically significant difference, then stopping it (peeking).

2. Minimum Effect of Interest (MEI)

The Minimum Effect of Interest (MEI) is the magnitude (or size) of the difference in results you want to detect.

Smaller differences are more difficult to detect and require a larger sample size to retain the same power; effects of greater magnitude can be detected reliably with smaller sample sizes. Still, as Georgi Georgiev notes, those big “improvements” from small sample sizes may be unreliable:

The issue is that, usually, there was no proper stopping rule nor fixed sample size, thus the nominal p-values and confidence interval (CI) reported are meaningless. One can say the results were “cherry-picked” in some sense.

If there was a proper stopping rule or fixed sample size, then a 500% observed improvement from a very small sample size is likely to come with a 95% CI of say +5% to +995%: not greatly informative.

A great way to visualize the relationship between power and effect size is this illustration by Georgiev, where he likens power to a fishing net:

3. Statistical Significance

As Georgiev explained:

An observed test result is said to be statistically significant if it is very unlikely that we would observe such a result assuming the null hypothesis is true.

This then allows us to reason the other way and say that we have evidence against the null hypothesis to the extent to which such an extreme result or a more extreme one would not be observed, were the null true (the p-value).

That definition is often reduced to a simpler interpretation: If your split-test for two landing pages has a 95% confidence in favor of the variation, there’s only a 5% chance that the observed improvement resulted by chance—or a 95% likelihood that the difference is not due to random chance.

“Many, taking the strict meaning of ‘the observed improvement resulted by random chance,’ would scorn such a statement,” contended Georgiev. “We need to remember that what allows us to estimate these probabilities is the assumption the null is true.”

Five percent is a common starting level of significance in online testing and, as mentioned previously, is the probability of making a Type I error. Using a 5% alpha for your test means that you’re willing to accept a 5% probability that you have incorrectly rejected the null hypothesis.

If you lower your alpha from 5% to 1%, you are simultaneously increasing the probability of making a Type II error, assuming all else is equal. Increasing the probability of a Type II error reduces the power of your test.

4. Desired Power Level

With 80% power, you have a 20% probability of not being able to detect an actual difference for a given magnitude of interest. If 20% is too risky, you can lower this probability to 10%, 5%, or even 1%, which would increase your statistical power to 90%, 95%, or 99%, respectively.

Before thinking that you’ll solve all of your problems by running tests at 95% or 99% power, understand that each increase in power requires a corresponding increase in the sample size and the amount of time the test needs to run (time you could waste running a losing test—and losing sales—solely for an extra percentage point or two of statistical probability).

So how much power do you really need? A common starting point for the acceptable risk of false negatives in conversion optimization is 20%, which returns a power level of 80%.

There’s nothing definitive about an 80% power level, but the statistician Jacob Cohen suggests that 80% represents a reasonable balance between alpha and beta risk. To put it another way, according to Ellis, “studies should have no more than a 20% probability of making a Type II error.”

Ultimately, it’s a matter of:

  • How much risk you’re willing to take when it comes to missing a real improvement;
  • The minimum sample size necessary for each variation to achieve your desired power.
How to calculate statistical power for your test

Using a sample size calculator or G*Power, you can plug in your values to find out what’s required to run an adequately powered test. If you know three of the inputs, you can calculate the fourth.

In this case, using G*Power, we’ve concluded that we need a sample size of 681 visitors to each variation. This was calculated using our inputs of 80% power and a 5% alpha (95% significance). We knew our control had a 14% conversion rate and expected our variant to perform at 19%:

In the same manner, if we knew the sample size for each variation, the alpha, and the desired power level (say, 80%), we could find the MEI necessary to achieve that power—in this case, 19%:
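
If you prefer code to a GUI, a rough equivalent of that calculation can be sketched in Python with statsmodels. The inputs mirror the example: a 14% control conversion rate, 19% expected for the variant, 5% alpha, and 80% power. With a one-tailed test, the result lands within a few visitors of G*Power’s 681; the exact figure varies slightly with the approximation each tool uses.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h for conversion rates of 19% (variant) vs. 14% (control).
h = proportion_effectsize(0.19, 0.14)

n = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80,
                                 ratio=1.0, alternative='larger')
print(f"Required sample size per variation: {n:.0f}")  # ~678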

What if you can’t increase your sample size?

There will come a day when you need more power but increasing the sample size isn’t an option.  This might be due to a small segment within a test you’re currently running or low traffic to a page.

Say you plug your parameters into an A/B test calculator, and it requires a sample size of more than 8,000:


If you can’t reach that minimum—or it would take months to do so—one option is to increase the MEI. In this example, increasing the MEI from 10% to 25% reduces the sample size to 1,356 per variant:

But how often will you be able to hit a 25% MEI? And how much value will you miss looking only for a massive impact? A better option is usually to lower the confidence level to 90%—as long as you’re comfortable with a 10% chance of a Type I error:
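
Here’s a sketch of those two levers side by side, again with statsmodels. The 15% baseline conversion rate is an illustrative assumption (the screenshots above used their own figures), but the direction holds: raising the MEI shrinks the required sample far more than trimming the confidence level does.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.15  # illustrative control conversion rate

scenarios = [
    ("MEI 10%, 95% confidence", 0.10, 0.05),
    ("MEI 25%, 95% confidence", 0.25, 0.05),
    ("MEI 10%, 90% confidence", 0.10, 0.10),
]

for label, mei, alpha in scenarios:
    h = proportion_effectsize(baseline * (1 + mei), baseline)
    n = NormalIndPower().solve_power(effect_size=h, alpha=alpha,
                                     power=0.80, alternative='two-sided')
    print(f"{label}: {n:,.0f} visitors per variant")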

So where do you start? Georgiev conceded that, too often, CRO analysts “start with the sample size (test needs to be done by <semi-arbitrary number> of weeks) and then nudge the levers randomly until the output fits.”

Striking the right balance:

Conclusion

Statistical power helps you control errors, gives you greater confidence in your test results, and greatly improves your chance of detecting practically significant effects.

Take advantage of statistical power by following these suggestions:

  1. Run your tests for two to four weeks.
  2. Use a testing calculator (or G*Power) to ensure properly powered tests.
  3. Meet minimum sample size requirements.
  4. If necessary, test for bigger changes in effect.
  5. Use statistical significance only after meeting minimum sample size requirements.
  6. Plan adequate power for all variations and post-test segments.

The post Harnessing Statistical Power for Test Results You Can Trust appeared first on CXL.

Read Full Article

Good user research asks the right questions to the right people. If you fail on either account, you may make million-dollar decisions on bad data.

Leading questions are an easy way to poison your data. A leading question is “a question asked in a way that is intended to produce a desired answer.”

If you’ve worked in marketing or sales, you know leading questions well: They’re wonderfully effective at guiding consumers toward a “yes” for a product or service. (“Would you like to lose 10 lbs. without leaving the couch?!”)

That power is the same reason they’re so dangerous to user research, especially since a “leading question” can result from factors beyond the words in the question:

  • question topic;
  • order in which it’s asked;
  • answer options;
  • survey-wide aspects.

All have the potential to lead respondents to “a desired answer”—and ruin your data.

How a single phrase can shape responses

Early in Norman M. Bradburn’s classic, Asking Questions: The Definitive Guide to Questionnaire Design—for Marketing Research, Political Polls, and Social and Health Questionnaires, a core source for this post, the author illustrates how a subtle shift in language affects responses:

Two priests, a Dominican and a Jesuit, are discussing whether it is a sin to smoke and pray at the same time. After failing to reach a conclusion, each goes off to consult his respective superior. The next week they meet again.

The Dominican says, “Well, what did your superior say?”

The Jesuit responds, “He said it was all right.”

“That’s funny,” the Dominican replies. “My superior said it was a sin.”

The Jesuit says, “What did you ask him?”

The Dominican replies, “I asked him if it was all right to smoke while praying.”

“Oh,” says the Jesuit. “I asked my superior if it was all right to pray while smoking.”

The potential for a single phrase to change responses is real and common, as U.S. survey results suggest:

Example 1

  • in cases of incurable disease, doctors should be allowed to “assist the patient to commit suicide”: 51% agree
  • in cases of incurable disease, doctors should be allowed to “end the patient’s life by some painless means”: 70% agree

Example 2

  • “having a baby outside of marriage” is morally wrong: 36% agree
  • “an unmarried woman having a baby” is morally wrong: 26% agree

Example 3

  • “Do you think the United States should allow public speeches against democracy?” 21% agree
  • “Do you think the United States should forbid public speeches against democracy?” 39% agree

There may not be a correct wording for some questions, but every choice has an impact. And, as Bradburn notes, the complexities can compound:

  • Weakly held attitudes are more vulnerable to changes in phrasing.
  • Any change to phrasing from year-to-year—even a change that improves the accuracy of the responses—can invalidate comparisons from prior years.

To improve the accuracy of responses, you need to know how variables in your questions, answers, and surveys can—intentionally or not—lead respondents to a given answer.  

Leading questions

When you think of leading questions, you likely think first of the language—the words and phrasing—of those questions. But the question type, topic, and order can be equally influential.

Question language

Which of the following is a leading question?

  • “Does your employer or his representative resort to trickery in order to defraud you of your part of your earnings?”
  • “With regard to earnings, does your employer treat you fairly or unfairly?”

The former, a blatantly leading question, was how Karl Marx framed it in early surveys of workers. The latter, defanged of words like “trickery” and “defraud,” offers a more neutral question.

Intentionally leading questions such as Marx’s are unlikely to plague your survey. But subtle choices can be influential. For example:

  • Is the new design easier to use than the old one? The use of “new” and “old” cues respondent expectations, which are also primed to consider whether the changes make the website “easier” to use.
  • Was one design easier or harder to use than another? This phrasing eliminates the bias introduced by old vs. new and gives equal weight to a positive or negative experience.

Additionally, some words, though seemingly interchangeable, have connotations that skew results. For example, asking respondents about “welfare”—a politically charged topic—yields far different levels of support compared to “assistance for the poor”:

In the context of web design, it’s easy to think of similar examples: calling content an “ad” instead of “sponsored” or identifying an element as a “pop-up” rather than a “lightbox” may shape responses.

Similar to leading questions, two other types of questions can also bias response data:

Double-barreled questions

Conjunctions pose risks to questions. Both “and” and “or” often result in double-barreled questions, which force respondents to respond to two things simultaneously:

  • “Are you satisfied with the pay and benefits at your office?”
  • “How would you describe your experience trying to find blog or webinar content?”

While double-barreled questions are technically distinct from leading questions, the end result is similar: Poor phrasing leads respondents to provide inaccurate information.

Loaded questions

Unlike leading questions, which suggest the desired answer, loaded questions assume one:

  • “Was it easier to navigate the new design?” (leading)
  • “Which of the design improvements was your favorite?” (loaded)

A loaded question is, in effect, a “hard” lead: Respondents have no choice but to agree, and their answer merely justifies the agreement.

The choice of language becomes especially critical when questions tackle sensitive topics.

Question topic

“Although respondents are motivated to be ‘good respondents’ and to provide the information that is asked for,” writes Bradburn, “they are also motivated to be ‘good people.’”

In surveys, social desirability bias—respondents’ desire to be perceived as moral, smart, healthy, or any other valued characteristic—can transform seemingly innocuous questions into leading ones and, as a result, skew results. (That bias is stronger, according to Bradburn, when respondents answer questions in person or over the phone.)

Take a simple question: “On average, how much time do you spend on social media each day?” Social desirability bias may lead to underreporting—devoting whole evenings to scrolling through Facebook’s News Feed isn’t something to brag about.

To get more accurate responses, the language of the question needs to offer “outs”—ways to mitigate the impact of social desirability bias. The best way to do so depends on whether the concern is underreported behavior, overreported behavior, or a knowledge-based question.

Underreported behavior

Leading questions can, at times, improve the accuracy of responses. Adding an opening clause to normalize behavior can make respondents feel more comfortable:

  • “The average person spends more than two hours per day on social media. How much time, on average, do you spend?”

You can also use a loaded question (as long as you retain a “None” option) to suggest that “everyone is doing it”:

  • “How many cigarettes do you smoke each day?”

A third alternative is to add an authority’s perspective:

  • “The Mayo Clinic reports that red wine may be heart healthy. How often do you drink red wine?”
Overreported behavior

“Do you jog?” “How many books did you read this year?” Neither question is of great consequence, but both risk overreporting due to social desirability bias.

Countering that bias can be as simple as adding a short phrase to normalize a negative response:

  • “Do you happen to jog, or not?”
  • “How many books, if any, did you read this year?”

Similar phrases are useful for other question types. Consider the difference between these two:

  • “What do you like about…?”
  • “What, if anything, do you like about…?”

The latter, Estée Lauder learned, led respondents to choose “nothing” more often.

For similar questions, Bradburn offers another solution—provide reasons why someone may not perform the behavior:

The risk of overreporting increases when the behavior is uncommon. As a classic study demonstrated, a question as basic as “Do you own a library card?” vastly inflated reporting.

Owning a library card was socially desirable but, in the location of the study, a minority of the population had one. The relative rarity of card ownership inflated results far more than, say, a modern survey of seat belt usage would (since most people wear seat belts).

The need to provide an “out” applies to knowledge-based questions as well.

Knowledge-based questions

No one wants to come off as an idiot. That desire can lead to overreporting for knowledge-based questions, such as brand-awareness surveys. Respondents believe that they should know an answer, so they’re more likely to check yes.

Neutralize knowledge-based questions with phrases that suggest the knowledge is not expected:

  • “Do you happen to know…”
  • “As far as you know…”
  • “Can you recall offhand…”

Another strategy is to turn knowledge questions into opinion questions. The symptoms and likely age of onset for breast cancer are known, but reframing the knowledge-based questions as opinions frees respondents to give a candid account of their beliefs:

Question type

Unipolar questions—those that consider only one side of the response—can operate as leading questions due to acquiescence bias. Respondents, especially those with limited knowledge of a topic, are more likely to agree with a statement.

Unipolar questions or prompts, such as those in an agree-disagree format, don’t offer an alternative. For example, on the “strongly agree to strongly disagree” spectrum, how would you respond to the following prompt?

  • “Ads are the best way for news websites to earn money.”

How would you respond if that same prompt were changed to a forced-choice format?

  • “Ads are the best way for news websites to earn money.”

OR

  • “Paywalls are the best way for news websites to earn money.”

Acquiescence bias aside, the initial statement likely triggers negative feelings. (Who likes ads?) The second option that includes the primary alternative—one that requires you to open your wallet—may cause respondents to reconsider.

A Pew Research Center poll highlights the potential divide:

The advantage of bipolar questions is that they make respondents aware of options at both ends of the spectrum—rather than leading respondents to focus on one option in isolation. (Be cautious: A poorly written bipolar question may also introduce a false choice.)

Question order

The “foot-in-the-door” technique suggests that yeses beget yeses. If you start with a modest request then follow up later with a larger request, you increase your chances of succeeding with the larger request.

In the context of leading questions, it means that the answer to a previous question influences the answer to a subsequent one. Leading questions, in other words, result not just from content within a question but also the content preceding a question.

Here are two ways it can happen.

General vs. specific questions

“When a general question and a more specific-related question are asked together,” explains Bradburn, “the general question is affected by its position, whereas the more specific question is not.” The specific question, if it comes first, may lead respondents to their answer for the general one.

For example, take two questions on marketing knowledge, ordered from general to specific:

  • “How would you rate your marketing team’s overall knowledge?”
  • “How would you rate your marketing team’s knowledge of multivariate testing?”

The above order is more likely to generate accurate responses. If the order were reversed, the initial question on multivariate testing may affect the more general one in two ways:

  1. A lack of knowledge of multivariate testing may make respondents more pessimistic about their team’s overall knowledge.
  2. The general question may be misinterpreted as referring to everything except knowledge of multivariate testing.

A secondary benefit of asking the more general question first is that it makes responses comparable to other surveys (assuming the other surveys asked the more general question first as well).

Underreported behaviors

Question order can also help gather more accurate data by using the foot-in-the-door technique—leading the respondent to admit a socially undesirable behavior.

Bradburn shares an example from a survey seeking to learn about shoplifting. The order of questions starts with more serious criminal behavior to make the real target of the survey, shoplifting, appear less deviant:

In some instances, the only way to eliminate a leading question is to “make the biased choice that is implicit in the question wording explicit in a list of choices that include alternatives.”

Answers’ role in leading questions

To some extent, the distinction between question formulation and techniques for recording answers is an artificial one, because the form of the question often dictates the most appropriate technique for recording the answer—that is, some questions take on their meaning by their response categories. – Norman M. Bradburn

Even with a perfectly framed question, the available answer choices—and the order in which they appear—can bias responses, in effect reverse-engineering a leading question. The biggest difference in answer formats is between open-ended and closed-ended responses.

Open-ended responses

Open-ended responses are often preferred for user research—rather than boxing users into a set of predetermined answers, they provide an opportunity for people to communicate, in their own words, what’s most important to them.

That often makes it easier to avoid leading questions: You can ask broader questions that don’t suggest norms or limit responses to four or five choices. On the other hand, questions with open-ended responses assume that the respondent considers all options.

For example, when Pew asked respondents about the most important issues for choosing a president, the economy became the dominant issue only in the closed-ended version:

Whether closed-ended responses are helpful reminders—perhaps the economy was the most influential factor but simply hard to recall—or distorting elements is unclear.

One solution is to run an open-ended survey with a pilot group (or for the first year of an annual survey), then use those responses to create a closed-ended version that accurately reflects the range of responses.

Open-ended questions are also useful when you want to avoid biasing respondents by normalizing a range. Respondents, Bradburn notes, tend to avoid extreme answers, so they’re less likely to choose the top-end of a range for undesirable behaviors or the bottom end for desirable behaviors.

The open-ended strategy may work well when respondents aren’t sure of the “normal” range, like the frequency with which people eat beef for dinner:

Open-ended responses do, however, require more time to code answers and, if coding is done improperly, can introduce errors. A biased coding effort can undermine even the most neutral set of questions and answers.

There’s another reason that, in some instances, closed-ended responses have benefits: They can intentionally normalize behavior or attitudes that respondents may be loath to report.

Closed-ended responses

For better and worse, closed-ended responses set boundaries and suggest norms. Those delineations, in turn, influence responses to questions:

  • Yes-or-no response options may force a false degree of certainty. A Likert scale can more accurately reflect likelihood (e.g. “How likely are you to buy this product?” instead of “Would you buy this product?”)
  • Numerical ranges suggest averages and extremes. The social desirability of an attitude or behavior may shift toward the end of the scale favorable to the responder.
  • A scale with an even number of options forces an opinion. An odd number of options—with a neutral response in the middle—allows respondents to sit on the fence.

While the establishment of norms represents a risk, it’s also an opportunity. If you’re concerned about underreporting, a range with an artificially high top end can normalize behavior. (The inverse is also true.)

Consider the implicit judgment in these two potential response ranges:

An “average” watcher in the left-hand version qualifies as an “extreme” viewer on the right. If you’re looking to get honest feedback about a socially undesirable behavior (even a benign one like binge-watching Netflix), the version on the left likely puts more respondents at ease.

Knowledge questions

For knowledge questions with closed-ended responses, “sleeper answers” and an “I don’t know” option can catch and prevent inaccurate answers that respondents may feel compelled to select.

Sleeper answers. Sleeper answers can help manage social desirability bias. If you’re worried that respondents may claim to recognize concepts or brands that, in fact, they do not, one solution is to add a false choice:

If 20% of respondents choose Duck Peninsula, awareness of other brands, contends Bradburn, may be inflated to a similar degree.

“I don’t know.” Keeping an “I don’t know” response available for knowledge questions helps normalize a null response and reduces wild guesses.

The order in which those answers appear matters as well.

Answer orders

For surveys in which respondents can’t see the answers—those over the phone or in face-to-face interviews—the serial position effect plays a role. The limitations of human attention and memory make early response options more likely to be selected.

One solution to combat the serial position effect—as well as any potential effect from response order—is to vary the order of the..

Read Full Article

No other marketing channel builds lifelong relationships like email. It’s a prime reason that email is the preferred marketing channel for customer acquisition and customer retention (80% and 81%, respectively).

Rich media is one of email’s greatest advantages, transforming email—once plain-text messages suitable only for interdepartmental communication—into a robust marketing channel.

Yet, rich media email is a double-edged sword: Execute it correctly, and it adds visual delight. Mess up, and your email may not be delivered or convey your message.

This post walks you through each type of rich media in email and how to deploy it for maximum impact, deliverability, and accessibility.

Which types of rich media are available for email?

Rich media “includes advanced features like video, audio, or other elements that encourage viewers to interact and engage with the content.” In email, rich media includes static images, animated GIFs, videos, audio snippets, and even CSS animations.

Rich media elements can break up the monotony of text blocks, and, in certain situations, explain and persuade more effectively than words alone.

Guidelines and limitations for rich media in email

Regardless of the rich media type, there are overarching guidelines for including rich media in your emails:

Image dimensions. Emails are still designed for a 600px width, so to avoid images or GIFs getting clipped, keep them within 480–500px. (Retain the remaining area for padding.) You can create your images/videos with a width of 500px or insert them within a 500px container.

File size. Images and other rich media are downloaded from external servers when you open the email. Large file sizes slow download time and burn through mobile data.

Ideally, images and GIFs should be as small as possible. Recommendations vary, but we suggest no more than 400 KB. If you can retain image quality, smaller is better. Videos should not be longer than 1 minute (to limit potential data usage and maintain the user’s attention span).

Gmail is notorious for clipping email messages if the underlying code has a file size greater than 102 KB. Since CSS3 animations involve additional lines of code, ensure that your HTML template does not exceed the limit.
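
A simple pre-send check can catch this. Here’s a minimal sketch in Python, assuming your compiled HTML lives in a file (“template.html” is a hypothetical path):

GMAIL_CLIP_LIMIT = 102 * 1024  # bytes of HTML source; externally hosted media doesn't count

with open("template.html", "rb") as f:
    size = len(f.read())

if size > GMAIL_CLIP_LIMIT:
    print(f"Warning: template is {size / 1024:.0f} KB; Gmail may clip this email.")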

Compatibility. Modern email clients support most rich media formats. For non-compliant email clients such as Outlook (which doesn’t support GIFs, background images, or videos), you need an appropriate fallback. More on that later.

Here are six types of rich media and how to use them in emails.

1. Static images in emails

Any form of imagery—photos, illustrations, etc.—without movement is categorized as a still image, the basic form of rich media in emails. JPG (JPEG) and PNG are two common file formats to serve static images.

JPGs have smaller file sizes but also lose some detail during compression; PNG files retain the details at the cost of a larger file size. For images that have sharp contrasts (e.g. drawings) or need to preserve embedded text, PNG is usually the better choice.

Static images are a great way to represent ideas graphically instead of using text. Instead of subscribers reading about a car design, images of the car from different angles can help them understand.

Consistency in image use can also help set your brand identity. In the email below from Craftsman, the same on-brand shade of red ties together the background, hero image, and product images:

You can also add images as backgrounds for your email copy. A background image improves the visual impact and—if chosen well—highlights the headline.

In the email below from Quip, the hero image adds visual stimulus without detracting from or competing with the headline:

Limitations

For security reasons, most email clients block images from unknown senders. Unless explicit permission is given by your subscriber to auto-download images, the email they’ll see will look like the one below:

One way to manage this limitation is ALT text. Even if the image doesn’t load, the subscriber can read what the image represents; ALT text also improves accessibility for email recipients who rely on screen-readers.

In the above email by Rubi, the left section shows the email with images disabled and appropriate ALT text.

Going a step further, you can create pixel art to reveal images even when image files are disabled. The trick is to slice the image so that each section can be colored or stylized, creating visuals reminiscent of 8-bit art.

In an email by Pizza Express, the cinemagraph is sliced so that a champagne flute is still visible as pixel art:

Even though a picture can help visualize your message, it cannot replace your copy. Ideally, you should maintain an 80:20 text-to-image ratio in your email layout to avoid getting flagged as spam. (Using a single image as the entire email content also raises the suspicion that the email is spam.)

Even if spam filters aren’t a consideration, using too many images could cause subscribers with image blockers to miss the message.

2. Animated GIFs in emails

By changing frames sequentially at a pre-set duration, GIFs exploit a phenomenon called “persistence of vision” to trick the eye into seeing animation.

GIFs showcase movements and complex concepts more easily than text or static images, which can help increase conversions. Dell managed to score a 109% conversion increase by including a GIF that showed the movement of their convertible laptops:

Moreover, since a GIF changes between frames, you can experiment with different frame durations. In the below example by Kate Spade, different frames display different colors of the same product:

In another example, by Moo Print, a static image would have sufficed, but the motion of the hand immediately draws your attention:

Sample use cases
  • Showcase variations of a product in the same space.
  • Demonstrate how your product works.
  • Create a slideshow to show multiple images.
  • Provide a 360-degree view of a product.
Limitations

Owing to rendering-engine limitations, Outlook and Lotus Notes do not support GIF animations—only the first frame will be displayed. This makes GIF usage for B2B emails tricky; a larger segment of your audience likely uses Outlook. Either keep vital information in the first frame or use the GIF to convey non-essential information.

The more frames in a GIF, the larger the file size. This is a double whammy: You’ll increase data use for subscribers on limited data plans, and the subscriber will see only a blank box until the GIF loads.

3. Cinemagraphs in emails

Cinemagraphs are GIFs with a twist—only one element in the background is animated, while the rest of the image is static. As this example from Netflix shows, cinemagraphs can generate a striking visual impact:

Sample use cases
  • Media companies can send out email that better matches the aesthetic of the final product, like a television series or film.
  • Brands rolling out new products can add a dramatic effect.
Limitations

Cinemagraphs share the same limitations as other GIFs.

4. Videos in emails

Videos maximize the amount of information you convey in limited email real estate. A short instructional video clip can convey more information than an ebook.

As elsewhere on the Internet, autoplaying videos is not an ideal user experience. (Including a click-to-play button also allows you to measure user interest in email videos.)

Adding videos to emails requires uploading your video to a third-party site (e.g. YouTube, Wistia, Vimeo, etc.) and embedding the URL in your email.

Sample use cases
  • A short intro video in an onboarding email can help set expectations for subscribers.
  • An explainer video can demonstrate how to use a physical product.
  • Real estate agencies, tourist companies, or universities can offer virtual tours.
  • Companies can humanize their brand with messages from founders or employees.
Limitations

Embedded videos are an HTML5 property and supported only by a few email clients, such as Thunderbird, Apple Mail, iOS mail, and Android native (not Gmail mobile).

You need to use an animated GIF or image with a link to the video as a fallback. Layer it beneath the video frame and, using conditional coding, make it visible only to non-supporting email clients.

As with large images or GIFs, videos add to the amount of data it costs customers to engage with your email.

5. Audio in emails

Background music sets the tone in movies; it can do the same in email—if used appropriately. (There’s a reason websites no longer autoplay music when you land on the homepage…)

In the below email, an instrumental version of the song “Closer” by The Chainsmokers plays in the background while an animated GIF is on a loop.

To help set the tone, the email below adds spooky sounds. Experience it for yourself here.

Sample use cases
  • If you’re going all in on “tone,” audio adds another element.
  • For audio-related promotions (e.g. music events), an audio clip may make sense.
Limitations

Email audio is supported only in Apple Mail, iOS mail, Native Android, and Samsung Mail. While this may not be a major concern—the email visuals would still render—subscribers may not notice the audio unless you specify its existence.

Audio clips add file size to the email, though a small audio clip played on a loop or hosted on a third-party platform can reduce the download time.

6. CSS animation in emails

With the help of JavaScript and (less and less often) Flash, websites have visually impressive animations and transitions to capture attention. Email clients don’t support JavaScript or Flash due to security reasons.

Yet with the adoption of CSS3 animations, email developers can replicate certain web-based interactivity in email.

Keyframe animations, made possible by CSS, transform and move email elements between defined keyframes, creating the illusion of animation:

In the above email by Penguin Random House, the bus remains fixed while the background scrolls, giving an illusion of the bus moving across a fixed road. (Image source)
The above email, titled “Super Mail Quest,” is an entire game within the email: The scenes change based on user choices. (You can try your luck here.)
The above email replicates a bike lock, revealing a coupon code when a user sets the correct combination. (Image source)
Limitations

Email clients are gradually supporting CSS3, but only a handful fully support all CSS3 effects. This is manageable as long as you provide an appropriate fallback for the non-supporting email clients. Adding a “View Online” link can ensure that everyone gets a (near) uniform experience when interacting with your email.

The table below lists email clients and the interactivity they support:

Personalization: The next level in rich media email

A survey of more than 7,000 consumers by Salesforce found that consumers are increasingly comfortable with personalization:

  • 57% of consumers are willing to share personal data in exchange for personalized offers or discounts;
  • 52% would share personal data in exchange for product recommendations;
  • 53% would do the same for personalized shopping experiences.

Yet while 58% of email marketers already use some form of personalization, few go beyond merge tags. In addition to personalizing email copy, personalized images can help strengthen the perception of tailored messaging:

In the above email by Lucozade, the hero image is personalized to include the name of the recipient.
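
Personalized images are typically generated per recipient at send time. As a rough illustration (not Lucozade’s actual pipeline), here’s how you might stamp a recipient’s name onto a hero image with Python’s Pillow library; the base image, font file, and coordinates are all hypothetical:

from PIL import Image, ImageDraw, ImageFont

def personalize_hero(first_name: str) -> str:
    """Render a recipient's first name onto a copy of the base hero image."""
    hero = Image.open("hero_base.png")              # hypothetical base image
    draw = ImageDraw.Draw(hero)
    font = ImageFont.truetype("BrandFont.ttf", 64)  # hypothetical brand font
    draw.text((40, 120), f"Hey {first_name}!", font=font, fill="white")
    out_path = f"hero_{first_name.lower()}.png"
    hero.save(out_path)
    return out_path

print(personalize_hero("Maria"))  # -> hero_maria.png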

Conclusion

Rich media is integral to a visually attractive email. All rich media come with some costs—slightly longer load times, more data usage. But proper deployment can keep emails on brand, increase user engagement, and help differentiate your campaigns in a crowded inbox.

Some of the most advanced features—like embedded videos and CSS3 animations—still require appropriate fallbacks, but email client support will only grow.

The post Rich Media and Email: Maximize Impact (and Deliverability) appeared first on CXL.

Read Full Article

Your current customers will never be excited about paying more. But that’s not why raising prices is so difficult.

Instead, poor planning is to blame: Companies neglect to plan a price increase until there’s a financial squeeze or, for the thirtieth time, a customer confides that, “You know, you really ought to charge more.”

What typically follows is a hasty price bump that’s detached from product value and communicated incoherently. To raise prices effectively, you need a strategy that limits risks—and maximizes rewards—of a price increase.

What’s at stake: The exponential impact of a price increase

Many focus on the risks of a price increase: What if you lose customers? What if it’s harder to close sales or generate leads? But the risks of not increasing prices may be equally large—or larger.

As Price Intelligently shows, static pricing gradually widens the gap between price and value, for you and your customers:

With static pricing, product value outpaces the cost to consumers. (Image source)

A fixed pricing structure not only reduces potential revenue—and, therefore, the money available to invest back into the product—but also impacts perception: Increasingly, customers will see your product as the “cheap” option.

Over time, the potential gain (or loss) in revenue can have an exponential impact. An oft-cited McKinsey report on the S&P 1500 suggests that a 1% increase in price can yield an 8% increase in profits:

A 1% increase in price can yield an 8% increase in profits, according to McKinsey. (Image source)

That impact outpaces other business changes:

Pricing right is the fastest and most effective way for managers to increase profits [. . .] a price rise of 1 percent, if volumes remained stable, would generate an 8 percent increase in operating profits (Exhibit 1)—an impact nearly 50 percent greater than that of a 1 percent fall in variable costs [. . .] and more than three times greater than the impact of a 1 percent increase in volume.
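
The arithmetic behind that comparison is easy to reproduce. The cost structure below is an illustrative assumption, chosen so the baseline operating margin is 12.5% (not McKinsey’s exact figures):

# Illustrative income statement per 100 units of revenue.
revenue = 100.0
variable_costs = 68.0
fixed_costs = 19.5
profit = revenue - variable_costs - fixed_costs  # 12.5, a 12.5% operating margin

def lift(new_profit):
    """Percentage change in operating profit."""
    return (new_profit - profit) / profit * 100

# A 1% price rise with stable volume flows straight to profit.
print(f"+1% price:          {lift(profit + revenue * 0.01):.1f}%")
# A 1% fall in variable costs.
print(f"-1% variable costs: {lift(profit + variable_costs * 0.01):.1f}%")
# A 1% rise in volume adds revenue minus the variable cost to serve it.
print(f"+1% volume:         {lift(profit + (revenue - variable_costs) * 0.01):.1f}%")

On these assumptions, a 1% price rise lifts profits by 8.0%, nearly 50 percent more than a 1% fall in variable costs (5.4%) and more than three times a 1% rise in volume (2.6%), matching the quoted exhibit.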

Some have challenged the universal applicability of that study, but other studies reinforce the outsized impact that a price increase has on profitability:

A third study shows the potential impact of more aggressive price increases: A 5% price rise increased profitability by 22%, more than equivalent changes to other “tools of operational management”:

If a 1% price rise can increase profits by 8–11% and a 5% rise can increase them by 22%, what about a 10% increase? Or 20%? When do returns diminish?

How much can you expect to increase prices?

The answer, of course, depends on your product and buyer (including which buyer among your many). Research comparing price increases to consumer happiness shows an unsurprising trend: the greater the price increase, the greater the impact on customer happiness.

(Image source)

The lesson, as Price Intelligently points out, is not that price increases are bad but that incremental rises—thoughtfully planned, effectively communicated—reduce the risk that you’ll upset loyal customers.

One reason companies mistakenly make a big jump? A low historical price.

The starting point for every price increase

Every proposed price increase is relative to the old one. The contrast between the two impacts consumer perception. Applied more broadly, this principle is known as anchoring.

Anchoring is often used in pricing pages that pitch several options, with the highest option serving as an “anchor” to make the others seem more reasonable:

For price increases, the anchor is the past price. And a low starting price limits your ability to raise it—even if you charge far less than your competitors.

As Price Intelligently explains:

If you anchor people at a low price and raise it later, then no one will see it as getting new value. They’ll see it as gouging.

You will lose the customers you win when you try to raise prices because they will have been acquired on faulty premises.

You can catch up with competitors who charge more, but you may need to do so over time. The challenge is most acute for companies with a freemium model: Users expect to pay $0, and starting to charge them—even a nominal amount—requires clearing a high psychological hurdle.

In addition to below-market pricing, there are other signs that it may be time to increase your rates.    

5 signs that it’s time to raise prices

1. Six months have passed.

Price Intelligently recommends one to two price changes each year:

The companies we’ve seen with the most success with revenue and adoption are reviewing pricing at least once per quarter and making tweaks or changes every 6 to 9 months.

Not every “pricing” change involves a straightforward increase: You can expand or remove pricing tiers, reduce or eliminate discounts, or make other changes, as detailed later.

2. You’ve added new features that consumers value.

New features should increase the perceived value of your product. As you roll out new features—and, in turn, more value—your pricing should keep pace.

As OpenView’s Kyle Poyar explains:

When you invest in creating new features and driving more usage of your product, it creates an opportunity to extract some of that added value in the form of higher prices.

Feature value should be determined based on consumer usage and feedback, not a company’s perception (or how much they’ve sunk into development). Customers pay more to get more value, not to increase your profits.

3. Everyone signs up.

New customers can help determine whether current customers would tolerate a price increase.

As Justin Gray writes, a 100% close rate isn’t cause for celebration—it means you’re not charging enough, especially if there’s no pushback or negotiation on pricing throughout the sales process.

If customers are surprised—even embarrassed—by how little you charge, you should charge more. For his consulting work, Karl Sakas targets a 60% close rate. He’s used price increases to bring that rate down from 80%:

During coaching sales calls, I’m seeing a somewhat lower close rate than before—in line with my 60% target, and less than the 80% I was seeing before. This confirms that I was under-charging. (If the close rate fell below 50%, I’d have over-reached on the price increase.)

4. You create ROI well beyond what you charge.

Successful price increases depend on matching cost and value. If you know, for instance, that your software saves a company hundreds of hours of technical labor—but costs just $49 per month—you can make a strong case for a price increase.

OpenView suggests that SaaS companies can capture 10–20% of their economic value, a baseline figure of what you may be able to charge if you can demonstrate ROI clearly.

5. You need the revenue.

This is the worst-case scenario: You need to reverse-engineer a consumer benefit to meet business goals. It’s also one of the most common scenarios.

Because the starting point is inverted—you’re making changes based on a company need, not a consumer benefit—be exceedingly cautious with the scope of the price increase and how you communicate it.

Transparency is often the best policy—such as when Rad Power Bikes announced a price increase due to a 25% rise in import tariffs—but consumers are less interested in your problems than those you solve for them.

Ultimately, price increases have a broad impact, not just for consumers but throughout your company.

The company-wide impact—and unexpected benefits—of raising prices  

Even within your company, higher prices can be a stressor. They may:

  • Reduce close rates for sales staff, even if those reduced rates are desirable.
  • Make it harder for marketing teams to hit lead targets.
  • Temporarily reduce revenue, which small companies may not survive.  

Yet an announcement of forthcoming price increases can have unexpected benefits:

  • Existing customers are motivated to upgrade or expand their relationship before the change takes place—higher prices add urgency to subscription upgrades and renewals.
  • Current leads are motivated to purchase now.

Regular, well-communicated price increases can serve as a slow burn of urgency—every price has a half-life, and the product will always be cheaper this year than the next. (Some companies, Salesforce included, build annual price increases into their service-level agreements, often in the range of 5–7%.)

For agencies or consultants with limited hours to sell, a price increase can replace older customers with new ones who are happy to pay a higher rate. That’s exactly what Sakas experienced:

As year-end approached, I created a special offer for current clients: If they pre-paid for coaching in 2018 before the end of 2017, they’d keep the old rate (dating back to late 2015) for those pre-paid months in 2018, and then they’d pay the higher monthly price after that.

Nearly everyone opted to pre-pay at least a couple months, and a couple pre-paid an entire year.

The clients who didn’t pre-pay were typically ones who weren’t maximizing their coaching support. During the process, I’d recommended they switch to on-demand support (albeit at a less-responsive SLA, since they weren’t making the same commitment).

This dropoff created slots for new coaching clients at the higher price point, which I’ve since filled.

Sakas also noted that, for some clients, spending the extra money in the current calendar year allowed them to claim a tax deduction that effectively discounted the new price by 45–60%.

Whatever you do, don’t be Netflix.

Case study: Netflix’s “lost year” and a shot at redemption

In 2011, Netflix screwed up. Their well-chronicled debacle briefly split their nascent streaming service from the “cash cow” of DVD delivery. The separation of services effectively raised prices for consumers by 60%.

That poorly planned price increase cost Netflix 800,000 subscribers and plunged their stock price by 77% over four months. The failure resulted from several mistakes:

  • CEO Reed Hastings ignored others’ advice.
  • Netflix mistook a consumer preference for streaming content for a willingness to eliminate a DVD-based option.
  • The changes—and subsequent clarifications—were poorly communicated.

Netflix has since raised prices, with far less outrage, on several occasions, including last week. Every price change still makes headlines, but the narrative has shifted.

Instead of price increases being reported as cash-grabs or attempts to shove consumers into the future, they’re explained as necessary investments in the consumer experience.

Take, for example, Netflix’s statement about their most recent increase:

We change pricing from time to time as we continue investing in great entertainment and improving the overall Netflix experience for the benefit of our members.

Media outlets have covered the price increase exactly as Netflix wants. The write-ups are pseudo-advertisements, showcasing the most popular programs and reinforcing Netflix’s commitment to consumers.

The lesson was hard-learned but learned nonetheless. Here’s how to skip the painful part and get it right the first time.

The process of raising prices for existing customers

A solid process for increasing prices limits risk. If you own the project, and someone should, “your job is to hedge as much risk as you possibly can going into a live test.”

Price Intelligently’s process for changing prices spans several weeks:  

Price Intelligently notes that most companies fall short during the middle phase. (Image source)

Below, we’ve combined their process with others that support well-planned, well-executed price increases.

1. Conduct initial research

Research your past price increases. What happened when you did? How many subscribers or repeat buyers did you lose? If lifetime value rose and customer acquisition costs decreased, Jeanne Hopkins argues, you probably got it right.

Historical research suggests what to repeat or avoid, and helps you gauge consumer expectations—years of single-digit price increases followed by a double-digit rise, for example, may not go over well.
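A hypothetical check: if lifetime value climbed from $1,200 to $1,500 after your last increase while customer acquisition cost held at $400, your LTV:CAC ratio improved from 3:1 to 3.75:1, evidence that the increase held.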

Compare yourself to competitors. Will a price increase move you into a new tier? Are you a mid-range provider that’s trying to get into the high-end market? You’ll need to adjust your value proposition and communicate that shift appropriately. (See Step 4.)

On the flip side: Are you the value option? A price increase may motivate consumers to consider newly price-competitive alternatives.

Ask the right questions. Dan Turchin, VP of Growth Strategy at BigPanda, takes an open-ended approach to consumer research when preparing for a price increase:

We’re trying to have enough conversations with customers to get feedback on how they associate value with BigPanda and how to translate that into the most simple, transparent, logical way to consume the value.

Those conversations may come from product advocates in Slack groups or via customer surveys. In addition to open-ended responses, you can conduct quantitative research on price sensitivity.

Determine price sensitivity. Price sensitivity is

a measure of the impact of price points on consumer purchasing behaviors, or in other words, it’s the percentage of sales you will lose or gain at any particular price point.

Because consumers are poor judges of how much they’re willing to pay—just as they’re poor estimators of why they make decisions—there are two primary ways to gauge price sensitivity:

1. Price Laddering. On a scale from 1 to 10, customers rate their willingness to buy a product at a particular price. If they answer below a 7 or 8, the price is lowered, and they’re asked the same question again.

While the process can identify a price point at which consumers say they’re likely to buy, it can also introduce errors: Respondents may view the exercise as negotiation and suggest they’re willing to pay less than they actually are.
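For illustration, the laddering loop is simple enough to sketch in a few lines of Python; the function and parameter names below are ours, not from any particular survey tool:

```python
def price_ladder(ask, start_price, step=5.0, floor=0.0, threshold=7):
    """Lower the price until the respondent's willingness-to-buy
    rating (1-10) reaches the threshold; 7-8 is the usual cutoff.

    `ask` is a callback that presents a price and returns the rating.
    """
    price = start_price
    while price > floor:
        if ask(price) >= threshold:
            return price      # respondent says they'd likely buy here
        price -= step         # lower the price and ask again
    return floor              # threshold never reached

# Example: a respondent who rates 9 at $30 or less (and 4 above that)
# ladders down from $50 to $30.
assert price_ladder(lambda p: 9 if p <= 30 else 4, start_price=50) == 30
```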

2. Van Westendorp Price Sensitivity Meter. Respondents are asked to price a product, with each answer having one of four implications:

1. At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)

2. At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? (Too cheap)

3. At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would have to give some thought to buying it? (Expensive/High Side)

4. At what price would you consider the product to be a bargain—a great buy for the money? (Cheap/Good Value)

The resulting data charts potential price points:

(Image source)

The Van Westendorp Price Sensitivity Meter, unlike laddering, also reveals the price point at which consumers may view your product as cheap—where brand equity begins to erode.
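In practice, analysts plot the four response distributions as cumulative curves and read candidate prices off their intersections. A minimal sketch of one common calculation, using made-up response data, might look like this:

```python
import numpy as np

# Hypothetical survey responses (in dollars) to two of the questions.
responses = {
    "too_expensive": [40, 45, 50, 55, 60, 70],
    "too_cheap":     [5, 8, 10, 12, 15, 20],
}

grid = np.arange(0, 71)  # candidate price points

# Share of respondents who find each grid price too expensive
# (rises with price) or too cheap (falls with price).
too_expensive = np.array(
    [(np.array(responses["too_expensive"]) <= p).mean() for p in grid])
too_cheap = np.array(
    [(np.array(responses["too_cheap"]) >= p).mean() for p in grid])

# One common convention: the "optimal price point" sits where the
# too-expensive and too-cheap curves cross.
opp = grid[np.argmin(np.abs(too_expensive - too_cheap))]
print(f"Approximate optimal price point: ${opp}")
```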

Whichever method you choose, gather price sensitivity data for different cohorts—your enterprise SaaS clients may have a lower sensitivity to changes than your small business clients, or the structure of your changes (e.g. user limit) may affect one group significantly more than another.

Once you’ve done the research, you can work on your strategy.

2. Develop a strategy

Your pricing strategy identifies which pricing levers you plan to pull and by how much. Initial research—on the product features consumers value most and what they’re willing to pay for them—should help guide the conversation.

If you’re simply increasing the sticker price, the guiding principle is to match value to cost. Finding that balance is easier if you have data on the ROI from your services; failing that, use qualitative responses and price sensitivity research.

As detailed earlier, however, there are other, indirect ways to raise prices:

  • Increase restrictions on a freemium or free-trial version. The New York Times has reduced the monthly number of free articles available to consumers from 20 to 10 to 5, essentially creating a new paid tier for readers of more than 5 articles.
  • Shift benefits from one tier to another. Removing benefits from existing subscribers (by shifting them to a higher-priced tier) is risky but, nonetheless, an option. It may be easier to do if an entry-level tier has an exceptionally low price point or a feature gets a massive upgrade.
  • Use a new feature to create a new tier...

Inbound marketing. Conversational marketing. The subscription economy. Growth hacking. Each term, now ubiquitous, had humble origins but a profound—and profitable—impact.

The phrases have defined brands. Those brands, in turn, have nurtured and sustained the intellectual capital of the phrase.

So how do you create one? Reverse engineering the origins of brand-owned terms doesn’t yield a breakdown of the step-by-step planning that led from term identification to widespread acceptance. That planning didn’t happen.

Instead, the process is best understood as a narrative: each term born of deep expertise, attuned to the language of consumers, tested in various forms, and told persistently yet patiently—until it clicked.

Brand-owned terms: The long play

In retrospect, brand-owned terms feel inevitable: What could modern marketing be other than inbound marketing?

The reality, as detailed in later sections, is more complicated. The process isn’t formulaic, and it may take years to separate success from failure. What is clear is the close tie between the rise of term usage and brand awareness:

Brand     Brand-Owned Term             Correlation in Search Interest
HubSpot   “inbound marketing”          0.82
Drift     “conversational marketing”   0.70
Zuora     “subscription economy”       0.92

Note: Omitted from the table is Sean Ellis’ “growth hacking,” which became a well-known term before it was associated with a brand.
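For reference, a correlation like those in the table can be computed from two interest-over-time series in a few lines; the file name and column labels below are placeholders for a cleaned Google Trends export:

```python
import pandas as pd

# Placeholder: weekly Google Trends data with 0-100 search interest
# for the brand and for the brand-owned term.
trends = pd.read_csv("trends.csv")  # columns: week, brand, term

# Pearson correlation between the two series, the statistic
# reported in the table above.
print(round(trends["brand"].corr(trends["term"]), 2))
```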

Getting there requires patience, internal buy-in, and supporting content. Animalz’s Jimmy Daly, when describing content that could support a brand-owned term, uses the phrase “movement-first content.”

In contrast to “distribution-first content,” movement-first content is a conscious sacrifice of reach: “it isn’t beholden to any SEO tactics like word count and keyword density.” Instead, it is opinionated, often contrarian content that “has to pack a very real punch.”

Movement-first content, at least initially, prioritizes potential impact over reach. (Image source)

Notably, that investment in movement-first content may largely follow, not precede, the growth of a brand-term.

Here are the paths that four companies took—and the implications for those seeking to do the same.

HubSpot and “inbound marketing”

Lessons:

  1. Quick identification of the term doesn’t translate into quick adoption.
  2. Picking a term with a well-understood foil (“outbound marketing”) makes it easier to understand and differentiate.
  3. An eponymous book (e.g. Inbound Marketing) can help push a term into the mainstream.

As detailed in a co-authored book, Brian Halligan, a recent MIT grad in 2005, observed startups failing with “‘tried-and-true’ marketing techniques that he had seen work throughout his career; techniques such as trade shows, telemarketing, e-mail blasting and advertising.”

That realization coincided with his observation that Dharmesh Shah, Halligan’s future co-founder, was earning visibility with a personal blog:

We started describing the way companies were traditionally marketing as “outbound marketing” and the way Dharmesh marketed OnStartups.com as “inbound marketing.” Our conclusion was that interruption-based, outbound marketing techniques were fundamentally broken.

The term “inbound marketing” benefitted from its natural contrast with outbound marketing. (Historically, both terms had narrower definitions related to telemarketing, which differentiated outbound versus inbound calls.) Additionally, Shah told me, “It was more specific than ‘Internet Marketing.’ More than just ‘Content Marketing.’”

While the term was new, many of the ideas aligned with “permission marketing,” which Seth Godin brought into marketers’ consciousness in 1999 and contrasted with “interruption marketing.” By the mid-2000s, however, its influence had already begun to wane:

The waning interest in “permission marketing” opened up intellectual space for “inbound marketing.”

That created intellectual space for Halligan and Shah, whose concepts of “inbound” and “outbound,” unlike Godin’s, were not entirely foreign to marketers.

HubSpot started in 2006, yet inbound marketing wasn’t mentioned on the homepage or within the “thesis” section of the early website. HubSpot finally shifted its language from “internet marketing”—the SEO-friendly choice—to “inbound marketing” in December 2007:   

The HubSpot homepage in late November 2007—with no mention of “inbound marketing.” In December 2007, HubSpot went all-in with inbound marketing.

Still, even in February 2008, Shah accepted that inbound marketing was an unfamiliar concept to his readers, mentioning the term for the first time on his blog.

The following year, in 2009, Shah and Halligan coauthored Inbound Marketing. By that time, interest in “inbound marketing”—and HubSpot—had begun to climb, with search interest in the brand-owned term rising more sharply than the brand for at least a year.

The widespread adoption of the term, Shah noted, benefited from a conscious decision not to own its intellectual rights:

We made the deliberate decision not to trademark it or try to “own” the term. We wanted the idea to spread (and the term to be used far and wide). So, in order to foster that, we let others use it as an industry term instead of something HubSpot specific.

In contrast to Shah and Halligan, who settled on a term early in the process, Drift refined its language for years.

Drift and “conversational marketing”

Lessons:

  1. Testing and refining a term until it catches on can work.
  2. An early sign of success is customer adoption of the language.
  3. You can postpone investments in collateral until a term takes off.

“The most underrated piece of marketing advice is that you’ve got to name something,” Drift’s Dave Gerhardt told Leadfeeder. “If you don’t name something, then it doesn’t become real.”

For Drift, founded in 2014, that was easier said than done: Drift didn’t go all-in on “conversational marketing” until 2018. (The term was not new—you can find articles and even a book covering “conversational marketing” from a decade earlier—but it had no clear owner or widespread usage.)

While it took time to refine the language, Drift formed the kernel of the idea based on experiences developing their product. The company:

  • Knowingly entered a commodities market and needed a strong brand to stand out.
  • Identified a communication gap between customers and the sales and marketing teams as a primary pain point.
  • Knew that no one wanted to waste time answering hundreds of basic questions.

From a product perspective, the solution was a chatbot that had the potential to “filter out that noise.” From a brand perspective, Drift recognized the higher aspiration: “We realized what everyone wants to do is just have conversations.”

It took more than two years—and many more homepage designs—before “conversational marketing” made its way into the homepage headline:

January 2016

March 2016

July 2016

November 2016

May 2017

By September 2017, Drift had gotten closer, but a “conversation-driven” platform, Gerhardt conceded, “sounds like a bunch of BS and jargon.”

September 2017

Finally, later in 2017, Drift found its term. “Conversational marketing,” Gerhardt said. “It’s the same exact concept; we just found a cleaner, more relatable way to say it.”

The homepage shift—and preceding company growth—pushed “conversational marketing” into the mainstream:

The black bar represents the copy change on the homepage.

Success, Gerhardt noted, came when customers, not the company, used the term to talk about Drift’s product or, eventually, in unrelated webinars or articles:

Once we gave it a name, it clicked even more. It became easier for those businesses to tell other people, “Oh yeah, we use Drift. It’s conversational marketing.” We enabled them to tell more people because it’s easier.

Unlike HubSpot, which catalyzed brand-term growth with a book, the content collateral for Drift came after the homepage change.

Drift grew its positioning into a philosophy. For Zuora, the scope of impact was greater and quantifiable.

Zuora and the “subscription economy”

Lessons:

  1. Your deepest experience is your best source for trendspotting.
  2. You can’t fake economic trends; if you’re right, you’re right.
  3. The prediction of an economic shift—not just marketing tactics—has greater reach.   

Both HubSpot and Drift developed brand-owned terms that were central to their product positioning. Zuora, in contrast, predicted an economic trend that, for the company to thrive, needed to come true.

The foundation for Zuora’s ownership of “the subscription economy” is a single blog post, published by founder Tien Tzuo in May 2008, though it doesn’t use the term. As Tzuo wrote in that post, his experience at Salesforce was formative:

We didn’t realize that we would actually create a whole new business model for our industry as well: subscription services—the idea that you shouldn’t buy software, you should subscribe to it as a service.

Recently, I’ve started to ponder an interesting question: what if we weren’t alone in discovering the power of subscription services? What if the shift to subscriptions was not a trend limited to software, but one that is going on in many industries?

Tzuo’s argument for the coming economic change was pragmatic; he cited the potential economic benefits of subscription models and the impact of disrupters like Netflix.

Still, the Zuora tagline remained “Powering the Business Cloud” until July 2010, when Zuora began its campaign to popularize “the subscription economy.”

That tagline change was part of a broader effort: Weeks earlier, Tzuo published an article on VentureBeat promoting the term and announced it—to crickets—on Twitter:

The subscription economy is here http://bit.ly/daV4qd

— Tien Tzuo (@tientzuo) June 13, 2010

The timing was not inconsequential. Also in June, Zuora launched a new product, Z-Commerce for the Cloud, at GigaOM’s Structure event.

The same sites and commentators writing about the product launch also discussed Tzuo’s pitch for “the subscription economy.” By that point, the company had already raised tens of millions of dollars in venture capital, and the product and company were a platform to raise awareness about the term.

The term didn’t take off until 2015, when commentators, perhaps, looked for a way to describe the success of subscription-based products like Spotify, Amazon Prime, and, by then, Zuora.

The adoption of the “subscription economy” hinged on economic realities. Tzuo was right, which made the term not just a clever branding tactic but a real economic trend, one that warranted discussion in academic circles and research and reporting by industry stalwarts.

Compare that trajectory to others:

  • Inbound marketing has, effectively, become synonymous with marketing.
  • Conversational marketing may be remembered most for how it fueled Drift’s growth.

There’s another trajectory, too: The rapid rise of a brand-owned term untethered to a product.

Sean Ellis and “growth hacking”

Lessons:

  1. Terms that precede products may be more difficult to control.
  2. Without a product for your term, build a community.
  3. Choose wisely—the potential for misinterpretation will be realized.

Not every brand-owned term started with that intention, nor has every term-coiner retained full ownership. In 2010, Sean Ellis wrote a blog post that offered novel advice to startups:

So rather than hiring a VP Marketing with all of the previously mentioned prerequisites, I recommend hiring or appointing a growth hacker.

Ellis’ contrarian view cast aside the prerequisites—”an ability to establish a strategic marketing plan to achieve corporate objectives, build and manage the marketing team, manage outside vendors, etc.”—in favor of

a person whose true north is growth. Everything they do is scrutinized by its potential impact on scalable growth.

Ellis identified the perspective while, as a consultant, he struggled to find hires to fill his shoes. Without an associated product or company to keep the term in front of an audience, however, Ellis’ idea lay dormant.

Two years later, after a brunch discussion with Ellis, Andrew Chen wrote a blog post that moved “growth hacking” into the mainstream:

The black bar represents the publication of Chen’s post, two years after Ellis’ original article.

The rapid rise was not obvious even with the success of Chen’s post, which earned a couple dozen retweets and a few hundred social shares:

New blog post: Growth Hacker is the new VP Marketing http://t.co/vF0PCKWg

— Andrew Chen (@andrewchen) April 27, 2012

I'm loving the @andrewchen take on growth hackers – another great post! http://t.co/JwBwMibs

— Sean Ellis (@SeanEllis) April 27, 2012

Unlike others, Ellis didn’t have a product tied to the term before it took off, though he launched the community content-sharing site GrowthHackers months later, eventually adding a SaaS product and a GrowthHackers conference. A book, coauthored with Morgan Brown, followed in 2017.

Controlling the meaning of the term has proven challenging. Many have interpreted “hacks” as a laundry list of shortcuts rather than a commitment to growth through experimentation and innovation, Ellis’ intended meaning.

Managing an already-popular term is a struggle some companies wish they had.

Copper and Dialpad: Works in progress or lost causes?

Other attempts to create brand-owned terms are fledgling:

  • Copper has staked a claim to “the relationship era,” though the term predates their use of it. It is popular but has only tenuous connections to the brand.
  • Dialpad is still working to make the term “anywhere worker” widely known. Search volume is effectively zero.

Copper, like Drift, has taken on an existing term; Dialpad, like Zuora, is betting on a large-scale market shift.

As Daly writes, these movement-first strategies are best executed in tandem with traditional strategies, which provide an undercurrent of marketing that focuses on near-term results while patiently awaiting—perhaps indefinitely—the liftoff of a brand-defining term.   

Starting points to create a brand-owned term

If you don’t have time and you don’t have buy-in, don’t bother: None of the examples outlined above—which count as some of the most successful in recent history—went from zero to common in less than two years. Most took several years more, with an interregnum devoid of encouraging signs.

Three strategies maximize the chances of success:

1. Use consumer research as the foundation

In Gerhardt’s interview with Leadfeeder, he emphasizes the value of user research as part of the discovery process—the right word or phrase likely originates with consumers, not the company.

Relying on user research to identify potential words or phrases can also validate the choice. “That’s when it really took off,” according to Gerhardt. “Not when we said it, but when we heard someone else say it back to us for the first time.”

Qualitative user research comes in many forms—interviews, surveys, live chat records, website comments, reviews. You may already have the data you need to frame a potential brand-owned term in words your consumers use.

As Drift eventually realized, the..
