Growth and Customer Acquisition Guides by Brian Balfour
I'm Brian Balfour, Founder/CEO of Reforge, previously VP Growth @ HubSpot. I've started multiple VC backed companies, and grown user bases to millions of daily active users. I write detailed essays on growth and user acquisition that have been featured in Forbes, Hacker Monthly, and OnStartups to help you build a growth machine.
Superhuman was founded in 2015 (3ish years ago at the time of writing this). From their landing page, they are building “The Fastest Email Experience Ever Made.” To this day, you still cannot sign up and instantly gain access to their product. Yet, they have received more press, word of mouth, and funding than 95%+ of other products. Why? Because they've done the exact opposite of what most do for product and feature launches. Let me explain...
Right now someone, or an entire team, inside your company is probably pouring a ton of energy and thought into a new product or feature launch. That plan might include things like:
Releasing to 100% of the user base and making it available to the public
Sending an email/notification to the entire marketing or user database
Launching on and getting to the top of Product Hunt, Hacker News, etc
Lining up TechCrunch and other press pieces
Writing a “launch” blog post on Medium announcing the product/feature
This effort is done with the best intentions. The hope is to create the spark that propels the product or feature to escape velocity. But what actually happens is the opposite. Most launches create an incredible amount of headwind to growth and kill your chances for long-term sustainable growth.
In August 2017, a social quiz app aimed at teens and high schoolers named TBH (To Be Honest) skyrocketed to #1 in the app store in just a couple months. Facebook quickly acquired them for $100M in October 2017. Nine months after being acquired, Buzzfeed reported on an internal memo from the TBH founders about how to execute product launches. The key quote is:
“It is critical to design a process that allows you to launch vastly different product experiences within specific communities so your product can reach critical mass.”
There is so much wisdom in this quote, that it spurred the idea for my presentation at the ProfitWell Recur conference. In this post I will:
Break down the four reasons why most product/feature launches hurt growth rather than help it.
Describe a step by step process on how to launch.
Go through a couple of recent product examples that are using this process.
Why Most Product/Feature Launches Hurt Growth
Reason one: You kill your most valuable and important channel
When evaluating the growth of companies I work on and invest in, I ask - if we turned off all acquisition spend/effort today, would we still grow? We would likely see a dip in the short term, but it should eventually level out and then start to climb back up. That growth might be extremely slow, but if we see it, it means that we have flat retention curves + solid word of mouth. Those two things provide the solid foundation to layer on growth efforts.
There are three common thoughts about Word of Mouth that are incorrect in my opinion:
Word of Mouth just happens.
You don't have control over Word of Mouth or are unable to influence it.
To create Word of Mouth, you just need to build a great product.
The key in this loop is step 2 - the product greatly exceeds expectations. In other words, there is a large positive delta between your product experience, and the alternative.
If that delta is incremental, the user will likely not be compelled to tell others about the experience. One part of creating a non-incremental delta is building a great product. But that is only half the equation. The other half is who you expose the product to. Expose a great product to an audience it isn't built for, and you end up with an incremental or negative delta. This is exactly what most product/feature launches do.
The issue is that most product/feature launches don't have any controls around who they are exposing the product to, even though that version of the product/feature was hopefully built with a very specific target audience in mind. When this happens, you end up with a lot of users/customers that it wasn't built for, who either have an incremental or negative delta experience.
Rationally, we think that users/customers will know that the product/feature wasn't built for them. But in reality, they don't, and just form a negative opinion. In the end, you end up with a greater population of people who have neutral/negative things to say than positive, which kills your Word of Mouth loop.
Reason two: You slow down validating your product/feature hypothesis
The goal of any product team is to validate their product or feature hypothesis as quickly as possible. There are two ways to get validation around your hypothesis, qualitative and quantitative.
For qualitative feedback, we might be looking at NPS or other information. Since most product launches are untargeted, we end up with a bunch of noisy responses that end up being hard to sift through.
On the quantitative side, we are looking for flat retention curves for the product or feature. But once again, the typical un-targeted product launch bites us in the ass. With a much wider audience than what we have built for, our retention curves are likely to show an unhealthy trend.
The point is, if we put random garbage in, we are going to get garbage out. To validate our hypotheses we are going to have to dig through that garbage. That will make our insights slower and ultimately less accurate and effective.
Reason three: You deplete your ability to get your audience's future attention
In the Reforge Growth Series, we spend a week thinking through the qualitative side of growth - understanding User Psychology. One of the many frameworks we dive deep on is Darius Contractor's (Head of Growth at FB Messenger) “Psych” Framework. A core part of that framework is thinking about your users' attention and motivation like a fuel tank, from empty to full. There are things that drain the tank, and things that add fuel to it.
This hypothetical fuel tank is more fragile than ever. It is much easier to drain the fuel tank than to fill it. With everyone being overloaded with email, push notifications, etc, we are much quicker to start ignoring. That is exactly what most product or feature launches do. They notify the entire user base to try and jump-start adoption, but what you are really doing is napalming your audience's attention. This is going to make it incredibly difficult to get your audience's future attention when you need it.
Reason Four: You fail to set yourself up for continued success
Unfortunately, we tend to think more about the launch than we do the machine that will keep growth going after the launch. Every product and feature needs a kick start (the launch), but most of the energy should be spent on building the engine that will keep on running. The launch is a means to an end, not the end itself.
How to Launch A Product or Feature To Enable Growth
So what to do instead? Let's return to that quote from the TBH founders:
“It is critical to design a process that allows you to launch vastly different product experiences within specific communities so your product can reach critical mass.”
There are a few components in this quote:
Repeatable Process + Vastly Different Product Experience + Specific Community = The Chance To Reach Critical Mass. This looks like a five step loop.
Step one: Scope
The first step is to do the exact opposite of what most product launches do, scope your target audience way down to who you have initially built for. Specifically spell out a hypothesis of who that is. Most product launches start with “What are the places we can get attention?” The better way is to start with, who specifically are we trying to reach with this v1? Then, you can think about where those people “live.”
Step Two: Figure Out Access
Once we have our initial definition, we can then come up with a bunch of ideas on how to access that audience. This will differ based on what the initial hypothesis is, if it is a product vs feature launch, and if we are working with an existing user/customer base or launching something brand new. Email, Paid, Press, Medium, Product Hunt, Hacker News, Referrals, etc. This step doesn't matter as much as step one and step three.
Step Three: Filter
You'll never be able to target your audience hypothesis perfectly with any marketing mechanism. So we need to think about how to take initial interest and filter it down to our audience hypothesis. There are a number of ways to do this, but the main ones would be:
Existing Usage Data - If you are working with an existing audience, you should have a good amount of data on them. Not just who they are, but what they have and have not done in your product. This is all valuable data to filter for your hypothesis.
User Submitted - Data submitted by the user. Key is to ask the right questions or collect the right data that will let you filter effectively. (See example in the section below).
Passive Data - There are a lot of tools like Clearbit which help us append data to understand more about who they are.
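The filtering step above can be sketched in code. This is a minimal, hypothetical example - the field names, thresholds, and audience hypothesis are all invented for illustration, not taken from any real launch:

```python
# Hypothetical Step Three filter: each waitlist entry combines
# user-submitted survey answers and passively appended data, and we
# only let through entries matching the v1 audience hypothesis.

def matches_hypothesis(entry):
    """Return True if a waitlist entry fits the (invented) v1 audience hypothesis."""
    return (
        entry.get("role") in {"founder", "executive"}   # user-submitted survey data
        and entry.get("emails_per_day", 0) >= 50        # user-submitted survey data
        and entry.get("company_size", 0) <= 200         # passive/appended data
    )

waitlist = [
    {"email": "a@example.com", "role": "founder", "emails_per_day": 120, "company_size": 15},
    {"email": "b@example.com", "role": "student", "emails_per_day": 5, "company_size": 0},
]

invited = [e["email"] for e in waitlist if matches_hypothesis(e)]
# Only the entry that fits the hypothesis gets invited.
```

The point isn't the specific rules - it's that every person let through the door was deliberately matched against the hypothesis, so the success signals in Step Four stay clean.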
Step Four: Search for your success signal
As you let people through the filter and use the product, you look for success signals that validate your product or feature hypothesis. I think about them in three levels, each level requiring more volume, data, and time to get:
Qualitative - NPS, Very Disappointed Survey, etc.
Feature Market Fit or Product Market Fit - Healthy retention curves on a product or feature level.
Feature Product Fit - Casey Winters has written about Feature Product Fit. From Casey - “Feature/Product Fit requires the feature to improve retention, engagement and/or monetization for the core product. If it doesn't this means it is cannibalizing another part of the product.”
In a lot of cases, we don't find the success signals on our first try. That's fine. Assuming we have done our filtering, it will be easier to find out why our hypothesis is wrong, which will help inform where we should navigate next.
Step Five: Leverage
Once you find some initial success signals around your hypothesis, it is time to leverage it into the next layer of audience you want to target. Think about it as a layer of concentric circles, starting at the center and expanding from there. At some point, you will have built up enough success signals, successful users with strong word of mouth, and other elements that you can remove all filters and swing the doors open.
Back to where I started this post, Superhuman. Just to be clear, I have no affiliation with Superhuman or know anyone on the team well.
A while back (I can't remember exactly when) I tried signing up for Superhuman. Here are the hurdles I had to jump through:
Join The Waitlist - I joined the waitlist by entering my email.
Initial Survey - I then completed a survey giving info like my company, size of company, role, etc.
Long Survey - I then at some point received another survey that was about 15 to 20 questions long asking me about what email client I used, how often I email, what add ons are vital for me, etc.
Follow Up Email Convo - I then received an email from someone on their team, asking me follow-up questions.
Manual Onboarding - I then had to set up a time to go through manual onboarding for the product.
Most of you are probably thinking, “Holy Sh*t, this is the worst way to launch.” Before you draw that conclusion, you might want to look at the amount of word of mouth (like this, this, and this) and press (like this, this, and this) they have received, because it is more than 95%+ of the product or feature launches I have seen. Not to mention the substantial amount of capital they have raised over multiple rounds.
Superhuman is following the exact process above. They clearly have a specific hypothesis of who they are targeting with the initial versions of the product. They have used a bunch of tactics to create a waitlist. They have deployed a number of filters to make sure their initial users are close to their audience hypothesis. They are looking for qualitative success signals via manual onboarding (and, I'm assuming, quantitative ones internally as well). They are then leveraging that into the next layer of audience they build for and unlock, repeating the process again.
Note: Somewhere between creating this presentation, and publishing the blog version of this, the founder of Superhuman wrote a post on this process titled How Superhuman Built An Engine To Find Product/Market Fit. It is worth the read, and while targeted for startups launching initial product, the principles apply to any product or feature launch.
I write infrequently, but when I do I try to make it good. Subscribe to my blog here so you don’t miss a post. Or join Reforge with me and a community of other leaders.
Blindly buying into the concept of the one metric that matters (OMTM) is a fatal oversimplification.
In a recent essay, Casey Winters, formerly Growth at Pinterest, says:
“The search for one key metric for a complex ecosystem like Pinterest over-simplifies how the ecosystem works and prevents anyone from focusing on understanding the different elements of that ecosystem. You want the opposite to be true. You want everyone focused on understanding how different elements work together in this ecosystem. The one key metric can make you think that is not important.”
In this post, we’ll expand on Casey’s points, walking through why focusing only on your north star metric is a dangerous way to measure the growth of your business, and how teams should think about setting their metrics instead.
4 Reasons OMTM is Misleading
Even the name “One Metric That Matters” is problematic. It sends the message that you only need to focus on one metric to build growth into your product - this misleads many teams.
There are four key reasons that explain why buying into the one key metric philosophy can be deadly. Let’s walk through each reason in detail.
North star metrics are output metrics
When choosing which metrics to focus on, you must differentiate between output and input metrics.
Output metrics represent results and input metrics represent actions.
Output metrics help you set long term goals for the growth of your business - $6 million in revenue, 100k weekly active users, $10 million in MRR are all great examples.
Input metrics represent the actions that influence the output metric - 10,000 pageviews, 1,200 registrations, 700 upgrades from free to paid, for example.
You can’t focus exclusively on output metrics because they’re too big, too broad, and not actionable - they are a scoreboard. To win the game you need to focus on the individual plays that drive the score. Monitor output metrics to know how you’re doing, but build experiments around the input metrics you can directly influence.
Example: Spotify’s output metric
Let’s use Spotify as a hypothetical example. Users get the most value out of the app when they listen to songs, so a meaningful output metric for Spotify could be time spent listening to music.
If you were at Spotify and trying to come up with ideas to increase time spent, you’d quickly realize that trying to brainstorm against time spent isn’t productive. It’s too all-encompassing. This prevents it from being actionable. Time spent is the result of a set of actions. You need to determine what those actions are by breaking the metric down into layers of input metrics.
To do this you would ask yourself, “What actions could we take to lead our users to spend more time consuming music?”
Two potential answers would be:
You could bring the user back to the app more often and/or
Get them to spend more time listening during each session
You could break those two options down even further. Identify the different actions you could take to get people to come back more frequently or to spend more time when they do come back. This level of inputs is closer to the product, and therefore more actionable. The goal is to drill down to define more granular inputs and build experiments to move them.
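The decomposition above can be made concrete with a toy model. This is a sketch with invented numbers, not Spotify's actual metrics: the output metric (weekly listening time) is simply the product of two input metrics, and moving either input moves the output:

```python
# Toy decomposition of the Spotify example: the output metric
# (weekly listening minutes) is the product of two input metrics.
# All numbers are invented for illustration.

def weekly_listening_minutes(sessions_per_week, minutes_per_session):
    """Output metric expressed in terms of its two input metrics."""
    return sessions_per_week * minutes_per_session

baseline = weekly_listening_minutes(sessions_per_week=4, minutes_per_session=30)

# An experiment that wins back one extra session per week moves an
# input metric, and the output metric follows:
after_experiment = weekly_listening_minutes(sessions_per_week=5, minutes_per_session=30)

lift = (after_experiment - baseline) / baseline
```

You can't run an experiment against "listening time" directly, but you can run one against "sessions per week" (notifications, playlists, emails) or "minutes per session" (recommendations, autoplay) - and the decomposition tells you how each rolls up.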
Output metrics are a lagging indicator
Input metrics are leading indicators and output metrics are lagging indicators. By definition, it can take time for the output to reflect positive or negative changes in the inputs.
Output metrics can hide growth problems percolating under the surface. By the time the problem surfaces as poor results, and you recognize you have a problem, the damage is done.
“Revenue retention is the output of engaged users. The usage is the input, and looking only at revenue retention has two big problems:
1. Revenue can hide what is going on under the hood with product usage, and shield you from signals about your product’s health over the longer term. You may earn a month or a year’s worth of revenue from a paying subscriber, but if that person isn't using the product, they will churn when that month or year is up.
2. If you are trying to improve retention but only tracking revenue retention, the game is over before you’ve even had the chance to play. Once a paying subscriber has churned, winning them back is almost impossible. If you want to improve retention you need to look at usage retention first.”
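The quote's first point can be illustrated with a toy cohort. This is an invented example, assuming annual upfront billing and a steady monthly decay in active users: revenue retention reads 100% all year while usage retention quietly collapses underneath it:

```python
# Toy illustration: an annual plan keeps revenue retention flat for
# 12 months even as usage retention decays, so the churn only shows
# up in revenue at renewal. The 10%/month decay rate is invented.

months = list(range(1, 13))
usage_retention = [round(0.9 ** m, 2) for m in months]  # share of cohort still active
revenue_retention = [1.0] * 12                          # annual plan: paid upfront

# By month 12, usage says only ~28% of the cohort is still active,
# while revenue retention still reads 100% - renewal will reflect
# the usage number, not the revenue number.
```

This is exactly the lagging-indicator problem: the input (usage) has been flashing red for months before the output (revenue) shows any damage.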
Let’s walk through another example to unpack this point.
An Analogy: The Ecosystem of the Serengeti
Pretend you’re an ecologist studying the health of the grassland ecosystem of the Serengeti. Since you know that lions are a keystone species, you focus on tracking their population. Let’s say, for the sake of this post, that the wildebeest is the lion’s predominant source of food.
But then, all of a sudden, a blight comes through and decimates the wildebeest population. If you weren’t paying attention to the wildebeests, you may not realize that the lions are in jeopardy - until it’s too late. They’ve started dying from starvation and you’ve already lost a lot of the population before you even realized there was a problem. (this is turning into a morbid analogy…but that’s the point!)
The health of any ecosystem can be understood by tracking the populations of different key species within the ecosystem and studying the relationships between them. The same goes for your metrics. A company is an ecosystem and your metrics are the various species that make up that ecosystem. Just like you missed the signals that the lions were in danger, you will miss the signals that your north star metric is in trouble, if you aren’t tracking its inputs.
A single north star metric only captures one dimension of your business
Going back to the scoreboard analogy, optimizing against a single north star metric is like looking exclusively at the score to get insight into how to win the game. Let's say you're a professional baseball coach - you'd also want to know the inning, the number of balls, strikes, and outs, and the hits, errors, and pitch counts. If you only watch the score, you won't know if you're playing to win. You need to see how your team is performing across multiple dimensions of the game.
Similarly, because a single north star metric gives you insight into only one part of your business, you need multiple metrics to get the full picture.
There are multiple dimensions of a business that determine its health, and each should be measured by your key metrics. At a minimum, there are three key buckets for every product:
Breadth of retention
Depth of engagement
Monetization
Shaun Clowes, VP of Product at Metromile and former Head of Growth at Atlassian, describes how his teams address the full range of these dimensions with their metrics:
“There's always a constellation of input metrics that we swap in and out under the umbrella of our output metrics, based on what we're learning at the time. The output metrics tend not to change much since they're valuable business outcomes, while the inputs change reasonably regularly. Though the output metrics don't generally change, from quarter to quarter, we may focus on different ones depending on what’s going on with the business.”
Constellation of Metrics
Examples of output metrics: Slack and HubSpot CRM
A key output metric for Slack is DAU. Though this reflects retention, it doesn’t reflect engagement, or monetization. Their DAU number could be increasing, but what if none of those users convert to the paid tier? Or they’re barely engaging with the product?
For a team-focused product like HubSpot CRM, weekly active teams (WAT) could be a key output metric. Like DAU, it tells us how we’re doing with retention, but it doesn’t say anything about engagement or monetization. How active are those teams? Are they generating revenue?
A different kind of example: Pinterest’s output metric
In an effort to address multiple dimensions of the business, while still adhering to the philosophy of the OMTM, teams sometimes try to combine metrics. But this doesn’t work.
In his post, Casey walks us through the two issues that came from trying to consolidate metrics at Pinterest. He shares that at one point, in search of one key metric, the growth team combined two user actions, repinning (saving) and clicking content, into one north star metric. They called this a weekly active repinner or clicker, or WARC for short.
“[The WARC] ignores the supply side of the network entirely. No team wants to spend time on increasing unique content or surfacing new content more often when there is tried and true content that we know drives clicks and repins. This will cause content recycling and stale content for a service that wants to provide new ideas.”
Another issue revealed with the WARC metric was:
“The combination of two actions: a repin and a click… creates what our head of product calls false rigor. You can do an experiment that increases WARCs that might actually trade off repins for clicks or vice versa and not even realize it because the combined metric increased.”
Casey’s point here leads us to the fourth reason you should avoid the OMTM philosophy - the impact of “tradeoff metrics”.
North star metrics don’t account for the tradeoffs between metrics
Sometimes tradeoffs between metrics happen and you don’t realize it until it’s too late. You can’t just watch one of these tradeoff metrics; you need to watch both because as you improve one, the other might go down.
No metric exists in isolation. To truly understand how one metric impacts growth, you need to see its effects on other metrics downstream.
Example: Tradeoff metrics at LinkedIn
Let’s say you’re on the growth team at LinkedIn and one of your big goals is to improve ad monetization of the news feed. You might choose ad revenue per user to be your north star metric. To increase that number you could insert more ad spots into the news feed. But there’s an implicit problem with this - it would likely come at the cost of long term retention and/or engagement. If the team were to optimize exclusively against ad dollars, unchecked by retention and engagement metrics, this would kill the news feed, and possibly the rest of the product.
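One common way teams guard against this kind of tradeoff is to pair the target metric with a guardrail metric and only call an experiment a win when the guardrail stays within tolerance. The sketch below is hypothetical - the metric names, lifts, and thresholds are all invented, not LinkedIn's actual process:

```python
# Hypothetical guardrail check for the tradeoff described above: an
# ad-load experiment only "wins" if the target metric improves
# without the paired guardrail metric degrading past a threshold.

def experiment_wins(target_lift, guardrail_lift, max_guardrail_drop=-0.02):
    """target_lift: relative change in ad revenue per user.
    guardrail_lift: relative change in retention/engagement.
    max_guardrail_drop: worst acceptable guardrail change (-2% here)."""
    return target_lift > 0 and guardrail_lift >= max_guardrail_drop

# More ad slots: revenue up 8%, but retention down 5% - blocked.
aggressive = experiment_wins(target_lift=0.08, guardrail_lift=-0.05)

# Gentler change: revenue up 3%, retention down 1% - within tolerance.
gentle = experiment_wins(target_lift=0.03, guardrail_lift=-0.01)
```

The mechanism is simple, but it encodes the essay's point: you are never shipping against one number, you are shipping against a pair (or more) of numbers that pull against each other.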
The Right Way to Set Growth Metrics
For all the reasons we walked through above, focusing on one north star metric to grow your business is a dangerous proposition.
Here’s what to do instead:
1. Select a constellation of metrics
Select a constellation of a few key output metrics that capture the full dimensions of the business. Make sure they account for retention, engagement, and monetization, and then monitor the full scoreboard.
2. Break your output metrics into their input metrics
Once you’ve identified your key output metrics, build out the constellation by breaking those outputs down into their input metrics. Drill down until you’ve got a set of actionable input metrics that you can impact directly, and then build your experiments to move those.
Although input metrics are actionable, they don’t always drive improvements in output metrics. You need to be willing to discard them quickly if after some experimentation you find moving them doesn’t improve the output metric. The inputs that successfully improve the output metric are the leading indicators that you need to identify as quickly as possible.
3. Understand and monitor your tradeoff metrics
The next step is to look at the full constellation of metrics, figure out the relationships between them, and identify tradeoff metrics. Because many metrics are interdependent, for every metric you try to improve, determine where in your business you are likely to see a counter-reaction. Then, work to find a healthy balance between your opposing metrics.
Once you’ve worked through this 3-step process and built out your constellation of metrics, get ready to adapt. As the business grows and changes, you may need to focus on different areas, which may require new input and output metrics.
At the end of the day, this constellation of metrics reflects the health of your business, whether you track it or not. It’s good to simplify and focus as much as you can, but remember you’re never going to find a silver bullet metric, so be sure to zoom in as you experiment, and zoom out as you take a step back to look at your whole ecosystem of metrics.
If you’d like to learn more about how to choose the right metrics to track for your business and build a systematic approach to growth, consider applying for our Growth Series or the Retention + Engagement Series.
More About the Authors
Brian Balfour is the Founder & CEO of Reforge and was previously the VP Growth @ HubSpot. Prior, he was an EIR @ Trinity Ventures and founder of Boundless Learning (acq by Valore) and Viximo (acq by Tapjoy). He advises companies including Blue Bottle Coffee, Gametime, and Help Scout on growth.
Shaun Clowes is the VP of Product at Metromile and was previously the Head of Growth at Atlassian. His approach to growth focuses on activation and retention with an emphasis on the thinking, processes and discipline necessary to grow through product features and engagement.
Casey Winters is an EIR at Greylock Partners. Previously, he was a Growth Lead at Pinterest and GrubHub. He advises companies including Tinder, Eventbrite, Reddit, and Pocket on scaling and growth, specializing in retention and engagement.