Let me give some background about Ola if you’re not from India. Ola is an online cab aggregator which serves 2 million rides every day (yup, roughly 23 rides every second) with 8 lakh (800,000) vehicles across 110 cities in 22 states of India. You can read more about Ola here.
The goals of this project are to
Improve the User Experience
Increase the Customer Retention
Attract new Customers
I’ve limited the scope of the project to focus on one segment of users: riders who are not natives of the state in which they are commuting with Ola. This is a kind of ethnographic study. The reason behind this segmentation is that, in a country with 29 states and 7 union territories, most states have their own major spoken language. And we already know that Ola is operational in 22 states. Add to that the fact that many people reside away from their native state for work in industrial and IT hubs like Bangalore, Hyderabad, Chennai, Cochin, Pune and Gurgaon.
THE PROCESS

RESEARCH — Problem Discovery

1. Interviews:
The main motive of the interviews was to avoid getting carried away by my own assumptions, and to build the product based on customer needs and user-centered design.
Why did I go with interviews rather than surveys and feedback forms?
I focused on gathering qualitative data rather than quantitative data. It helped in uncovering untold problems faced by users, which is not possible with close-ended surveys. Surveys are also not detail-oriented, since they must be simple and short without eating up too much of the user’s time. Interviews, on the other hand, helped in understanding the users better, with detailed insights.
The interviews were conversational rather than formal. Their outcome was deep: users raised concerns about emotional factors, pricing, competitor preferences, and more.
2. Naturalistic Observations:
The second set of research activities involved observing the app being used by users in their natural environment. This kind of activity was a bit difficult for this project. But since Ola has an option called Ola Share (social ride sharing via a carpooling service), I had the opportunity to observe the way my co-passengers used the app. The observations were as follows:
Users find it difficult to access their booked rides inside the app.
Non-natives ask a friend or neighbour who knows the local language to call and speak with the driver about ETAs and locations after booking. If no one is nearby, they have to call the driver themselves. In that case, the first question they asked was, “Do you speak so-and-so language?”. If the driver knows the user’s language, well and good. Otherwise, it becomes hard. Sometimes it even ends in emotional stress for both riders and drivers, such as verbal fights.
Sometimes the driver asks an on-boarded passenger to speak with another co-passenger for communication purposes (in Ola Share).
3. Contextual Inquiries:
The difference here is that I interacted with the users after observing them, as described under Naturalistic Observations. The results seconded those observations.
FINDINGS — Problem Definition
I created some user personas to jot down the demographics, motives, behaviours and pain points of the users I interviewed. From the personas and the other research methods, I discovered the following problems:
Users and drivers face communication problems when interacting with each other, due to language differences.
The UI is cluttered compared to Uber’s simple UI. The app is usable but not effective.
Pricing doesn’t make sense. For example, at times the price of Ola Share (the pool category) is around 90% of the price of Ola Micro (the hatchback category).
IDEATION — Brainstorming Ideas
Based on the research discovery and problem findings, I brainstormed some ideas to work on. Some of them are:
App Redesign - On examining Uber’s app, its booking process is clean and simple, focusing on one task at a time. That’s not the case with Ola. There is scope for a redesign of the entire app with respect to user experience.
Pricing Strategy - I didn’t have enough data to work on the pricing strategy, and Ola’s pricing is probably driven by market fit. So this idea is a big no.
Language Preferences - Another idea is to introduce language preferences within the app (for both the rider-facing and the driver-facing app). In the current cab-booking user journey, there is a gap in the product: it fulfils the user’s needs up to the booking itself, then jumps directly to tracking the driver’s vehicle, the pick-up point and the OTP. It misses a major part of the booking process: the verbal communication between the driver and the rider, i.e. the phone calls they make to each other.
With some ideas ready, we had to prioritize which ones to implement. I followed the Value vs Complexity model to arrive at priorities.
App Redesign - Redesigning the entire app from scratch would be a tedious process. Even though the business value is high, we need to think twice before going ahead with this idea. The major impact is that the portion of users who were satisfied with the old app experience would need to learn the new flows completely.
Language Preferences - This is a new feature added to the existing app. The complexity and effort are low compared with the other idea.
Prioritization of Ideas
If two or more ideas had clashed within the same quadrant, I would have used other methodologies, such as weighted scoring or opportunity scoring, to prioritize them. But in this case, we were clear on which way to go.
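As a rough illustration of how such a weighted-scoring tie-breaker could work (the criteria, weights and scores below are invented for this example, not Ola’s actual figures):

```python
# Hypothetical weighted scoring: each idea is scored 1-10 per criterion,
# and effort counts against an idea via its negative weight.
CRITERIA_WEIGHTS = {"business_value": 0.4, "user_impact": 0.35, "effort": -0.25}

ideas = {
    "App Redesign":         {"business_value": 9, "user_impact": 7, "effort": 9},
    "Language Preferences": {"business_value": 7, "user_impact": 8, "effort": 3},
}

def weighted_score(scores):
    """Sum of criterion score times criterion weight."""
    return sum(CRITERIA_WEIGHTS[c] * v for c, v in scores.items())

ranked = sorted(ideas, key=lambda name: weighted_score(ideas[name]), reverse=True)
print(ranked)  # the low-effort idea ranks first under these weights
```

With these made-up weights, the lower-effort Language Preferences idea comes out on top, consistent with the Value vs Complexity outcome.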
Even though we had prioritized the ideas, we still needed to figure out how the idea fits the market and what difference it brings to our product compared with competitors (Uber, in this case). So we looked at the quantitative data that was available and did a competitive analysis.
We needed data to support the decision to move ahead with building the Language Preferences feature. Data insights, like the number of calls made for the purpose of cab booking, help here. Fortunately, we were able to get the required data to support the idea.
Here are some of the insights we got from TrueCaller about calls made for cab-booking purposes.
2.5 billion calls were made for cab-booking purposes in 100 days.
Ola contributed 103 million calls, whereas Uber accounted for 39 million.
Stats about the Ola user base with respect to demographics such as nativity, language and current state of residence, to arrive at a priority order for the languages to include in the feature.
Stats about the Ola driver partners with respect to the same factors mentioned above.
The number of customer tickets Ola received with respect to language issues.
The number of customers using Ola Select (a premium service), to decide whether to include the Language Preferences feature only for premium users (which helps attract new customers) or for all users (which helps retain existing customers and attract international visitors, who predominantly use Uber).
The major competition for Ola is the international player Uber. Even though Uber serves fewer rides in India than Ola, Ola can attract Uber users with this new feature. Most international visitors to India use Uber because of its worldwide popularity. Letting them choose English as their preferred language will add value in attracting international customers as well. So this feature is surely a market differentiator. Overall, it is a killer value-add for users.
Strategic Fit with Company’s Vision:
Adding a new feature to the product will add no value if it is not aligned with the company’s vision. The feature should travel in the same direction as the company’s vision.
The proposed feature is in line with the company’s vision of providing a hassle-free, reliable and technology-efficient car rental service.
MVP with Rapid Prototyping
After all the analysis, we have finally reached the design phase of our feature. The design has to be included in both the customer-facing and driver-facing apps. Here are the screenshots of the newly designed mock-ups for the customer-facing app.
The mock-ups don’t portray the actual functionality of the feature. Once I was done with the mock-ups, I proceeded to integrate the newly designed screens into the existing app.
Since I didn’t have exact stats on the user base and languages spoken, I created an MVP with six languages. The more languages you select, the higher the probability of getting a driver who speaks one of them. Selecting a language doesn’t guarantee a driver with that language; it is subject to availability and locality. Also, if the ETA of the cab with the preferred language would exceed 20 minutes, preference is given to the nearest driver instead. Hence we added a disclaimer on the preference page to inform the user about this.
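The fallback rule described above could be sketched roughly like this (the 20-minute cutoff comes from the disclaimer; the driver records, field names and languages are hypothetical):

```python
# Sketch: prefer a driver who speaks one of the rider's languages,
# but fall back to the nearest driver if the best language match
# would take more than 20 minutes to arrive.
ETA_CUTOFF_MIN = 20  # assumed threshold from the disclaimer

def pick_driver(drivers, preferred_languages):
    """drivers: list of dicts with 'eta_min' and 'languages' keys."""
    matches = [d for d in drivers if set(d["languages"]) & set(preferred_languages)]
    nearest = min(drivers, key=lambda d: d["eta_min"])
    if matches:
        best_match = min(matches, key=lambda d: d["eta_min"])
        if best_match["eta_min"] <= ETA_CUTOFF_MIN:
            return best_match
    return nearest  # no timely language match: nearest driver wins

drivers = [
    {"id": "d1", "eta_min": 4,  "languages": ["Kannada"]},
    {"id": "d2", "eta_min": 12, "languages": ["Hindi", "Tamil"]},
]
print(pick_driver(drivers, ["Tamil"])["id"])  # d2: a Tamil speaker within the cutoff
```

The same function covers both disclaimer cases: no matching driver at all, and a matching driver who is simply too far away.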
Once you book the cab, the new booking details screen has an extra label beside the OTP to indicate the driver’s language. It was designed on the basis of cognitive psychology.
We’ve reached the end of the project. We identified the problems through research, validated them, and built a rapid prototype of an MVP for the proposed feature. I’ll cover the launch, market strategy and success metrics to measure in future stories.
Wrapping it up — The Learnings!
Just as a cab driver should speak the language of the customer, a product manager should speak the language of various stakeholders: customers, UX designers, developers and, above all, the business. This project has largely helped me learn about customer needs, user-centered design, rapid prototyping using Marvel, and business value.
If you’ve made it this far, thanks for reading my ideas on adding a new feature to the Ola app. I hope you enjoyed it.
Connect with me on LinkedIn. If you’ve liked the story, Please share and give a clap or two.
Note: The above project is based on my personal interest for building products and product management. It is not affiliated with Ola or any other companies in any way.
This is the one that started it all. It’s the Patient 0 of PM venn diagrams. It begets all that came after it.
And, yeah, it’s pretty great. It’s clean, easy to read, and it’s a starting point for explaining what a PM does. But it’s clearly not enough on its own. Or we wouldn’t have hundreds of articles about what product managers actually do (let alone all the Quora questions). And we also wouldn’t get the next 9 venn diagrams. So let’s be happy for its shortcomings.
2. The one in which the PM does 50% of everyone else’s job (as well as her own)
From Catherine Shyu
I hope no-one shows this to my boss.
3. The one where the PM and UX Designer can’t decide who does what
From Pendo
All the confusion over who should do what — captured in a single GIF!
4. The one where the product owner gets a way smaller circle than the product manager
From 280 Group
Sure, “owner” sounds more important than “manager”, but fortunately we have this diagram to prove otherwise.
5. The one without product management in it
From helloerik
Technology, business, design? Come on UX, we see what you did there. And we’re not buying it.
6. The one where product management has the same size circle as … the future
From Clevergirl
I’m glad that someone truly appreciates the role of product management.
At its heart, building a software product is a simple thing. Yes, the technical concepts required to build great software are incredibly complicated, and prolific, capable engineers are worth every penny they make because of it. In most cases, though, building a great product is about more than the ability to put technical skill into practice. Not every great engineer has a successful product to their name.
Product Managers are the steering wheel to the engine of engineering. Powerful engineering teams can get a product moving fast and far, but a Ferrari without steering can easily end up in a lot of places you might not want it.
For those of you who are thinking about how to get into the business of building great software, I beg you not to listen to me too much. Rather, allow me to share the words of the titans upon whose shoulders my industry stands:
This book outlines the way stories can be written and organized to optimize development for the delivery of user value. It advocates thinking of a product as a set of user actions, and then considering the development tasks necessary to attain that end. Jeff Patton is one of the core thinkers when it comes to traditional Agile development, and while many modern development shops don’t build product exactly in the way described in this book, it does inform the basis of how product management as a practice has advanced to where it is today.
Don’t let the name fool you, this book is not just for garage-bound startups. It advocates the process of using the scientific method to develop product that allows you to learn about what your customers want, starting with super low-tech prototyping and working your way up. Developing iteratively is something I’ve found to be incredibly difficult to put into practice, but incredibly valuable the more it can be done.
This short article (~20 minute read), is a great thought piece on the state of product management, and how one might avoid some of the less obvious, yet very common pitfalls that many fall into regarding the role of a product manager. It talks about the distinction between data- and customer-driven product drivers, and the role of a product manager being a supportive role, rather than any sort of leadership role.
The Google Ventures Design Sprint is a beautiful way to think about initial product design. The design sprint is a 5 day process for getting from idea to initial requirements through a lot of white boarding, post-it notes, low-fidelity wire framing, and user testing. It’s a lot of the stuff that product managers use all the time, packaged in a way so simple as to almost be romantic. It’s an elegant way to put product design into practice in a way that also promotes iterative thinking.
This one’s a legend. Regardless of what you may think of Peter Thiel, he is one of the founding minds behind Silicon Valley as it exists today. His book, Zero to One, explores the nature of innovation, and talks about the benefits of 0 to 1 innovations — building things that are completely new to this world, rather than 1 to n improvements.
The title sounds a little strange, I’ll grant you, but The Four Hour Work Week is a great book to help reframe your mindset to be more entrepreneurial. The book proposes a way to spend less time working and more time enjoying life by virtue of optimizing workflows. As a product expert, you’ll quickly realize that there’s always more to do than there is time to do it, and it’s surprisingly easy to fall into the habits that don’t result in productivity. This book has helped me answer questions I didn’t know to ask — “Is this meeting necessary?” “Can I automate this process somehow?” “Do I need to check email every day?”
Thanks for reading! I’m Jack Moore, and I love writing about product.
Disclaimer — book titles link to my Amazon referral account
It started slowly. There was no warning, no call to arms, no time to prepare. This was not your typical invasion. It was, in fact, a systematic attempt to burrow under the surface and quietly take hold, like the tendrils of a plant as it climbs up a tree, like the spider which weaves a web knowing the flies will come.
“None of us could remember who first introduced them into our ecosystem but the fingers were immediately pointed at the Product Team.”
Looking back, the signs were obvious, hidden in plain sight. The innocuous square shape, the distracting colors, and the way they could attach themselves to various surfaces with a strip of stickiness ensured they went largely unnoticed. Their design was inspired, really, a confluence of ease, cost, and versatility that you couldn’t ignore.
None of us could remember who first introduced them into our ecosystem but the fingers were immediately pointed at the Product Team. They were the ones who had fallen prey to their clutches, their office Ground Zero for the invasion. Blame soon fell by the wayside, as the intruders started to leak out from the Product Team and into Customer Success and Sales. We started dropping like flies, rushing headfirst into the warm embrace of pink, of blue, of green, of yellow. Oh, that cursed yellow.
The reasoning had been so innocent, so simple. The Product Team were receiving countless nuggets of advice, pearls of wisdom, and ideas on how to improve their offering. They were so unprepared for the onslaught of feedback they received that they had no system in place to store it, to manage it. And so, they turned to the stacks — for that’s how they transported themselves — of the colorful squares.
“Soon after the Product Team made the fateful decision to write down the feature requests, you couldn’t move without seeing the squares. Blocks of color strewn around the room, like we were part of a painting that was not yet finished.”
Their usefulness was unparalleled. The Product Team found that they could etch their ideas into the intruders, leaving a permanent reminder of each and every piece of feedback they received. The colors allowed ideas to be codified into categories. The stickiness enabled the ideas to be stuck to walls, to desks, to computer monitors.
Of course, that’s precisely what the intruders wanted. Soon after the Product Team made the fateful decision to write down the feature requests, you couldn’t move without seeing the squares. Blocks of color strewn around the room, like we were part of a painting that was not yet finished. We urged the Product Team to do something about it. We told them that it was going too far. But the Product Team were ensnared by the intruders, who had planted the false belief in their heads that Post-Its meant progress.
The symptoms began to show themselves. The Product Team became lethargic and unproductive, as if time itself had stopped for them. The product became dated, increasingly useless to our customers, and that’s when the real troubles started. Customers began to look elsewhere, flocking to our competitors whose products were continually developing and improving. They had found a way to stop the invasion dead in its tracks. And it fell to me to find out how they had succeeded.
I ventured to our biggest competitor and I pleaded for the answer. Their Product Team’s office was clean and tidy, not a Post-It note in sight.
“It was some kind of software. They showed me how their customers could submit feedback, how they could then prioritize all of the ideas, enabling the Product Team to make data-informed decisions.”
“How did you do it?” I begged. “How did you defeat the Post-It notes?”
They could see I was desperate. “Look at this,” they said, pointing at a screen.
It was some kind of software. They showed me how their customers could submit feedback, how they could then prioritize all of the ideas, enabling the Product Team to make data-informed decisions. I couldn’t believe my eyes. This was it. This was the achilles heel of the Post-It notes. This was my secret weapon.
Upon returning to the office, with news of what I had witnessed, the screens were blank and the Product Team nowhere to be seen. A breeze blew gently through the office, whisking a Post-It from its resting place and over to my feet. I picked it up.
“There’s just too many of them!!!” was the message.
Too many ideas, too many post-it notes, and too many of us believing we were doing fine. This Post-It note had hit the nail on the head. I slumped down in my chair, surrounded by the enemy, and I realized that it was too late for me, but that I could warn others. I grabbed my pen, grabbed a Post-It, and wrote one solitary word, my ‘rosebud’, the solution to the Post-It note problem. That word?
Learn more about how Receptive can help your SaaS business to manage the product backlog and close the feedback loop with your customers. Book a demo today!
“Responsive web design is about offering a seamless experience on any device, and since different web browsers render web pages in different ways, websites must be tested to ensure that they’re compatible with a variety of mobile and desktop web browsers. Even though making a website scale to the correct responsive breakpoints is primarily the responsibility of a web developer, it’s the web designer that decides exactly how a responsive website will adapt to various screen sizes in order to create an optimal user experience.”
2. Expanding UI with Illustrations (https://icons8.com/articles/ui-design-user-interface-illustrations/?ref=webdesignernews.com). An article focused on the importance of illustrations and infographics in expanding the reach and effectiveness of a user interface. This is something I’ve shared before, but this article showcases statistics around the strategy and includes contextual examples for situations such as onboarding, while also including micro-animations to further substantiate the approach. Highlight of the article:
“In the age of fast and massive information consumption, the role of visuals is growing. According to the explorations by S.Thorpe, D.Fize and C. Marlot on the speed of processing in the human visual system, it takes people on average 150 ms for a picture to be processed and 100 ms more to understand its meaning. Pictures are easier to remember and recall, their message is clear to people of different languages and regardless their ability to read. That is why infographics, icons, illustrations and other assets by graphic designers are used so widely.”
3. Web Design Evolution, Video Game Design & Storytelling (https://www.smashingmagazine.com/2018/03/future-mobile-web-design-video-game-design-storytelling/). Another great article, hailing from Smashing Magazine, focused on the importance of storytelling as a means to convey the core message of a brand, specifically in web products. It showcases principles long used in video game design to get this point across, namely by focusing on elements such as persona creation, symbol introduction and mascot creation, among other relevant techniques. Highlight:
“Storytelling isn’t just relegated to big brands that can weave bright and shiny tales about how consumers’ lives were changed with their products. Nor is it just for video game designers that have hours of gameplay to develop for their audiences. A story simply needs to convey to the end-user how their problem can be fixed by your site’s solution. Through subtle design strategies inspired by video game storytelling techniques, you can effectively share and shape your own story.”
Some weeks back I received an email informing me that my Skype credits would expire if I did not make a call or send an SMS within a certain time window. The moment of uninstalling Skype triggered nostalgia for the days when instant messaging and VoIP calls were appreciated as a marvelous invention. I vividly remembered the euphoria of watching telcos get completely upended by a technology like Skype when it came to long-distance voice calls, and the liberation of no longer being held hostage at exorbitant costs for that need.
I could not remember when I had last started up my Skype application, or even had a conversation with someone on Skype. Being in Asia, and pampered by a plethora of ultra-powerful chat apps such as LINE, WeChat and KakaoTalk, I have watched the chat use case evolve significantly from its “vanilla” days into ecosystems where people literally live inside chat apps, because they can do so much more than just chat. In some of my public presentations about chat ecosystems, I said that these apps can do almost everything “except walking your dog”, only to find myself proven wrong in a short span of time: the app itself cannot literally walk your dog, but it has enabled peer-to-peer, resource-sharing features that let people get part-time help to walk their dogs.
No matter how successful a product like Skype may have been, the value of its disruption does not remain the same over time (see the Kano model), and users are always looking for incremental value-adds. For products to stay relevant, the following pitfalls should not be missed.
Assuming users are happy all the time
Users are constantly exposed to options and loyalty to any product is difficult to build not to mention the retention. Assuming users are always happy with a product leads to complacency which is dangerous.
Optimizing too hard on certain features
Putting too much focus on a few important features creates myopia and misses on opportunities for testing and failures. The chat functionality has evolved from being standalone to an almost indispensable part of most popular mobile apps in China. E-commerce apps today don’t just allow people to search, browse and buy but play games and socialize as well. Job hunting websites go beyond listing employment opportunities but showcase people’s network profiles, offer learning choices and help people connect over professional events. These are examples of how a product can grow to become more relevant over time by its breadth and depth.
Missing out on benchmarking
There is a story about two friends sitting under a tree, with a lion approaching them from a distance. One of them wondered whether he could outrun the lion and survive. The other only cared about outrunning his friend. Pragmatism and diligence can sometimes take us a little further than perfectionism can. Setting the right benchmark helps tremendously in crafting products that stay continuously relevant.
In summary, very inward-looking product design and development does not lead to sustainability and should be avoided at all costs.
Horowitz lauds many of the qualities that are gained working in business, such as knowledge of the market and competitors and understanding of your own company’s position.
Yet, for many people working in start-ups or at VCs there is a reflexive distrust of people with business backgrounds. This distrust manifests itself, in particular, in some of the harsh (though funny) putdowns of business school graduates. From the founder of Mint:
When valuing a startup, add $500k for every engineer, and subtract $250k for every MBA.
A lot of this comes from the idea that business people aren’t good at doing stuff. They haven’t built anything the way engineers have, or mastered a specific craft the way designers have. And they don’t obviously fit with the start-up ethos — the whole “get shit done”, “move fast and break things” shtick.
If you have a business background and want to break into product management, you’ll need to overcome some of these biases.
You’re going to need to show your hunger. You’re going to need a little luck. And you’re going to need to pitch yourself in a way that makes your background relevant to the role you’re going after.
Here are a few things you can do.
1. Figure out which PM skills you already have
The good news is, you probably have many of the qualities and skills that could make you an effective product manager. You just need to market them.
Worked in sales? You’re great at talking to customers. Spent time as a management consultant? You’ve helped some company or other with their (digital or physical) product strategy.
While the skills you’ve built working in business won’t be relevant to all PM roles, they will be relevant to some of them. Every PM role is different, reflecting the needs of the company and of the product itself. Some companies want product managers with strong engineering ability, as the products themselves are technically complex. Others want PMs who can refine their product’s user experience (especially if they don’t yet have a dedicated design team). Yet others want PMs with strong business skills to bolster the revenue and profitability of their products.
My own story: I studied philosophy for my undergraduate degree and then spent four years working in financial markets.
Early role models
When I got my first PM job, in my late 20s, I had limited technology or design skills. But I was very good at analysing markets, crunching data and creating new revenue streams — and that was what my employer, Reuters, needed at that time. I didn’t need to prove I was an amazing all-round product manager, I just needed to show that my existing skill set was a great match for the role.
2. Look for jobs that match both your skills AND your interests
You can get hired to do a job that is a fit for your skills, but to thrive you’ll want to find a role that’s also a fit for your interests. You’ll want to find a role in which you’ll spend time doing something you’re passionate about.
Uri Haramati talks about this from the hiring company’s perspective. The company should be asking ‘What are they missing in their product?’ and then looking at each candidate and assessing ‘What is his or her passion?’.
You can take the reverse view as a candidate. First map out your passions (there’s a nice little tool for this, based on Uri’s article). Then look at any company with a PM job posted and figure out whether they’d be the right place for you to pursue these passions. If you love crunching data, is this something that the company would value? Or do they already have a strong analytics function? If you’re excited about driving user experience improvements, is it something that you’ll spend a lot of your time on? Or are there already other PMs in the team with UX design backgrounds?
3. Be able to articulate your growth objectives
You’ll likely get asked in your interview, “what are you looking for in a PM role?” (If not, you’ll want to steer the conversation that way anyway!)
When you get there, you’ll have a chance to distinguish yourself by going beyond the generic answers. Pretty much every candidate will come up with some combination of ‘work with nice people’, ‘contribute from day one’, ‘growth opportunities’. You’ll want to go into specifics about where you want your career to go.
This is your chance to talk about what you want to do six or twelve months down the line. It’s your chance to say “I want to learn how to run design sprints and become better at rapid prototyping and user testing”. Or “I want to become great at running A/B tests, from generating hypotheses through to running proper statistical analysis on the results”.
Giving a specific answer to this question will help you test the fit between your growth objectives and what a role might offer. It will show that you’re thoughtful about your career and have ambition beyond landing the job. And, if you do land the job, it’ll increase the chances that you’re given the projects that help fulfil your objectives.
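To make the A/B-testing ambition above concrete: the “proper statistical analysis” often boils down to something as small as a two-proportion z-test. A minimal sketch, with made-up conversion counts (not real experiment data):

```python
# Two-proportion z-test for an A/B experiment: did variant B's
# conversion rate differ significantly from variant A's?
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 5.0% vs 6.5% conversion on 4,000 users each.
z, p = two_proportion_z(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(round(z, 2), round(p, 4))
```

The hypothesis-generation part is the human work; the test itself is small enough that knowing what it does (and when it applies) is the real skill to learn on the job.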
4. Understand how tech products create and capture value
Understanding tech is not about name-checking the latest start-up to get a gushing write-up in TechCrunch. Nor is it about lapsing into industry jargon to make you sound like an insider. It’s about understanding what makes products successful. It’s about appreciating how products create value (for users) and capture value (for the company that builds the product).
It’s about being able to walk into an interview or coffee and have an intelligent conversation about a product and its business model. What specific user need does it serve? What is it competing against to serve that need (what are the alternative tech and non-tech solutions)? How will users discover the product? When they discover it, will they pay for it? How much will they pay? How often will they pay (can they be retained)?
You’ll likely spend weeks and months thinking about these things before you land that first PM role — looking at products already in the market and figuring out why they’re succeeding or failing (or, for many of the new products released by the tech giants, why they exist at all!). To get started with this, it’s worth following the blogs from the two Bens: Evans (from A16Z) and Thompson (who publishes Stratechery).
You’ll also want to understand how PMs think about creating and capturing value with their products. Brandon Chu’s posts on prioritisation and decision making are great starting points for understanding the everyday challenges facing many PMs. (His Minimum Viable Product Manager is, also, in my view, the most illuminating of the hundreds of articles on what a product manager is/does).
If you found this article useful, please give it some 👏 so I know I’m doing something right. And if you have any questions or thoughts, I’d love to hear them in the comments below. You can follow me for more practical product management stuff. I write from experience, with real-life examples and data.
Building your first Machine Learning product can be overwhelming — the sheer amount of moving parts, and the number of things that need to work together can be challenging. I’ve often seen great Machine Learning models fail to become great Products, not because of the ML itself, but because of the supporting product environment. UX, Processes, and Data, all contribute to the success of a Machine Learning model.
Over the years of building Machine Learning products, I’ve come up with a framework that usually works for me. I break down a Machine Learning product into eight steps. Here they are at a glance:
1. Identify the problem: there are no alternatives to good old-fashioned user research.
2. Get the right data set: machine learning needs data, lots of it!
3. Fake it first: building a Machine Learning model is expensive, so try to get success signals early on.
4. Weigh the cost of getting it wrong: a wrong prediction can have consequences ranging from mild annoyance to losing a customer forever.
5. Build a safety net: think about mitigating catastrophes. What happens if Machine Learning gives a wrong prediction?
6. Build a feedback loop: this helps to gauge customer satisfaction and generates data for improving your machine learning model.
7. Monitor: for errors, technical glitches, and prediction accuracy.
8. Get creative! ML is a creative process, and Product Managers can add value.
For the sake of my sanity and for better readability, I’ll split this post into three parts. This part covers steps 1 through 3. Here goes step 1!
1. Identify the Problem
Machine Learning is a powerful tool for solving customer problems, but it does not tell you what problems to solve. Before even beginning to decide on whether Machine Learning is the right approach, it is important to define the problem.
Invest in user research
There are no shortcuts to good old-fashioned user research. Every successful product identifies a specific user need and solves for it. Conduct thorough user research to identify the pain points of the user and prioritize them according to user needs. This helps to build a user journey map, identifying critical flows and potential roadblocks. Further, the journey map is super useful for defining processes and flows that need to be modified for your ML solution to work in the first place. Here is a summary of what good user research should accomplish.
Once the problem is identified, we need to explore if machine learning offers the best solution.
Look for one of these characteristics in your problem:
Customization or personalization problems, where one size does not fit all. Problems where users with unique characteristics need to be identified are usually great candidates for machine learning. For example, Amazon’s recommender system identifies unique users who would be interested in related products, and Netflix uses a ‘match percentage’ to identify shows that most closely match your preferences.
Personalization or customization problems look to identify specific users and cohorts that may be interested in specific content. They are usually driven by past user actions, user demographics, etc.
Repeatable sequence of steps. Processes which require a repetition of the same sequence of steps once the problem is identified are usually great candidates for automation. For example, the entire ‘cancel your order’ flow in e-commerce can be automated once the intent to cancel is identified. Further, recommendations for next steps can also be made based on a user’s last action.
Recognizes or matches a pattern. Look for repeated patterns that you can learn from. Spam engines, for example, index characteristics of spam messages (including text, subject line, sender information, etc.) and look at how many users marked a similar message as spam in order to identify a potentially spam email.
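To make the pattern-matching idea concrete, here is a minimal sketch of a spam scorer that combines message characteristics with crowd signals, as described above. The keyword list, weights, and threshold are all invented for illustration; a real spam engine learns these from data.

```python
# Toy spam scorer: message characteristics + "how many users flagged
# a similar message" crowd signal. All weights are illustrative.

SPAM_KEYWORDS = {"winner", "free", "urgent", "lottery"}

def spam_score(subject, body, times_flagged_by_users, total_recipients):
    """Combine keyword hits with the fraction of users who flagged similar mail."""
    text = (subject + " " + body).lower()
    keyword_hits = sum(1 for word in SPAM_KEYWORDS if word in text)
    crowd_signal = times_flagged_by_users / max(total_recipients, 1)
    # Weighted combination; a real engine would learn these weights.
    return 0.2 * keyword_hits + 0.8 * crowd_signal

def is_spam(subject, body, times_flagged_by_users, total_recipients,
            threshold=0.5):
    return spam_score(subject, body, times_flagged_by_users,
                      total_recipients) >= threshold
```

The crowd signal is exactly the "learning from humans" loop the post describes: every user who marks a message as spam improves the prediction for everyone else.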
Google’s Spam engine marks such messages based on previous data and message characteristics, learning from humans for a better experience.

2. Get the right data set
The success or failure of machine learning relies on the coverage and quality of your data set. A good data set has two characteristics — a comprehensive feature list, and an accurate label.
Google Draw’s open datasets are awesome!
Features are the input variables to your model. For example, while building a recommendation system, you may want to look at a user’s purchase history, the closest matches for the products they bought, their buying frequency, and so on. Having a comprehensive feature list ensures that the ML model understands enough about the user in order to make a decision.
Labels tell the model the right from the wrong. A machine learning model is trained iteratively on a data set. In each iteration, the model makes a prediction, checks if the prediction is right, and calibrates itself for wrong predictions. It is thus important for the model to know whether or not a prediction is right. Labels convey this information to the model. For a recommender system, this label will be whether the recommended item was indeed purchased. Having the right set of labels influences the model’s performance.
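Putting features and labels together, a single labelled training example for a recommender might look like the sketch below. The feature names and values are hypothetical, chosen to match the recommendation example above.

```python
# One labelled training example for a hypothetical recommender system.
# "features" describe the user; "label" tells the model right from wrong.
training_example = {
    "features": {
        "purchase_history": ["pasta", "olive_oil"],
        "buying_frequency_per_month": 4,
        "closest_match_for_last_purchase": "parmesan",
    },
    # The label records whether the recommended item was actually
    # purchased, which is what the model calibrates itself against.
    "label": {"recommended_item": "parmesan", "was_purchased": True},
}
```

A comprehensive data set is simply thousands or millions of rows shaped like this, with the label collected after the fact from real user behaviour.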
Bonus: some awesome resources to get you started with what labelled data looks like:
Kaggle datasets: a huge collection of labelled data on just about anything!
Quickdraw: crowdsourced data on line images and strokes
3. Fake it First
Investing in machine learning models is expensive. Building a dataset with the right features and labels, training the model and putting it in production can range from a few weeks to a few months. It is important to get a signal early on and validate if the model will work. A good idea is to fake the interaction first, for a small set of users.
For personalization, have a list of items ready for a user to select, based on what they selected last. Simple rule-based engines are often the first steps to evolution into a more complex machine learning model. Some examples of rule-based recommenders:
If the user bought pasta, they probably need cheese too
A user buys toothpaste once a month, so we recommend it to them a month after their last purchase
A user always pays by credit card, so we surface that as the default payment option next time
It’s obvious that we cannot write simple rules to cover every case — and that is when the power of machine learning can be used best — but a few simple rule-based proxies go a long way toward validating the outcome of the machine learning approach.
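The three rule-based examples above can be sketched in a few lines of code. This is a minimal illustration, not a production recommender; the pairing table, the 30-day replenishment window, and the user record fields are all assumptions.

```python
from datetime import date, timedelta

# Rule-based proxies for a recommender, mirroring the examples above.
PAIRINGS = {"pasta": "cheese"}  # "bought pasta, probably needs cheese too"

def recommend(user):
    recs = []
    # Rule 1: complementary product pairings.
    for item in user["recent_purchases"]:
        if item in PAIRINGS:
            recs.append(PAIRINGS[item])
    # Rule 2: monthly replenishment (e.g. toothpaste), a month
    # after the last purchase.
    for item, last_bought in user["monthly_items"].items():
        if date.today() - last_bought >= timedelta(days=30):
            recs.append(item)
    return recs

def default_payment(user):
    # Rule 3: surface the habitual payment method as the default.
    return max(set(user["payments"]), key=user["payments"].count)
```

When these hard-coded rules start producing lift, that is the early signal that a learned model, which generalizes beyond the rules you thought of, is worth the investment.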
For automating customer flows, it may be a good idea to ask a human on the other side to do exactly what an automation would do — and nothing else. Companies often test whether they should build a chatbot through a controlled release with a human answering questions on the other side.
The idea is to test if users respond positively to machine learning. While these techniques might not give the best results, they are important for getting a signal. Getting a signal early on can save time and effort, and help correct the vision and direction of the product. This is your best shot at guaranteeing returns on the investment put into building a Machine Learning system.
Phew! Writing that was more work than I anticipated. Let me know if this was useful so far, and what more you would like to hear. I’ll keep writing more about the approach, and your feedback will make future posts better.
You can use Jobs-to-be-Done to learn from your customers and measure a new product idea before building anything.
The Build-Measure-Learn Feedback Loop is a core tenet of the lean startup methodology, popularized by Eric Ries. The main argument is that spending too much time behind closed doors chasing a perfect product increases the cost of failure. To decrease the cost of failure and preserve resources to iterate towards growth, Ries and other lean startup practitioners suggest the following process:
Develop a hypothesis
Get a low-cost minimum viable product out to the market
Measure the results
Learn from the results
Iterate i.e. return to the build phase.
The assumption is the best learning occurs when people use your product. So, you build, measure, learn, and then repeat. The primary benefit is getting customer feedback and learning if an idea will work before it “gets too far,” i.e. before significant resources have been sunk into a product that customers never wanted in the first place.
We all know that building an MVP, no matter how minimal, is never free. And anyone who has tried to build an MVP, especially at a big company, has likely suffered from MVP scope creep. Suddenly, people start calling the MVP “v1” and are asking engineers to participate in its development. And while they’re at it, they get marketing to whip up some creative to drive users to the MVP. The costs balloon.
The next frontier in decreasing the cost of product development and mitigating the risk of failure in your business and your role is to measure and learn from customer feedback before building anything at all, even an MVP.
Jobs-to-be-Done enables you to measure the likely results of an idea before the first line of code is written or the first pixel of a design is placed.
Here’s how you measure the value of a product idea before building it:
Determine what job customers would hire the product to do.
Determine which needs in the job your product idea satisfies and if those needs are unmet.
Compare how quickly and accurately your new idea satisfies the unmet needs to how quickly and accurately the existing solutions satisfy them.
What job would customers hire the new product to do?
“Your customers do not buy your product; they hire it to get a job done. The struggle with the job causes a purchase,” says Clay Christensen, author of Innovator’s Dilemma and Competing Against Luck. The first step in measuring your product idea is to identify your customer’s job, in other words, the goal your customer is trying to achieve.
Sometimes the goal is obvious. Salespeople want to acquire new customers or close new business. When it’s not so obvious, customer interviews can reveal the job-to-be-done.
A job has a clear goal, an action verb, and a direct object, e.g. “get to a destination on time,” “acquire new customers,” or “create a mood with music.” Your customer is the subject of the sentence, not your company.
No solutions. Products, solutions, and technologies change over time. To maintain a stable target for your team, keep them out of your job definition.
The “wake up in the morning” test. It should make intuitive sense for someone to wake up in the morning with the job-to-be-done on their mind, thinking, “I have to do this today!” For example, unless you live in a city with alternate side parking, you don’t wake up thinking, “I need to park my car today!” But, you might wake up thinking “I need to be on time today!” The job is “get to a destination on time,” not “park the car.”
Identifying the job-to-be-done is the first step in learning from your customers before building.
Does your idea target an unmet customer need?
Once you’ve identified your customer’s job-to-be-done, you need to identify their struggle i.e. their unmet needs in the job.
What is a customer need?
Does your company agree on what your customer’s needs are?
In Jobs-to-be-Done, we define customer needs as an action a customer must take using a variable required to get the job done.
One way to think about customer needs is the actions that must happen quickly and accurately for the job to be executed successfully.
For example, in the job of “get to a destination on time,” three of the many variables are travel conditions, open times, and the distance between the parking spot and the destination. Here are examples of actions that need to be taken:
Determine if an alternate route should be taken due to unexpected travel conditions
Determine the open times of any planned stops
Find a parking spot close to the destination
If your decision to take an alternate route does not happen fast enough to take the route or you choose the wrong route (the decision is inaccurate), you will not get to the destination on time.
You can measure the speed and accuracy of customer needs in Jobs-to-be-Done, and the measurable needs are the foundation of measuring before building.
The measurement begins by determining which needs are unmet.
After interviewing customers to validate your list of customer needs in the job, you can run a survey asking customers which needs are important and not satisfied and their willingness to pay to get the job done. The interviews are learning from your customers. The survey is measuring.
Needs that have high importance and low satisfaction are unmet. The survey gives you quantitative evidence of what percentage of customers find the need to be unmet and whether or not they are willing to pay to have their needs in the job satisfied.
The Jobs-to-be-Done survey measures whether or not a problem is worth solving.
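As a sketch of how such a survey could be scored, the function below flags needs with high average importance and low average satisfaction. The 1–5 rating scale and the thresholds are assumptions for illustration, not part of the Jobs-to-be-Done method itself.

```python
# Flag unmet needs from survey responses: high importance, low satisfaction.
# Ratings are assumed to be on a 1-5 scale; thresholds are illustrative.

def unmet_needs(survey_results, importance_min=4.0, satisfaction_max=2.5):
    """survey_results: {need: [(importance, satisfaction), ...]} per respondent.

    Returns (need, avg_importance, avg_satisfaction) tuples, most
    important and least satisfied first.
    """
    flagged = []
    for need, responses in survey_results.items():
        avg_imp = sum(i for i, _ in responses) / len(responses)
        avg_sat = sum(s for _, s in responses) / len(responses)
        if avg_imp >= importance_min and avg_sat <= satisfaction_max:
            flagged.append((need, avg_imp, avg_sat))
    return sorted(flagged, key=lambda t: (-t[1], t[2]))
```

The same response data also tells you what fraction of respondents find each need unmet, which is the quantitative evidence of market size the survey is designed to produce.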
For example, before Waze launched we conducted a survey for the job “get to a destination on time.” The highest scoring unmet need in the results was “determine if alternate route should be taken due to unexpected travel conditions.” Waze satisfied this need and enjoyed accelerating growth.
Once you identify the unmet needs in the job-to-be-done, you need to determine if your product idea is targeting one of them. This is a major learning moment–you learn if your idea is solving a worthwhile problem or if it is a solution in search of a problem. And you have learned this before building.
Compare your idea to the speed and accuracy of the existing solutions
Assuming you learned in step two that your idea is attempting to satisfy an unmet need, it’s time to measure if it does so better than the existing solutions. Customers switch to a new solution only when it gets the job done better.
What does “better” mean?
Above, we noted that customers want to get the job done quickly and accurately. To pressure test this, consider the following: Are there any goals you have that you like to achieve slowly? Can you achieve them at all if every step you take in the process is inaccurate? Can you think of any successful products that helped customers achieve goals slowly and less accurately than the previous solutions?
“Better” is satisfying the unmet need faster and more accurately than the existing solutions.
To determine if our solution is good enough to invest in, we first identify competitive solutions, measure how quickly and accurately they satisfy the targeted unmet need, and then compare those metrics to the speed and accuracy of our new idea.
The competing solutions aren’t bound to similar products. It’s any product, service or manual solution that customers use to get the job done.
Before Waze launched, customers used Google Maps, traffic reports on the radio, and calling a friend to determine if an alternate route should be taken due to unexpected traffic conditions.
If we were to measure the speed and accuracy of Google Maps in this scenario, we would use the product and write out the steps. Remember this is before the app had a feature that would automatically suggest a new route. To determine an alternate route, users had to:
Drag the route line on the map to a different road
View the new calculated time to destination
Compare it to the original route
Repeat for all variations until the fastest route is found
This took a few seconds to many minutes depending on how many variations the user was willing to try. The real killer here was the accuracy. It was so hard to identify alternate routes that people would stop this well before testing them all. Therefore, the decision to take an alternate route or not was often inaccurate.
Now, we have speed and accuracy benchmarks: seconds to minutes and inaccurate.
Waze’s auto-suggest feature was automatic and instantaneous, enabling you to determine the alternate route before you had to make a turn. Furthermore, it was more accurate because it calculated the variations for the user.
The trick is that we can measure this idea before building it by determining how fast and accurate it would be if executed perfectly. If we determine it would be less fast and accurate than the leading existing solution, we have learned that the idea is not good enough and not worth investing in, even at an MVP level.
Learning and Measuring Before Building the MVP
At this point in our process, we have:
Learned the customer needs from our customers
Measured the unmet needs (the problems worth solving)
Learned if our product idea is targeting an unmet need
Measured the willingness to pay of the customers: is there a market worth targeting?
Measured the performance of the competitive solutions
Learned if our solution is better than the existing solutions and can cause people to switch to it if executed well in the build phase.
In short, we have measured and learned before building.
Either you will learn that your idea requires more refinement before it’s worth building anything, even a prototype, or you will have validated the idea and know which features are critical to include in the MVP.
In both cases, you have not only decreased the cost of a failed idea but you have also decreased the cost of success.
As a wise man once said, “Anyone can prioritize feedback. It’s prioritizing feedback right that’s hard.”
Okay, so full disclosure, that wise man is me, and that’s literally the first time I’ve ever said that in my life. But the point still stands.
There are literally dozens of different ways in which product managers try to prioritize the feature requests and feedback that falls into their laps on a daily basis.
That’s the good news; you’re spoiled for choice.
The bad news? Well, I’m afraid that a lot of the tried ’n’ tested ways of prioritizing aren’t actually that effective when you scale and they can even do more harm than good, leading you to spend time on things that don’t have a positive impact for your customers or the business.
So I put together this list of methods that you’re better off veering away from when you start to build SaaS products at scale.
Consider this your warning…
The Kano Model
Named after its creator, Noriaki Kano, this model was first introduced in the 1980s.
It revolves around measuring how satisfied a customer would be with a feature, and how functional the feature is. You can then plot the features on a graph. The features that are both highly functional to the product, and would satisfy your customers, are the ones you build.
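For context, the Kano questionnaire asks each customer a paired “functional” question (how would you feel if the feature were present?) and “dysfunctional” question (how would you feel if it were absent?), and the answer pair maps to a category. The sketch below uses a simplified mapping table; the full Kano evaluation table covers five answers per question.

```python
# Simplified Kano classification from paired questionnaire answers.
# A full Kano table has 5x5 answer combinations; this is a subset.

KANO_TABLE = {
    ("like", "dislike"): "performance",   # more is better
    ("like", "neutral"): "delighter",     # unexpected, pleasing
    ("neutral", "dislike"): "must-have",  # expected baseline
    ("neutral", "neutral"): "indifferent",
}

def classify(functional_answer, dysfunctional_answer):
    """Map (feature-present, feature-absent) reactions to a Kano category."""
    return KANO_TABLE.get((functional_answer, dysfunctional_answer),
                          "questionable")
```

Notice that even this toy version needs two answers per feature per respondent, which is exactly the questionnaire burden criticized below.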
I suspect the popularity of the Kano model stems from the fact that it makes sense. It has a high level of face validity (and people told me my Psychology degree was worthless…).
But there are a couple of major limitations with the Kano model when it comes to SaaS products at scale:
Firstly, it’s a lot of hard work for both you and your customers. To get the information you need to make this model work, you have to subject your customers to tedious questionnaires that seem to go on forever and ever.
And even if you manage to get the data, how much does it really tell you? Pretty graphs don’t make MRR or solve your customers’ problems. You need insights. Data without insights is like a pizza without toppings — bland, uninspired, and honestly why would you not want toppings you weirdo?
The features that your customers rate in the boring questionnaire are ones that you’ve thought of. What if you missed something? What if you didn’t think of other features that your customers would like? What then?
As you can see the Kano model is rife with limitations. Let’s move on.
Buy a Feature
No, this isn’t my pitch for a new board game. This exciting sounding method involves people spending a set budget on features. This way you can see the features that they value the most.
If this sounds like a poor attempt at an ice breaker, that’s because that’s exactly what this sounds like. Now, I’ll be the first to admit that this is a clever way of getting your customers or employees involved with prioritizing your feedback. Everyone loves a game, right?
But, and of course there’s a but, the prize for playing this game isn’t as valuable as you might think…
The good part is that your customers are prioritizing. They can’t choose every single feature and so they’re forced to pick the ones they want the most.
But, once again, they haven’t submitted these features. It might be that they have other things they want that simply don’t appear in the game.
Besides, should you really be talking about features with customers anyway? Capture user problems, then design the solutions. If you talk in terms of features you’re missing a trick!
It also takes a lot of time & organizing. It generally works better face to face so that means you need to arrange for a group of customers to get together and play the game, and if you’ve ever tried to organize a games night with friends you know how hard that can be.
Buy A Feature looks fun, but fun doesn’t pay the bills.
Prune the Product Tree
Here’s another game. This one’s called Prune the Product Tree and it kind of sounds like something created by Mr Miyagi.
You start by drawing a large tree, maybe a nice sturdy oak, or a wintery pine. The thicker limbs represent core product areas and the outer branches are available features. You then ask customers or team members to place the features they want on the tree.
The idea is that you can then see if the tree is shaping up nicely, identifying the areas which are over or under-developed.
Again this only works in small groups and on a small scale. It doesn’t involve everyone or capture changes over time. Maybe, if I was feeling generous, I could advise using this to help organize your own thoughts. But you know what, I’m not feeling generous and so all I’m going to say is that this will not lead to customer insights and scalable feedback.
I’m all for planting more trees, but maybe not this sort, okay?
Value Vs Effort
Let’s move away from games before I turn green and go on a rampage. Next up, Value vs Effort. The way this works is simple. Each feature is assigned a relative value of how much it will bring to your product, and a relative cost of how much effort it will take your dev team to build.
So far, so good.
You can then plot features on a graph and immediately identify the features that are high value but low effort, and these should be your priorities.
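The quadrant logic is simple enough to sketch directly. The 1–10 scoring scale, the midpoint split, and the quadrant names are assumptions for illustration.

```python
# Value-vs-effort quadrant classifier. Scores assumed on a 1-10 scale;
# the midpoint split and quadrant names are illustrative.

def quadrant(value, effort, midpoint=5):
    """Classify a feature by its value and effort scores."""
    if value > midpoint and effort <= midpoint:
        return "quick win"    # high value, low effort: build first
    if value > midpoint:
        return "big bet"      # high value, high effort
    if effort <= midpoint:
        return "maybe later"  # low value, low effort
    return "money pit"        # low value, high effort

def prioritize(features):
    """features: {name: (value, effort)} -> names by value/effort ratio."""
    return sorted(features,
                  key=lambda f: features[f][0] / features[f][1],
                  reverse=True)
```

The weakness discussed next is visible right in the code: the inputs are only value and effort, so anything customers care deeply about but that scores poorly on those two axes silently drops to the bottom.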
Receptive actually includes a version of this method and our customers use it all the time. But that’s only one part of Receptive. Using Value vs Effort on its own can only get you so far. It’s kind of like trying to make a cake without sugar. It looks like a cake but it sure as hell doesn’t taste like one.
What if, for example, a feature is ranked as low business value and high effort? Chances are you’re going to ignore that feature. But what if 80% of your customer base really, really, really want that feature because it solves a huge pain point for them? Shouldn’t you take another look? Value vs Effort only gives you part of the picture so just be aware of that when it comes to your roadmapping decisions.
I don’t see the value in it, and it’s costing me a lot of time. I need to move on.
Ending on a High Note
The one thing that all of these prioritization methods have in common is that they are seriously time-consuming. That’s fine if you’re small, but they’ll hold you back if you are serious about product feedback and prioritization at scale.
Unless you’re a startup or small business, these methods will distract your product team, creating a lot of unnecessary work for very little return.
That’s all from me, I’m off to play a fun game of Prune the Product Tree. Can’t wait…
Leading SaaS companies use Receptive to build winning products. Want to join them? — receptive.io