One of my favorite parts of Mind the Product London this past fall was meeting a product manager and a designer whom I had coached the prior year.

We met for afternoon tea and to my surprise they gushed about our time working together. They had so much they wanted to share about how their work had changed. They described how much time they were spending with customers and what impact it was having on their work.

It made me so happy. Immediately, I knew that I wanted to share their story. One of the challenges of teaching continuous discovery is that we don’t have very many public examples of what “good” looks like. So I asked Amy O’Callaghan (the product manager) and Jenn Atkins (the product designer) if they would be willing to share their story and they agreed.

Amy and Jenn work at Snagajob, a company that is well on its way to becoming a continuous discovery powerhouse. I feel incredibly honored to have played a role in this transformation and I can’t wait to see how it turns out.

I’m going to let Amy and Jenn take it from here.

(By the way, if you are looking for women product leaders to speak at your conference, these two have a phenomenal story to share.)

Melissa Suzuno, my blog editor, conducted this interview.

Melissa: Tell me a little bit about your team. What are you working on? What are you trying to accomplish?

Amy: Our team has gone through a number of evolutions, but the core problem has always remained connecting people who need work with people who need workers in a way that's satisfying for both. We are currently fully staffed (hooray!) with 2 API devs (wizards), 2 UI devs (magicians), 1 QA (a saint), 1 product designer (unicorn), and a product manager (me).

We are working on some really cool stuff that creates transparency in the job search and hiring process that benefits applicants and hiring managers.

What was your life like before coaching with Teresa? How did you decide what to build?

Amy: Hunches, directives from leadership, and sometimes things we uncovered in discovery with our users.

Jenn: I was on a few different teams before we reorged and received training from Teresa. Within two years I had worked with three product managers. All were great, passionate, and talented managers; however, we never really had time to establish a true partnership and journey through discovery together. We were reorged every quarter and tasked with building out a feature. We had a quarter to get the MVP out and then never had a chance to make incremental improvements because it was on to the next feature. So you can imagine we had a lot of bloat and a lot of unfinished products we wish we could have made better.

How often did you talk to customers?

Amy: Once every few weeks seemed acceptable since our customers are traditionally very busy and hard to get ahold of.

Jenn: I typically talked with customers on a weekly basis, mostly through online screening (UserTesting and Ethnio) and sometimes in person at our office. We had a problem with no-shows for in-person interviews and scheduled calls for both employers and job seekers.

What did you think about conducting customer interviews?

Amy: They were awesome, but scheduling was a complete crapshoot. More often than not, they'd have to reschedule or cancel at the last minute due to conflicts.

Jenn: That was my favorite part of my job, but also frustrating because the problems we would uncover through research didn't align with or translate into things that would help us achieve what we were told to build. Also, there were many times when the product manager was pulled into meetings that trumped research. So there wasn't as much shared understanding going on. I would create research reports (8 hours' worth of work) that no one would ever look at.

How often did you run product experiments?

Amy: Not often—we typically ran usability testing and then launched without much lightweight testing in the actual product itself.

Just before we started working with Teresa, a consultant from a well-known company came in to talk about agile product processes. He asked us to think of our time as a pie chart and what it would look like. Naturally I actually drew it (image left, below). Then he described what he believed it should look like, and I drew that, too (image right, below).

Amy sketched out how she spent her time and then how a consultant suggested she should be spending it.

In that moment, I felt a mix of frustration and disillusionment. Yeah, we all want to be doing more research, but it's not realistic. With all the stuff we have to get done, no one can do that much discovery. I'd heard it before, and it was starting to feel like something consultants say to set a goal you can never reach, but that sounds good to your boss.

[Note from Teresa: I love how honest Amy is about this. I bet you can guess what’s coming…]

Jenn: From a designer perspective, we were heavier on delivery (prototyping and usability testing) and meetings than true discovery within the problem space. It was more discovery in terms of trying to find the problem within the solution we were told to build. As a designer, we naturally want to find the problem and then solve for it; so this process felt a bit backwards. Plus it was hard to find passion in what we were delivering because we didn’t always agree this was the right problem to solve or the right way to go about solving for it.

What was the process of working with Teresa like? What were some moments that stood out to you?

Amy: Being on the hook for talking to at least one customer per week initially felt overwhelming, but it made us get scrappy and realize that there were more effective ways to reach people than those we were using. This was a huge step in developing the habit of getting to really know our users and their problems.

The opportunity tree was a BIG one. We use it at different places in the discovery and delivery cycle—sometimes daily, sometimes monthly. But at all times the tree is irreplaceable as a way to visually communicate all the discovery and thinking that goes into development and decision making. Stakeholders who used to throw curveballs into our sprints can now truly grasp the level of thinking (and testing) that has gone into our opportunities and are much less likely to interrupt our course.

[Note from Teresa: Yes! The tree is a great tool for communicating with your leaders.]
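
For readers who like to see the structure spelled out, here is a minimal sketch (in TypeScript) of how the hierarchy Amy describes (an outcome at the root, opportunities beneath it, solutions under opportunities, and experiments under solutions) could be modeled. The field names and the example content are illustrative assumptions, not Snagajob's actual tree or an official template.

```typescript
// Illustrative only: one way to model an opportunity solution tree in code.
// The shape (outcome -> opportunities -> solutions -> experiments) follows the
// structure described in this post; field names and example content are assumptions.

interface Experiment {
  hypothesis: string;
  status: "planned" | "running" | "validated" | "invalidated";
}

interface Solution {
  idea: string;
  experiments: Experiment[]; // the tests run against this solution
}

interface Opportunity {
  need: string;            // a customer need, pain point, or desire
  evidence: string[];      // e.g. quotes pulled from interview snapshots
  children: Opportunity[]; // opportunities can be broken into sub-opportunities
  solutions: Solution[];
}

interface OpportunitySolutionTree {
  desiredOutcome: string;  // the outcome the team is driving toward
  opportunities: Opportunity[];
}

// A hypothetical, tiny tree a team might share with stakeholders.
const exampleTree: OpportunitySolutionTree = {
  desiredOutcome: "Increase the rate at which applicants complete job applications",
  opportunities: [
    {
      need: "Applicants don't know how long an application will take",
      evidence: ['"I gave up because I had no idea how many steps were left."'],
      children: [],
      solutions: [
        {
          idea: "Show a progress indicator with estimated time remaining",
          experiments: [
            { hypothesis: "A progress bar reduces drop-off on step two", status: "planned" },
          ],
        },
      ],
    },
  ],
};

console.log(exampleTree.desiredOutcome, exampleTree.opportunities.length);
```

In practice the tree usually lives on a whiteboard or in a diagramming tool rather than in code; the point is only that every solution hangs off a specific opportunity, and every opportunity rolls up to the outcome the team is driving.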

Learning to assess, test, and manage risks against your solutions was a lightbulb moment for me. I'd known it was part of the job and tried to execute it in the past, but having a systematic approach made all the difference in doing it effectively.

Jenn: The foundation of the triad going through the coaching process was key. It was refreshing and exciting to finally feel like I had a partner in product discovery. I truly believe this is what helped push us to the next level. Our lead dev at the time wasn't able to be as engaged, but that was okay; we iterated on the process and just made sure to find ways to bring the devs on our squad along with us on the discovery journey.

Interview snapshots were a great way for us to quickly find similarities in problems our customers had. This became an epiphany for us as soon as we realized what we needed to focus on. For the first time we figured out what customers truly needed, and it was because we were able to read through what they were telling us and see what they actually meant. This started us down a course of discovery that has been truly invaluable to our squad’s product development.

Opportunity Tree!!! I echo everything Amy said. For me, the concept was confusing at first, but I caught on once we started building it out. In hindsight I think it was also confusing because at the time we didn’t really have an outcome that was driven out of our discovery process. If it had been introduced after we had our epiphany about where to focus I think I would have caught on faster. Also, Teresa’s talk from Mind the Product in London was the best explanation of the process and it really clicked for me there in a way that I was able to start sharing the concept with other squads.

The Kano model and opportunity assessments were key in helping us understand where to focus our efforts and took away the feeling of being overwhelmed by all the opportunities and solutions. We combined the Kano model with Jared Spool's analogy of hot water and cookies to help us with our decision making as well.

What are things like for you now? What are you doing differently that you weren’t doing before? How has this impacted your work?

Amy: We’ve developed a belief that truly understanding the problem is more important than thinking of and pitching solutions.

[Note from Teresa: Nothing in the world makes me happier than reading that sentence.]

For the first time in my career I truly feel like a subject matter expert. If I don’t have qualitative or quantitative data to answer a question, I can get my hands on it. I feel confident pushing for things that we’ve prioritized because we know they will bring value—we aren’t taking nearly as many risks with development time as we used to, and the dev team appreciates that.

Jenn: Ditto to what Amy said! Also, I feel like a true product designer—not just an order taker. We started gaining trust and buy-in from stakeholders because they realized we have a deep understanding of our customers. Leadership sees the importance of keeping our squad intact, and instead of moving us to different teams, they're giving us different problems to solve.

The entire squad is excited, INCLUDED, and passionate about what we're delivering. They're very aware of what's going on through the various ways we include discovery in our squad. Amy and I post updates in Slack. My part of daily standup is the after-party, where I show them prototypes of what we're working on and get their feedback. They'll pitch in ideas and sketches. I'll start Slack calls so they can listen in on research and usability testing. They've come out to the field with us. We have quarterly two-hour Discovery Zone days where Amy provides breakfast/lunch and we present all of what we learned in discovery.

For the first time we are actually doing dual discovery and delivery. We’re incrementally improving our MVPs.

Bonus: We recently joined forces with another tribe to tackle a problem similar to our current problem-space and have been able to quickly fall into our rhythm of dual discovery and delivery. We’re humming along and it feels great.

[Note from Teresa: Heck, yeah! This is what a continuous discovery team looks like—empowered, engaged, and aligned!]

How often do you engage with customers?

Amy: Weekly, or we start to get itchy and just call random people from our NPS screener.

How often do you run experiments?

Amy: We run little manual experiments frequently: several times a month to several times a week, depending on where we are in a cycle. Alongside that, we also deploy fake door tests and other things that require dev work, but less often—more like a couple of times a quarter.

[Note from Teresa: Note how Amy distinguishes between experiments that require code changes and those that don’t.]
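
To make the fake door idea concrete, here is a hedged sketch of what a minimal browser-side fake door test might look like in TypeScript. The feature copy, the event names, and the track helper are placeholders assumed for illustration; they are not Snagajob's actual implementation or tooling.

```typescript
// Illustrative sketch of a browser-side fake door test, not anyone's actual implementation.
// A link to a not-yet-built feature is shown, and clicks are recorded so the team can
// gauge demand before investing development time. `track` is a placeholder for whatever
// analytics call your product already has.

function track(event: string, properties: Record<string, string>): void {
  // Placeholder: forward the event to your analytics pipeline.
  navigator.sendBeacon("/events", JSON.stringify({ event, properties }));
}

function mountFakeDoor(container: HTMLElement): void {
  const link = document.createElement("a");
  link.href = "#";
  link.textContent = "Compare pay across similar jobs"; // hypothetical feature copy

  link.addEventListener("click", (event) => {
    event.preventDefault();
    track("fake_door_clicked", { feature: "pay_comparison" });
    // Tell the user the truth: the feature doesn't exist yet.
    link.insertAdjacentText("afterend", " Coming soon. Thanks for letting us know you're interested!");
  });

  track("fake_door_shown", { feature: "pay_comparison" });
  container.appendChild(link);
}

mountFakeDoor(document.body);
```

Even a test this small needs a developer to ship it, which is why, as Amy notes, teams tend to run these less often than purely manual experiments.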

Are you using the opportunity solution tree?

Amy: Oh heck yes! You can pry it from my hands when I'm no longer interested in solving problems. We've been iterating on it, weaving in journey maps, pain points, quotes, and all sorts of fun stuff, but it's still the same core tree.

Jenn: YUP! Not only are we using it, we’re evangelizing it throughout our product org. I may or may not have called myself a Teresa Torres Ambassador and started planting opportunity trees throughout the product org. Amy and I have referenced it several times in our discovery share-outs and I’ve most recently presented alongside another researcher on the topic of observational research to our product org. It’s a one-hour session in which the first 30 minutes are about how to do observational research and the last 30 is a workshop on how to use the opportunity tree to get the most out of your discovery. We’ve done two so far within the last three weeks across all the offices and squads/tribes. What’s been really awesome is the response from the other teams sharing their trees and asking for feedback.

What have the results of this shift been?

Amy: More than a year later, my time frequently looks like that ‘ideal’ pie chart. Our sprints only include features and tasks if they are validated through discovery and usability first. Our developers’ morale is amazing and the team works together beautifully. We have time for tech debt since we are no longer furiously releasing half-baked features praying we find something that sticks. I’ve started to sound like one of those consultants selling impossible dreams when I talk about my time and my team. I’ve never had a more fulfilling time in product.

I also measure Teresa’s impact on us in dead trees. I’ve never used so many sticky notes or notebooks before. (I’ve been through 2.5 notebooks in the year and change since we finished with Teresa—a personal record).

Amy measures Teresa’s impact on her team in the number of notebooks and stickies she now goes through.

Jenn: Our OKRs are formed around the opportunities and outcomes that come from our discovery and there are SO many opportunities for us to build products people love. I have input into the OKRs and it feels good to have earned a seat at the table.

Our favorite type of research is immersive research, where we have the opportunity to live our customers' lives and understand what is most painful about the hiring and job-finding process. We have deep empathy for them and can speak their language, which opens them up to working with us. We have developed a group of charter customers who are excited when we meet with them to see what we're working on. They love seeing themselves in the product, where they've influenced us. They understand when we don't build something exactly as they wanted because we're building for the persona and not for them specifically.

We delivered MVPs for three features that were born out of our discovery—features customers are actually using without us putting tours, callouts, or tooltips on the page. We're iterating, building, and scaling.

Jennifer and Amy spent a month doing immersive discovery with Snagajob customer Chick-fil-A—and used this discovery to fuel three new features.

What would you say to another team who was considering working with me? Why should they do it? What should they know?

Amy: I’ve heard from some teams that the time commitment is hard. It is, but it’s strategically important. Our team trained with Teresa for three months. If it were significantly less time or less rigorous, we’d have been able to fake it without truly having to form new habits that would stick.

Be flexible depending on your own circumstances—we wound up including our developers in a different way than we learned with Teresa, but for the team we have today it’s the right fit.

Stick with it. Communicate frequently back to the organization to share your learnings so that other people can see the change and start to get excited too. If your leadership is bought in and you can deliver results from this training to back their confidence, incredible things can happen.

Jenn: I would also add that even if the team doesn’t end up adopting the opportunity tree, the fundamentals behind it are reason enough to work with Teresa. The coaching reset our team and everyone gained a shared understanding of how discovery and product development works. We were all on the same page and from that we were able to form a partnership. The left side of the brain came together with the right side of the brain (or you can slice it horizontally—either way, we need each other to be present and passionate for the magic to happen).

She opens your mind to think above and beyond the solutions, and that's where your product can become a market differentiator and have true, valuable impact.

Final note from Teresa: This Q&A blew me away. I’m so proud of this team. They answered these questions a year and a half after going through coaching and they’ve continued to practice everything they learned. They have graduated from just being a highly effective continuous discovery team to also being advocates for continuous discovery within their organization. They are real change agents doing awesome work. Congratulations Amy and Jenn! And don’t let up now. I can’t wait to see what you do next.

The post What a Good Continuous Discovery Team Looks Like [Case Study] appeared first on Product Talk.

Meet Sally and Pam. Sally is a product manager, Pam a user experience designer, and they are working on a new mobile app.

They’ve conducted customer interviews, defined their MVP, and are now working through the initial designs.

Even though their MVP will only include a fraction of their near-term vision, Pam wants two weeks to work through the design of the near-term vision, as she’s worried if they build piece by piece, they’ll end up with a Frankenstein user experience. She wants to get feedback on the near-term designs before they start building the MVP.

Sally is anxious to get the MVP out the door as soon as possible and wants Pam to focus on those designs first.

How should they proceed?

Pam is right to be worried about the overall user experience. We’ve all been frustrated by products that feel like features have been cobbled together.

Sally is also right. They should be focused on getting their MVP out the door as quickly as possible. Taking two weeks to design feels too long, especially when the majority of the work won’t impact the MVP.

The Key Assumption That We Think Saves Time But Actually Wastes It

To resolve this conflict, we need to expose a key assumption at play here.

Pam is assuming that the near-term vision is mostly right. She wants to ensure that the design of the MVP is coherent with the design of the near-term vision. She doesn’t want to design A, without knowing how A will fit with B, C, and D.

This approach would be effective if we were confident that A, B, C, and D were the exact right things to build. We don’t want to design A in isolation, without considering B, C, and D. Otherwise when we get to B, we’ll need to redesign A, and when we get to C, we’ll need to redesign A and B. This would be very inefficient.

However, if we assume that our near-term vision and our MVP are mostly wrong, then Pam’s approach doesn’t save us time. Instead, we are doing two weeks of design work that will likely be wasted.

Even though decision-making research suggests that we should be prepared to be wrong, it's actually quite hard to do this in practice. Our egos get in the way. It feels like we are right and so we proceed as if we are.

But when teams instrument their products and honestly track the impact of their product changes, we see that we are wrong more often than not, even when we feel like we are right.
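
As a rough illustration of what "instrumenting" a change can look like, here is a small TypeScript sketch, assuming a made-up checkout flow and a placeholder logEvent function rather than any specific analytics product. The idea is simply to record both who saw the change and what happened next, so the team can check whether the change actually worked instead of trusting the feeling of being right.

```typescript
// Illustrative sketch of instrumenting a product change so its impact can be measured
// rather than assumed. The checkout flow and logEvent function are made-up placeholders.

type Variant = "control" | "new_design";

// Deterministically assign a user to a variant so the split stays stable across sessions.
function assignVariant(userId: string): Variant {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 2 === 0 ? "control" : "new_design";
}

function logEvent(name: string, payload: Record<string, string>): void {
  // Placeholder: send to whatever analytics pipeline the team already uses.
  console.log(name, payload);
}

// Record both the exposure and the outcome, so the change is judged by results,
// not by how right it felt when it shipped.
function onCheckoutViewed(userId: string): Variant {
  const variant = assignVariant(userId);
  logEvent("checkout_viewed", { userId, variant });
  return variant;
}

function onCheckoutCompleted(userId: string, variant: Variant): void {
  logEvent("checkout_completed", { userId, variant });
}

const variant = onCheckoutViewed("user-123");
onCheckoutCompleted("user-123", variant);
```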

So if instead, we assume that we will get A wrong, or at least parts of A wrong, and that when we get to B, we’ll likely need several iterations to get B right, and so on, then taking two weeks upfront to design our near-term vision no longer makes sense.

If we assume we are likely to be wrong, Sally’s argument to get the MVP out as quickly as possible makes more sense.

The Value of Inefficient Iterations

Don’t assume your initial vision is correct. Use your iterations to build on your previous iterations.

But we also can’t ignore Pam’s concern. We do need to care about the overall user experience. This is why we iterate.

If our near-term vision includes A, B, C, and D, presumably we picked A as our MVP because it’s at the heart of the value that we intend to offer. If A doesn’t work as we expect, it puts B, C, and D at risk.

This means that we can safely ignore B, C, and D, while we work to get A right. Sally is right to focus on getting the MVP out as quickly as possible.

However, after we get A right, and we start working on B, our goal isn’t to ship B as quickly as possible. This is what leads to cobbled together designs.

At this point, our goal is to get A and B to work well together. That means we might have to change the design of A. And that’s okay.

This might feel inefficient, but it's only inefficient if you get everything right the first time. And that will rarely happen.

In the instances where we get at least some of the details wrong, this approach will be faster.

Don’t believe me? Let’s look at an example.

Suppose Pam spends two weeks designing A, B, C, and D. After launching the MVP of A, they iterate based on what they learned, and A morphs into something that no longer needs B, but instead needs E and also changes the way C needs to work.

Pam spends another two weeks cutting out the design of B, adding E, and iterating on C to reflect what they learned.

After the launch of A and E, they learn that E isn’t quite right, further impacting C. Pam spends a week iterating on A, E, and C.

You can imagine how this continues. Pam has to redesign everything each time they learn something new.

If instead Pam only designs A, she runs the risk of having to redesign A when she adds B. But here’s the key difference: While working through the options for how to make A work, she’s learned a lot about what users need from A, so she has a wealth of knowledge to draw from when modifying A to work with B.

In the previous scenario, where Pam is redesigning A, B, C, and D together, she hasn’t learned anything about A, B, C, and D yet, so her designs are guesses at best.

While Pam will still have to go through many iterations to get A, B, C, and D to work well together, each iteration is informed by the prior one. This leads to shorter cycles and faster overall design.

This is counterintuitive. Let’s look at why.

Most Teams Adopt a Validation Mindset

Like Pam, we tend to use customer interactions to validate our ideas. We believe it’s our job to design the solution and the customer’s job to sign off that it works for them.

As a result, we tend to wait until we are all done with the design before we get feedback from our customers. We expect our customers to validate that we got it right.

There are several problems with this validation mindset.

First, we get feedback too late in the process. Most product teams design just ahead of their engineers' delivery cycle. What they are validating needs to go into next week's sprint. If it doesn't work for the customer, the team doesn't have time to fix it. Even if it does work, the customer will have ideas for improvement (which they always do), and we rarely have time to integrate them.

Second, because of the escalation of commitment and confirmation bias, we are far less likely to act on our customer’s feedback even when we do have time.

As a refresher, the escalation of commitment is the cognitive bias where the more time and energy we invest in an idea, the more committed we become to it. If we do all of the work to design A, B, C, and D, we become committed to that design. Even those of us who have every intent to hear and integrate customer feedback will struggle with this.

And thanks to confirmation bias, the bias where we are twice as likely to hear confirming evidence as disconfirming evidence, we will miss most of the feedback from our customers that our idea isn't working quite as we intended.

This is why, even when we interview customers and usability test our ideas, we often find that they didn't have the intended impact when we release them.

This doesn’t mean that we should skip the interviews or the usability tests; it means we need to get better at both of these activities to work around our biases. (See my courses on customer interviewing and rapid prototyping.)

It also means we need to drop our validation mindset and adopt a co-creation mindset.

Why Co-Creating With Customers is the Answer

A validation mindset stems from the belief that we know best when it comes to technology. And that is true. But our customers know best when it comes to their own needs.

Now some of you might be thinking of Steve Jobs, who argued that customers don't know what they want or that no one would have known to ask for the first iPhone. So let's be clear on this distinction.

Customers don’t know what technology can do. They would have never asked for the iPhone because they didn’t know the iPhone was possible. However, they did know that they hated checking their voice mail, that texting using numbers was incredibly painful, and that small screens made it hard to find the contact you wanted to call.

Apple applied their technology expertise to solve these problems and many more with the first iPhone. There’s no way they could have done that without learning that these were important problems in the first place.

Successful products are the result of technology expertise applied to real customer needs. Co-creating with customers allows you to ensure that you are building something that your customers want or need.

So what does co-creating look like?

Unlock the Power of Co-Creating Solutions

We make product decisions every week, so we need to engage with our customers every week. Many teams struggle with this. They argue that they can't turn around designs fast enough to engage with customers every week. But this is a validation mindset creeping back in.

We can't finish production-ready designs every week, but we can and should be iterating on last week's work. If we drop our validation mindset and adopt a co-creation mindset, we can get feedback from our customers while we are still in the messy middle of iterating on our design.

Instead of asking our customers, “Does this design work?” when we get to a final design that we are happy with, we can show our customers three or four design ideas that we are playing with. We can ask them, “What do you think of these options?”

This subtle shift addresses both of the problems we identified above and has two added benefits.

First, we are getting feedback from our customers much earlier in the design process. It’s much easier to iterate on sketches and wireframes than to iterate on production-ready designs. So when our customers give us feedback, we are much more likely to integrate it.

Second, we are less prone to escalation of commitment and confirmation bias, because we have invested less time in each idea. We are also exploring a "compare and contrast" decision rather than a "whether or not" decision, which is going to help guard against confirmation bias.

In addition to solving our two problems, we also get two added benefits.

The first added benefit is that when we share less-polished designs, and especially when we share multiple options, our customers are much more likely to give us honest feedback. It’s clear to them that we are still designing and they will be less concerned with hurting our feelings.

And the second benefit is that they will be more likely to jump in and share new options that we didn’t consider. Now some designers might fear that this will lead to “design by committee.” You don’t have to use your customers’ options. But you will learn a lot about their needs from the options that they suggest—and that is priceless.

How can you shift your mindset from validation to co-creation? If you are interested in practicing your co-creation skills, check out my Rapid Prototyping course.

The post Stop Validating & Start Co-Creating was written by Teresa Torres and appeared first on Product Talk.

When you hear the term “rapid prototyping,” what’s the first thing that comes to mind?

For most of us who work with digital products, we tend to think of wireframes or mockups. We equate prototyping with a quick way to get user feedback on our designs.

But when we think of rapid prototyping and usability testing as one and the same, we tend to underestimate the power of prototyping.

We will explore the different uses for prototyping, including usability testing, but first, let’s start with a definition.

What is a Prototype?

It turns out this is a complex question.

Merriam-Webster defines a prototype as “a first, typical, or preliminary model of something, especially a machine, from which other forms are developed or copied.”

But I don’t love this definition. It doesn’t capture many of the things that we commonly associate with prototypes: tangibility, an intent to learn and iterate, an intent to test and evaluate an idea, the act of thinking by doing, and so much more.

Kathryn McElroy, the author of Prototyping for Designers, gets closer. She defines a prototype as “a manifestation of an idea into a format that communicates the ideas to others or is tested with users, with the intention to improve that idea over time.”

I like that she captures the intent to improve the idea over time (that, to me, is at the heart of prototyping), but she's still missing a critical element of the definition—the ability to simulate the experience a product or service would create if it were to exist.

Todd Zaki Warfel, the author of Prototyping: A Practitioner’s Guide, argues, “A prototype, regardless of its fidelity, functionality, or how it is made, captures the intent of a design and simulates multiple states of that design.”

I like that Warfel captures the simulation element, but I disagree with his requirement that a prototype represent multiple states. He’s trying to distinguish a software prototype from a software mockup, because in his world these are two different things.

I disagree. During the early days of Palm, the founder Jeff Hawkins carried around a wooden block in his pocket to test the ideal size for the initial PalmPilot. As they worked through the design constraints, engineers would inevitably argue for more room. But because Jeff had been carrying around a wooden block, he knew exactly what was too big and what was just right.

This is a great example of a prototype that was designed to answer a specific question (i.e. how big should a PalmPilot be?) and simulated the experience of carrying a PalmPilot in your pocket long before it existed. Note that this prototype didn’t have multiple states as Warfel requires.

(I originally read about Jeff’s wooden block in Chapter 3 of Bill Moggridge’s Designing Interactions. You can also read about it here.)

Let’s see if we can combine the best of each of these definitions and get to the heart of what a prototype is.

A prototype…

  • simulates an experience,
  • with the intent to answer a specific question,
  • so that the creator can iterate and improve the experience.

With that out of the way, let’s move on to why prototyping can be used for far more than just usability testing. We can use prototypes to think, to answer questions, to communicate, and to inform decisions.

1. Prototyping Helps You Think
  • “Doing is the best kind of thinking.” – Tom Chi

I love this quote by Tom Chi because it’s easy to fall into the trap of trying to think our way to a solution. But in most instances, we are better off prototyping our way to a solution.

Let’s turn to the research for why. Bryan Lawson and Kees Dorst in their book, Design Expertise, develop a five-step model to explain how designers work. We are going to focus on two of their five steps—moving and evaluating.

Because most design problems are complex, we can’t always see the consequences of our design moves ahead of time. Lawson and Dorst argue that as designers work they are constantly moving back and forth between the moving and evaluating steps. We make a design move, which allows us to see the consequences of that move, and that in turn helps us make our next design move.

We’ve discussed the idea of visualizing our ideas before. If you remember my article, Why Drawing Maps Sharpens Your Thinking, you already know that externalizing your ideas takes the burden off of your working memory, allowing you to spend more cognitive energy evaluating your ideas.

But to be clear, this isn’t as simple as drawing a solution and then evaluating it. It’s more nuanced than that. The back and forth movement between moving and evaluating happens dozens if not hundreds of times while working through a single solution.

You’ve probably experienced this yourself. Have you ever sat in a meeting and discussed an idea that sounded great while you were discussing it, but as soon as you got back to your whiteboard or computer to draw it up, you realized it just wasn’t going to work?

Odds are, you didn’t have to throw out the whole idea. But as you sketched through the idea, it morphed and evolved into something that could work. This is the back and forth movement of moving and evaluating.

And this is how externalizing your thinking through drawings, models, and prototypes helps us think. The act of creating helps us both generate more ideas and better refine the ideas we generate.

2. Prototyping Helps You Answer Questions
  • “Prototypes show you what you know. More importantly, they show you what you don’t know.” – Tom Wujec

As we explore potential solutions, we encounter a series of questions. Good product teams use prototypes to get fast answers to those questions.

After generating dozens of ideas, we might ask, “Which of these concepts look most promising?” At this point, we might sketch to see how each idea might work in practice. Many teams use design studio exercises to quickly answer this question.

If you aren’t familiar with the concept of a design studio, it’s the practice of presenting and critiquing a series of ideas as a means of refining and selecting the best ideas.

As the concepts evolve toward potential solutions, we encounter many more questions. We need to dig deeper than the subjective question, "Is this idea good?" to questions about:

  • Desirability: Does anybody need it? For what purpose? How important is that need? Is our solution 10x better than other solutions?
  • Usability: Is it the right size? Is the text legible? Is there enough contrast in the colors? Can people find their way around? Do they understand the form controls? And so on.
  • Feasibility: Is it technically possible? Can we make it fast enough? Is it secure? Is it compliant? Can we get buy-in from business stakeholders?
  • Viability: Will people pay for it? How much? How much will it cost to produce it? To maintain it? What are our base unit economics? How does that scale over time? How will we acquire users / customers?

Prototyping can help us answer all of these questions.

We are accustomed to rapid prototyping to answer usability questions. This is by far the most common practice for prototyping.

But we can also prototype to test desirability. The car industry uses concept cars to assess the desirability of new models. Lean startups use landing page tests to assess the desirability of their services before they commit to building them.

We can prototype to test feasibility. We can share concept prototypes with business stakeholders to get their feedback early and often. We can patch together third-party services to simulate our expected outcome to see if we run into any compliance issues. We can run machine learning experiments to see if our learning algorithm will be good enough.

We can prototype to test viability. We can pre-sell our product or service. We can produce small batch prototypes to understand the inherent costs to build. We can simulate what we plan to build with a third-party service so that we can start learning right away what it costs to acquire customers.

Thanks to the rise in popularity of The Lean Startup and design thinking, we have countless examples of prototyping to answer specific questions.

The key here is to remember: your goal is to simulate a part of the experience in order to answer a specific question. Too many teams try to simulate the entire experience, overwhelming both their testers and themselves, and get far less value from their prototypes.

3. Prototyping Helps You Communicate
  • “Words leave too much room for interpretation.” – Todd Zaki Warfel

In the mid-2000s, with the dawn of Web 2.0 and the rise in popularity of AJAX, it got harder and harder to write good product requirements documents. Interactions were becoming far too complex to describe, and many teams started to adopt prototypes in place of requirements documents.

This was a huge step in the right direction. We can’t possibly describe everything that software needs to do.

Jeff Patton captures this perfectly in his book, User Story Mapping, with the following image:

He argues that we need to externalize our thinking to develop a shared understanding across our team. He’s right and prototypes can help us do that.

But this goes beyond communicating requirements. How do we describe a new product or service that has never existed before to a potential user or customer? How do we advocate for funding and resources for that same never-seen-before product or service?

In both instances (and many more), prototypes can help us communicate. While most of us know that it's better to show, not tell, prototypes encourage us to simulate. Simulating an experience is even more powerful than showing or telling someone about the experience.

We can see this evolution in the way startups share what they are working on. Most have evolved from a website that explains what they will build (telling), to a vision video that shows how a user or customer might benefit from the upcoming product or service (showing), to Wizard of Oz and concierge techniques that let customers experience the product before it's built (simulating). And we see the same strategy in play with SaaS companies that offer free trials or freemium plans.

4. Prototyping Helps You Make Better Decisions
  • “You innovate by building things.” – Tom Wujec

We talk about decision making a lot here at Product Talk. That’s because at the end of the day every product team has to decide what to build next over and over again. It’s these decisions that impact whether or not we drive outcomes that create value for our business by creating value for our customers.

If you’ve been a long-time reader, you already know that we want to avoid “whether or not” decisions and instead create “compare and contrast” decisions. Prototyping can help us do that.

Before we get to how, let’s revisit a key cognitive bias—the escalation of commitment. This is the phenomenon where the more we invest in an idea, the more committed we become to that idea.

The challenge escalation of commitment presents with prototyping is that if we only prototype one idea at a time, we tend to become fixated on that idea. It’s harder for us to accept critical feedback and we tend to barrel ahead as if we didn’t prototype at all.

Instead, we want to apply the concept of "compare and contrast" decisions to prototyping and prototype multiple ideas at once. The design literature refers to this as parallel prototyping, and the research shows that teams that prototype in parallel outperform teams that prototype sequentially. And it's because they don't fall into the fixation trap.

If you prototype and test multiple ideas, you aren't stuck answering the question, "Is my idea good or not?" where, if it isn't, you are left without anything to build. Instead, you can ask a much better question, "Which of these ideas looks most promising?" and you'll always have something to build next.

And because you relied on prototypes to answer questions, you'll have much more reliable data to inform your decision than if you stuck with the more subjective question, "Which of these ideas do I like best?"

Bringing It Home

If you've only used prototyping to answer usability questions, I'd encourage you to take a minute and consider how you might use it to answer desirability questions, feasibility questions, and viability questions.

Similarly, if you’ve never used prototypes to improve your thinking, to communicate your ideas, or to help you decide what to build next, I encourage you to think about how you might include prototyping in more places.

If you’d like help doing that or simply want to learn more about prototyping, I’m offering a four-week, practice-based course on prototyping running from February 9th to March 9th.

In this course, we’ll cover how to:

  • Use sketching and design studios to rapidly generate and evaluate many ideas.
  • Prototype to answer specific questions, helping you evaluate your most promising ideas.
  • Test your prototypes with users, customers, and key stakeholders in a way that generates reliable and actionable feedback.
  • Iterate based on feedback, turning your mediocre ideas into great ideas.

Each week, students will get:

  • 15–30 minutes of required instructional material (e.g. a mix of articles and videos)
  • A prototyping activity designed to reinforce the learning, time-boxed to 30–45 minutes.
  • A one-hour practice session with 3–5 other students that allows students to experience a design studio, get feedback on their prototyping strategies, and learn from each other’s experiences.
  • Additional resources for students who want to invest more into each topic.

This course is designed to fit within your busy schedule. You can expect to spend 2 hours total each week on the course (but you may want to invest a little more time exploring the additional resources as your schedule allows). In weeks 2 and 3, you may also want to invest a third hour when we get into building prototypes, but this is optional.

If you are interested, you can learn more here.

The post 4 Powerful Ways to Use Rapid Prototyping to Drive Product Success was written by Teresa Torres and appeared first on Product Talk.

Hi there, Product Talk readers! I’m Melissa Suzuno. I’ve been helping Teresa with Product Talk content behind the scenes for the past few years. You may have noticed my handiwork if you’re a fan of the Oxford comma or em dash.

If you’ve been reading Product Talk for a few months, you probably remember that this past October, Teresa announced a brand-new Continuous Interviewing course in her blog post, “This Keystone Habit Will Fuel the Rest of Your Continuous Discovery Habits.”

But if you’re new to Product Talk or that blog post has faded away in the vast sea of material you’ve read over the past few months, don’t sweat it. I’ll give you a quick recap.

Teresa described a few concepts that she took into consideration when designing her Continuous Interviewing course. Here are the highlights:

  • Deliberate practice: In his book Peak: Secrets from the New Science of Expertise, Anders Ericsson describes how deliberate practice is what separates experts from novices. Deliberate practice requires focus, feedback, and pushing yourself out of your comfort zone.
  • Keystone habits: This concept comes from The Power of Habit: Why We Do What We Do in Life and Business by Charles Duhigg. A keystone habit drives the adoption of other habits. For example, many people find that once they begin exercising regularly (a keystone habit), it’s much easier to pick up other healthy habits like drinking water or eating more fruits and vegetables.
  • Continuous interviewing is a keystone habit for product teams. Based on her experience as a product discovery coach, Teresa has observed that continuous interviewing is the habit that sets the tone for other continuous discovery best practices. Once product teams adopt continuous interviewing, they find it much easier to follow other best practices like rapid prototyping and frequent experimenting. They do a better job of connecting what they are learning from their research activities with the product decisions they are making.

Teresa also outlined which skills students could expect to focus on in the Continuous Interviewing course, writing:

“The course is designed to help you build a continuous habit of interviewing, and more importantly, get some deliberate practice with the skill of interviewing so that you get more from each interview.

You’ll learn how to ask the right interview questions so that you get actionable insight from your prospects and customers.

You’ll learn how to improve your active listening skills and take better notes during the interviews.

You’ll learn how to synthesize what you are learning from each interview using interview snapshots, making it easy for you to act on what you are learning.

You’ll learn how to automate the recruiting process, removing the biggest hurdle to continuous interviewing.

And, most importantly, you’ll get a minimum of 4 hours of deliberate practice (more if you want it) to hone your skill.”

Now that the first cohort has completed the course, we wanted to report back on how they found the experience.

We gathered feedback from everyone who participated in the course: 50 product managers, product designers, and UX researchers (plus a few others). Here's what they had to say about the experience.

On the overall course structure and flow:

“It was spread out well enough to retain and add to the knowledge and practice it all so that I think I probably actually learned it. Not too challenging or too much commitment from a homework perspective, which was also positive. I was excited to be involved the whole time.”
– David Straight, Designer at McGraw-Hill Education

“Extremely satisfied with the quality and time length of the course. Optional reading was great idea. It fits in neatly with work and is achievable.” – Simon le Maistre, Head of Product at Reallyenglish

On the benefit of deliberate practice:

“It has been really useful for me to have the guinea pig group to practice with and learn from. I feel much more confident to try this with real customers than if I’d just read some blog posts. As you said in the initial pitch—deliberate practice is how you learn!” – Tam Finlay, Head of Product at Farewill

“The lesson was helpful in understanding that interviewing takes practice and that mistakes still get made—but product teams can still learn valuable things from those interviews.” – Cyndi Hosch, UX Researcher at Choice Hotels

On applying what they learned:

“Was good, actionable content. I was able to utilize these methods immediately in my real life.” – Rob Adams, Product Manager at Greenlight Guru

“I like the concept of the snapshot, as I think it could really benefit my team moving forward. It helps to focus on one memorable quote, the main quick facts about the person, and the most important insights and opportunities that you took away from the conversation (instead of pages and pages of notes or long report-outs). This will help with the continuous discovery process that we’re starting for our product team, as we can easily adopt the snapshot into our own process and even print them out to hang on the walls in our office. Over time, it will help to see the trends with insights and opportunities.” – Kate Thompson, Manager, UX Research at Choice Hotels

“This week really brought it all together for me and I got to use it in my interviews at work.” – Kaelin Burns, Senior Product Manager at ZEFR

“Teresa’s course has been a wonderful reminder of how critical the contextual interview is to your discovery practice. The small digestible education components of articles and videos paired with the immersion of a weekly remote classroom with industry peers made for an awesome shared understanding experience.” – Larry Thacker, Product Designer at CarMax

Now it’s your chance!

Sound like something you’d like to be a part of? You’re in luck! Teresa is offering the Continuous Interviewing course again from January 5–February 2. Registration closes on December 15, so be sure to act soon. You can read more about the specifics of the course and sign up here.

The post The Results Are In! Feedback from the First Continuous Interviewing Cohort was written by Melissa Suzuno and appeared first on Product Talk.

We are starting to pull together our 2018 conference list.

This list is by no means exhaustive. If you know of a product conference that is not on the list and you think it should be, please feel free to recommend it in the comments or send an email to conferences@producttalk.org.

I haven’t decided what events I’ll be attending in 2018 yet, but as soon as I do, I’ll update this post. We’ll also keep updating this list as we hear about more events.

Are you looking for the 2017 Product Conference list? Find it here.

Date | Name | Location
January 4-5 | Front UX Product Management Bootcamp | Park City, UT
February 3-8 | IXDA Interaction Week 2018 | Lyon, France
February 12-13 | Product Management & Innovation 2018 | San Francisco, CA
February 20 | The UX Conference 2018 | London, UK
February 27-March 1 | ConveyUX | Seattle, WA
March 5-7 | UX Immersions: Interactions | Newport Beach, CA
March 10 | Product Camp Portland | Portland, OR
March 11-15 | ACM SIGIR CHIIR | New Brunswick, NJ
April 10-11 | Habit Summit | San Francisco, CA
April 19-20 | MTP Engage | Hamburg, Germany
April 21 | ProductCampRTP | Cary, NC
April 25-26 | GOTO Chicago 2018 | Chicago, IL
April 26-27 | DDD eXchange | London, UK
May 8-11 | Craft | Budapest, Hungary
May 8-11 | Sirius Decisions Summit | Las Vegas, NV
May 17-18 | Agile & Beyond | Ypsilanti, MI
May 21-25 | Techstars Startup Week Detroit | Detroit, MI
May 22-25 | User Experience Lisbon | Lisbon, Portugal
May 23-25 | UX London | London, UK
May 31-June 1 | Front UX & Product Management Case Study | Salt Lake City, UT
June 3-8 | Agile Dev West | Las Vegas, NV
June 13-15 | UX Scotland | Edinburgh, UK
June 18-19 | Agile Australia 2018 | Melbourne, Australia
July 16-17 | Mind the Product San Francisco | San Francisco, CA
August 6-10 | Agile 2018 | San Diego, CA
August 21-24 | UX Week 2018 | San Francisco, CA
September 4-6 | Atlassian Summit Europe 2018 | Barcelona, Spain
September 25-26 | Product Innovation Summit | Boston, MA
October 1-3 | Business of Software Conference of USA | Boston, MA
October 1-3 | INDUSTRY | Cleveland, OH
October 15-17 | Agilia Budapest | Budapest, Hungary
October 18-19 | Mind the Product London | London, UK

The post 2018 Product Conferences was written by Teresa Torres and appeared first on Product Talk.

I love working as a discovery coach.

I work with dozens of teams at several companies spanning many industries. I coach each team—a product manager, a design lead, and a tech lead—for three months, working with them virtually week over week.

During that time, we focus on developing their research skills (e.g. conducting customer interviews, running sound product experiments, building rapid prototypes) and critical thinking skills to connect their research activities to their product decisions. The goal is to find the quickest path to driving measurable product outcomes.

It’s a lot of fun. I enjoy the work and the teams that I work with get a lot out of it.

Coaching, however, is a high-touch solution. While that’s great for learning outcomes, it means that it’s only available to the small percentage of companies who are willing to make this investment.

I got into coaching because I wanted to create a service that I wish I had had access to early in my career. But I never worked at a company that would have invested in coaching.

So while I love coaching (and will continue to do it), I’m starting to explore options for individual product team members who want to invest in their skills, but might not have the buy-in or support from the rest of their team or company.

I hear from product managers, designers, and engineers every day who want to invest in their own product discovery skills, but work at companies that aren’t working this way yet.

And until today, I didn’t have an answer for you.

The Value of Deliberate Practice

If you are like most of the product people that I meet, you are hungry to master your craft. You’ve read the books, you keep up on the latest blogs, and you attend the big conferences.

And, most importantly, you aren’t afraid to start doing something that you’ve never done before.

I love it when I meet a product manager who read about customer development interviews and just started doing them. Or a designer who learned about landing page tests and added them to their experiment toolbox.

We all know that the more you do an activity, the better you get.

But this is only true to a certain point.

Do you remember Anders Ericsson? He studied the differences between experts and novices and summarized what he learned in the book Peak: Secrets from the New Science of Expertise.

Ericsson argues that, when it comes to developing a skill, simply repeating the skill over and over only improves your performance until the skill becomes automatic.

After that, if you want to improve, you need to focus on deliberate practice.

Deliberate practice has the following key attributes:

  • It involves breaking a skill into its component steps.
  • It requires focus—your full attention.
  • It requires feedback.
  • It requires getting outside your comfort zone.

These elements are what allow your brain to slow down and improve the skill. They override the shift to automatic behavior and allow you to keep growing your skill.

The teams that I coach get this week over week. So as I started to think about how to help individual product team members who wanted to invest in their skills, I knew that deliberate practice needed to be a part of it.

Improving product discovery skills requires deliberate practice. –

Your Job is a Marathon, Your Practice Needs to be a Sprint

If you haven’t guessed yet, I’m launching a new course.

I learned when I ran my Map the Challenge course last year that product people are busy.

Several of you expressed interest in that course, but were scared away by the time commitment of three to five hours per week for ten weeks. That’s like a college course.

It was too much. I get it.

So for my next iteration I’m cutting the time commitment way back. I’m designing my new courses to better fit into your busy schedule.

Now this is a challenge, because deliberate practice takes time. So here’s how it’s going to work.

  • There will be up to 30 minutes of reading or video watching each week.
  • Plus one hour of practice time with some of your classmates each week.
  • And rather than ten weeks, the course will run for four weeks.

If you can spare 90 minutes each week for four weeks, this course is for you.

If you want more, there will be plenty of opportunities to go deeper. Each week will include tips for how to get more practice time beyond the required hour, if you want it. Each of the readings will refer you to additional readings, if you want them.

You can scale your time commitment up or down based on what’s going on at work that week.

The Keystone Habit that Drives Continuous Discovery

I’ve noticed an interesting pattern amongst the teams that I work with. There’s a habit that, once teams develop it, helps the rest of their continuous discovery practices fall into place.

I can’t back this up with research yet, but I believe that there is a keystone habit that helps teams accelerate their discovery.

If you aren’t familiar with the concept of a keystone habit, it comes from Charles Duhigg’s book, The Power of Habit: Why We Do What We Do in Life and Business.

Duhigg argues, “Keystone habits start a process that, over time, transforms everything.”

They are habits that, once adopted, drive the adoption of other habits.

Keystone habits, once adopted, drive the adoption of other habits. –

For most people, exercise is a keystone habit. When we exercise regularly, we naturally tend to eat better, we have more energy, and so we are more productive at work.

For many, making your bed each morning is a keystone habit. It sets the tone of rigor and discipline from the start of your day. This is why many military leaders advocate for this habit.

To be clear, it’s not that exercise makes you eat better or making your bed makes you more disciplined, but doing the former makes the latter easier. The keystone habit builds motivation for the subsequent habits.

I’ve noticed this exact pattern emerge amongst many of the teams that I coach.

When a product team develops a weekly habit of customer interviews, they don’t just get the benefit of interviewing more often, they also start rapid prototyping and experimenting more often. They do a better job of connecting what they are learning from their research activities with the product decisions they are making.

I believe continuous interviewing is a keystone habit for continuous discovery.

Continuous interviewing is a keystone habit for continuous discovery. –

Introducing a Short, Practice-Oriented, Continuous Interviewing Course

I’m offering a short, practice-oriented course on continuous interviewing.

The course is designed to help you build a continuous habit of interviewing, and more importantly, get some deliberate practice with the skill of interviewing so that you get more from each interview.

You’ll learn how to ask the right interview questions so that you get actionable insight from your prospects and customers.

You’ll learn how to improve your active listening skills and take better notes during the interviews.

You’ll learn how to synthesize what you are learning from each interview using interview snapshots, making it easy for you to act on what you are learning.

You’ll learn how to automate the recruiting process, removing the biggest hurdle to continuous interviewing.

And, most importantly, you’ll get a minimum of 4 hours of deliberate practice (more if you want it) to hone your skill.

The course will start the week of October 23rd and run through the week of November 13th.

The reading and video watching (30 minutes max per week) can happen whenever you have time. The practice sessions will be with 3–5 of your classmates and can be scheduled whenever works best for your group. I’ll do my best to group students by time zone proximity.

This is one of the easiest ways to improve your interviewing skills and start building the keystone habit that will help you accelerate your discovery skills.

If you are interested, you can enroll here. Enrollment closes Friday, October 20th.

The post This Keystone Habit Will Fuel the Rest of Your Continuous Discovery Habits was written by Teresa Torres and appeared first on Product Talk.


This past week I was in London speaking at Mind the Product. As usual, the Mind the Product team hosted a phenomenal event. The following is the script of my talk with slides. When Mind the Product releases the video, I’ll add it to this post.

Teresa Torres presented ‘Critical Thinking for Product Teams’ at Mind the Product London on September 8, 2017.

Hi everyone. I’m excited to be here. I’ve been developing a visual critical thinking tool that I want to share with you today. It’s a bit hard to explain outside the context of a specific product challenge, so I’m going to start by sharing a story that I think will resonate with many of you, and then we’ll get into the tool itself.

Back in 2008, I was a product manager at a startup that operated online communities for university alumni associations. Like many product teams, we had some tough challenges to address.

Whenever we launched a new community, alumni would rush to check out their new site. But with time, that traffic dwindled to a trickle.

While alumni associations (our customers) loved our product, alumni (our end-users) did not. When we launched a new community, we’d get a rush of activity as alumni came to check out their new site. But over time, engagement would dwindle to a predictable trickle.

Our user research told us that alumni loved sending messages to their community: They asked for advice on everything from how to find their next job to what neighborhood to live in in their new city. It was exactly the type of engagement we were hoping for.

Our system allowed people to spam their entire alumni community.

There was only one problem. Nobody wanted to receive these messages. We had alumni in Dallas receiving emails about items sold in Chicago, houses for rent in Boston, and internships in San Francisco.

We knew if we wanted to increase engagement, we needed to reduce the number of unwanted messages people were getting.

We were making it easy for people to spam their entire alumni network. If we wanted to increase alumni engagement, we needed to reduce the number of unwanted messages in our communities.

Now if you are like me, your brain is already starting to think about how to solve this problem. But when I turned to my team and said, “Let’s brainstorm,” I got a surprising response. Seth, one of our engineers, piped in with, “Let’s integrate Google Maps!” Seth wanted to use the Google Maps API to embed a map that showed where alumni lived around the world.

Seth’s idea seemed irrelevant to the problem we were trying to solve.

I was shocked. This idea came out of nowhere. I was trying to figure out how Google Maps was going to help us address the spam problem. So I asked Seth. And he responded, “Oh, it won’t, but it will drive engagement because it’s cool.” I looked to the rest of the team for help. And sadly (from my point of view), they agreed with Seth. Maps would be cool.

At the time, I didn’t have the words to express my frustration, but intuitively, I knew that building cool stuff wasn’t good enough. Knowing where people lived didn’t feel like a big enough need. And adding a Google map felt like a gimmick.

Now this story isn’t about “I’m right and Seth is wrong.” As we’ll see in a minute, it’s more complex than that. It’s a story about me as a product manager wanting to include my team in deciding what to build, but not knowing how to do so in a productive way.

Today, as a product discovery coach, I see this play out on team after team. We don’t know how to effectively go from a stated outcome like increase engagement to executing on solutions that will drive that outcome.

So I started to deconstruct the problem and here’s what I found.

We Fall in Love with Our Ideas

When we fall in love with our ideas, we don’t pause and reflect. We don’t ask, is this idea any good?

It’s easy for us to generate an idea. We hear about a need, we immediately think of a solution. It’s almost automatic.

And because it feels good to close that loop, we tend to fall in love with that idea.

And when we fall in love with our idea, we don’t think to examine it. We don’t pause and reflect. We don’t ask, “Is this idea any good?”

This is what happened to Seth. He learned about the Google Maps API and got excited. He wanted to try it out. He shared his idea with the rest of our team and they quickly fell in love with the idea, too.

We Don’t Consider Enough Ideas

When we generate more ideas, we generate better ideas.

When we fall in love with our ideas, we don’t consider enough ideas.

My team was so enamored with the Google Maps idea, they wanted to dive in and start creating. They wanted to do something that would drive engagement right now.

And don’t get me wrong, the Google Maps idea may be a good idea. But research on brainstorming shows that when we generate more ideas, we generate better ideas.

When we generate more ideas, we generate better ideas. –

And more importantly, when we consider other ideas, we set ourselves up to make a “compare and contrast” decision rather than a “whether or not” decision.

It’s hard to answer, “Is this idea good or not?” because it treats good as an absolute trait.

A “whether or not” decision is a decision where we ask, “Is this idea good (or not)?” This is a hard question to answer because it treats “good” as an absolute trait.

Instead, we want to ask a “compare and contrast” question, “Which of these ideas looks best?” because it treats good as the relative trait that it is.

Instead, we want to ask a “compare and contrast” question: “Which of these ideas looks best?” This is easier to answer because it treats good as the relative trait that it is.

Is Usain Bolt fast? On the left, it’s hard to tell. On the right, absolutely!

Imagine Usain Bolt running around a track on his own. Is he fast? It’s hard to say. Now imagine him running around a track with other runners. Is he fast? Absolutely. A “compare and contrast” decision makes it easier to evaluate a relative trait.

Ask “compare and contrast” questions, not “whether or not” questions. –

Now for those of you who are thinking you already consider a lot of ideas, you probably do. Many of us have way too many ideas. I’ll come back to this problem in a few minutes.

But first, let’s return to my team’s challenges. Not only did we fall in love with our first idea and therefore didn’t consider enough ideas…

We Don’t Align Around a Target Opportunity

Seth’s Google Maps idea drove me nuts. Not because I thought it was a bad idea, but because I thought it was irrelevant. It didn’t solve the problem I wanted to solve.

… we also didn’t align around a target opportunity (or problem that we were trying to solve).

Seth’s Google Maps idea drove me nuts. Not because I thought it was a bad idea, but because I thought it was irrelevant. It didn’t solve the problem I wanted to solve.

But I didn’t take the time to make sure that my team was aligned around the problem we were solving before we jumped into idea generation. As a result, Seth was thinking about our engagement goal, but he wasn’t thinking about reducing spam, the problem that I was focused on.

Even when teams do align around an opportunity…

We Rarely Consider Enough Opportunities

Both Seth and I came into our brainstorming session fixated on one opportunity.

… we rarely consider enough opportunities.

I walked into the brainstorming session assuming that reducing spam was the right opportunity. Seth walked into the session wanting to help people connect with alumni who lived near them. Both of us were only considering one opportunity.

We don’t want to ask, “Is this opportunity worth pursuing?” We want to ask, “Which of these opportunities looks best?”

Just like we want to avoid “whether or not” questions with ideas, the same is true with opportunities. We don’t want to ask, “Is this opportunity worth pursuing?” We want to ask, “Which of these opportunities looks best?” And that requires that we have a set of opportunities to choose from. If we don’t do this, we run the risk of solving unimportant problems.

What we should have done was taken a step back and asked, “What are all the opportunities that might drive alumni engagement?”

Product teams rarely consider enough opportunities before jumping into solutions. –

So how do we prevent these mistakes?

Visualize Your Thinking with an Opportunity Solution Tree

Anders Ericsson summarizes the differences between experts and novices in the book Peak.

I want to introduce you to Anders Ericsson. He wrote the book Peak, which is a summary of his life’s work trying to understand the differences between novices and experts. He argues that experts use more sophisticated mental representations than novices.

He defines a mental representation as follows:

Ericsson argues that experts have more sophisticated mental representations than novices.

“… representations are preexisting patterns of information—facts, images, rules, relationships, and so on—that are held in long-term memory and that can be used to respond quickly and effectively in certain types of situations.”

The key benefit of more sophisticated mental representations is that they help us understand, interpret, organize, and analyze information.

And he argues, “The key benefit of mental representations lies in how they help us deal with information: understanding and interpreting it, holding it in memory, organizing it, analyzing it, and making decisions with it.”

That’s great. Isn’t this exactly what we need? Something that allows us to understand, interpret, organize, and analyze all the information we collect so that we can make better product decisions with it?

Looking back, here’s how I would diagnose my team’s challenges. I came to the brainstorming session with a depth of knowledge about our users. I had just completed an extensive round of user research. Seth came to the brainstorming session with a depth of knowledge about new technology. He had just read about the Google Maps API.

We each came to our brainstorming session with different patterns of information. And we were each relying on our own mental representations to make fast decisions.

The only problem is product teams need to make fast decisions from a shared mental representation of their combined knowledge.

Product teams need to make decisions based on their combined knowledge. –

The opportunity solution tree was my answer to “How might we externalize our own mental representations and align around a shared representation across our teams?”

This challenge is what led to what I call the opportunity solution tree.
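To make this less abstract, here’s a rough sketch of the shape such a tree takes for the alumni story above, expressed as a tiny bit of Python: the desired outcome sits at the root, the opportunities we might pursue sit underneath it, and candidate solutions hang off each opportunity. The labels are illustrative, not an excerpt from the talk, and the spam branch’s solutions are intentionally left open since we hadn’t generated them yet.

```python
# A rough, illustrative sketch of the tree's shape for the alumni example.
# The labels are placeholders, not a prescribed template.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)


tree = Node(
    "Outcome: increase alumni engagement",
    [
        # Solutions for this branch still need to be brainstormed and compared.
        Node("Opportunity: reduce the unwanted messages alumni receive"),
        Node(
            "Opportunity: help alumni connect with people who live nearby",
            [Node("Solution: Google Maps view of where alumni live")],
        ),
    ],
)


def print_tree(node: Node, depth: int = 0) -> None:
    """Print the tree with indentation so the team can see and compare branches."""
    print("  " * depth + node.label)
    for child in node.children:
        print_tree(child, depth + 1)


print_tree(tree)
```

Even this crude version makes the earlier disagreement visible: Seth’s map idea and my spam concern live on different branches under the same outcome.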

Start with a Clear Desired Outcome

I’ve got a pet peeve to share with you.

If you’ve been following along with the growth of the Lean Startup and other experimental methods, you’ve probably come across this hypothesis format:

  • We believe [this capability]
  • Will result in [this outcome]
  • We will have confidence to proceed when [we see these measurable signals]

If you aren’t familiar with this format, you can learn more about it here.

While this format is fast and easy to use, it isn’t enough to ensure that your experiment designs are sound. As a result, I often cringe when I hear that teams want to use it.

After talking with Barry O’Reilly, the creator of this format, I realized that I often conflate a hypothesis format with experiment design. That’s a fair criticism, so let’s get clear on the difference.

Google tells me that a hypothesis is defined as follows:

a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.

Product teams that adopt an experimental mindset start with hypotheses rather than assuming their beliefs are facts.

Experiment design, on the other hand, is the plan that a product team puts in place to test a specific hypothesis.

I like that the “We believe…” hypothesis format is simple enough that it encourages teams to commit their beliefs to paper and encourages them to treat their beliefs as suppositions rather than as assumed facts.

My concerns are that the format encourages teams to test the wrong things and it doesn’t require that teams get specific enough to lead to sound experiment design.

Test Specific Assumptions, Not Ideas

The “We believe…” format does encourage teams to think about outcomes and particularly how to measure them. This is good. Too many teams still think producing features is adequate.

But I don’t like that this format starts with a statement about a capability. This keeps us fixated on our ideas, whereas we are better off identifying our key assumptions.

When we test an idea, we get stuck asking, “Will this feature work or not?” The best way to answer that question is to build it and test it. However, this requires that we spend the time to build the feature before we learn whether or not it will work.

Additionally, this is a “whether or not” question. Chip and Dan Heath remind us in Decisive that “whether or not” questions lead to too narrow a framing. When we consider a series of “whether or not” questions—should we build feature A, should we build feature B, and so on—we forget to account for opportunity cost.

Instead, we should frame our questions as “compare and contrast” decisions: “Which of these ideas looks most promising?” We should design our experiments to answer this broader question. The best way to do that is to test the assumptions that need to be true for each idea to work.

Because ideas often share assumptions, this allows us to experiment quickly, ruling out sets of ideas when we find faulty assumptions. Additionally, as we build support for key assumptions, we can use those assumptions as building blocks to generate new ideas.

Assumption testing is a faster path to success than idea testing. –

As soon as we shift our focus to testing assumptions, the “We believe…” format falls apart. It’s rare that an outcome is dependent upon a single assumption, so the second and third parts of the hypothesis don’t hold up.

A more accurate format might be:

  • We believe that [Assumption A] and [Assumption B]… and [Assumption Z] are true
  • Therefore we believe [this capability]
  • Will result in [this outcome]
  • We will have confidence to proceed when [we see a measurable signal]

But again, it’s not the idea you should be testing. You should be testing each of the assumptions that need to be true for your idea to work. So we need a hypothesis format that works for each assumption.

Test each of the assumptions that need to be true in order for your idea to work. –

Let’s look at an example. Imagine you are working at Facebook before they added the additional reaction options (e.g. love, haha, wow, sad, angry). I suspect Facebook was inundated with “dislike button” requests, as I heard this complaint often.

Imagine you started with this modified hypothesis:

  • We believe that
    • Assumption A: People either like or dislike a story.
    • Assumption B: People don’t want to click like on a story they dislike.
    • Assumption C: Some people who dislike a story would engage with the story if it was easier to do than having to write a comment.
  • Therefore we believe that adding a dislike button
  • Will result in more engagement on newsfeed stories
  • We will have confidence to proceed when we see a 10% increase in newsfeed clicks.

Now imagine you do what most teams do and you test your new capability. You add a dislike button and you see a 5% increase in newsfeed clicks.

You didn’t see the engagement you expected, but you aren’t sure why. Is one of your assumptions false? Are they all true and they just didn’t have the effect size you expected? You have to do more research to answer these questions.

Now imagine you tested each of your assumptions individually. To test Assumption A, you could have a Facebook user review the stories in their newsfeed and share out loud their emotional reaction to each story. You’d uncover pretty quickly that Assumption A is not true. People have many different emotional responses to newsfeed stories.

Now I’m not saying that you wouldn’t have figured this out by running your capability test. You could easily run the same think-aloud study as we did in the assumption test after you built the dislike button. However, you will have learned what won’t work after you have already built the wrong capability.

The advantage of testing the individual assumptions is that you avoid building the wrong capability in the first place.

I don’t like that the “We believe…” format encourages us to test capabilities and not assumptions. However, this isn’t my only concern with the “We believe…” format.

Align Around Your Experiment Design Before You Run Your Experiment

Have you ever run an experiment only to have key stakeholders or other team members argue with the results? I see it all the time.

You run an A/B test. Your variable loses to your control and the designer argues you tested with the wrong audience. Or an engineer argues that it will perform better once it’s optimized. Or the marketing team argues that even though it lost, it’s better for the brand. And so on.

We’ve all been there.

Here’s the thing. If you are going to ignore your experiment results, you might as well skip the experiment in the first place.

If you are going to ignore your results, you might as well skip the experiment. –

Now that doesn’t mean that these objections to the experiment design aren’t valid. They may be. But if the objections arise after the experiment was run, it’s difficult to separate valid concerns from confirmation bias.

Remember, as we invest in our ideas, we fall in love with them (we all do this, it’s a human bias called the escalation of commitment). The more committed we are to an idea, the more likely we are to only see the data that confirms the idea rather than the disconfirming data, no matter how much disconfirming data there is. This is another human bias, commonly known as confirmation bias.

Our goal should be to surface these objections before we run the experiment. That way we can modify the design of our experiment to account for them.

Let’s return to our hypothesis:

  • We believe that adding a dislike button
  • Will result in more engagement on Facebook
  • We will have confidence to proceed when we see clicks on news feed stories increase by 10%

This looks like a good hypothesis. It includes a clear outcome (i.e. more engagement) and it defines a clear threshold for a specific metric (i.e. a 10% increase in clicks on news feed stories).

Remember, we tested this capability and we got a 5% increase in engagement, not a 10% increase. If we trust our experiment design, we need to conclude based on our data that our hypothesis is false as we didn’t clear our threshold.

But for most teams, this is not how they would interpret the results.

If you like the change, you’ll argue:

  • We didn’t run the test for long enough.
  • People didn’t have enough time to learn that the dislike button exists.
  • The design was bad. People couldn’t find the dislike button.
  • People hate all change for the first little while.
  • Maybe the users we tested with are all optimists, liking everything.
  • The news cycle that day was overly positive and skewed the results.
  • 5% is pretty good, we can optimize our way to 10%.
  • Any increase is good, let’s release it.

If you don’t like the change, you’ll argue:

  • It didn’t work. We didn’t get to 10%.
  • People don’t want to dislike things.
  • Facebook is a happy place where people want to like things.
  • Any increase is not good, because more options detract from the UI. We need to only add things that move the needle a lot.

And where do you end up? Exactly where you were before you ran the experiment—with a team who still can’t agree on what to do next.

Now this confusion isn’t necessarily because we didn’t frame our hypothesis well. It’s because we didn’t get alignment from our team on a sound experiment design before we ran the experiment. If everyone agreed that the experiment design was sound, we’d have no choice but to conclude based on our data that our hypothesis was false.

Now this isn’t a problem with the “We believe…” format per se, but I see many teams conflate a good hypothesis with a good experiment design, just like I did. They believe they have a sound hypothesis and therefore they conclude their experiment design is sound as well. However, this is not necessarily true.

Invest the Time to Get Your Experiment Design Right

To ensure that your team won’t argue with your experimental results, take the time to define and get alignment around the following elements:

  1. The Assumption: Be explicit about the assumption you are testing. Be specific.
  2. Experiment Design: Describe the experiment stimulus and/or the data you plan to collect.
  3. Participants: Define who is participating in the experiment. Be specific. All customers? Specific types of customers? And be sure to include how many.
  4. Key Metrics and Thresholds: Be explicit about how you will evaluate the assumption. Define which metric(s) you will use and any relevant thresholds. For example, “increase engagement” is not specific enough. How do you measure engagement? “Increase clicks on newsfeed stories by 10%” is more specific and sets a clear threshold. For some types of metrics, it is also important to define when you will take the measurement. For example, if you are measuring open rates on an email, you’ll need to define how long you’ll give people to open the email (e.g. 3 days after it was sent).
  5. Rationale: Have a clear rationale for why your experiment design/data collected will impact your metric. Don’t over-test. Be sure to have a strong theory for why you think this metric will move. Many teams get too enthusiastic about testing and test every variation without any rhyme or reason. Changing the button color from blue to red increased conversions, so now they want to try green, purple, yellow, and orange. However, doing this will increase your chance of false positives and lead to many wasted experiments.
  6. Decision Rules: Decide upfront how you will act on the data you collect. Before you run your experiment, define what you will do if your assumption is supported, if it’s refuted, or, in the case of a split test, if the results are flat. If the answer is the same in all three instances, skip your experiment and take action now. If you don’t know how you will use the data, you aren’t ready to run your experiment. (A sketch of these elements as a written-down record follows this list.)
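To make these elements concrete, here is a minimal sketch of what one assumption test from the dislike-button example might look like once it’s written down and agreed to before the experiment runs. The field names, numbers, and decision rules below are illustrative assumptions for the sake of the example, not the one-page template I mention below.

```python
# An illustrative sketch (not the one-page template) of a single assumption
# test, written down and agreed to before the experiment runs.
from dataclasses import dataclass


@dataclass
class ExperimentDesign:
    assumption: str                # 1. the specific assumption being tested
    design: str                    # 2. the stimulus and/or data to collect
    participants: str              # 3. who takes part, and how many
    metric: str                    # 4. how the assumption will be evaluated
    threshold: float               #    the pre-agreed bar for that metric
    measurement_window_days: int   #    when the measurement is taken
    rationale: str                 # 5. why this design should move this metric
    if_supported: str              # 6. decided upfront: what we do in each case
    if_refuted: str
    if_flat: str


dislike_test = ExperimentDesign(
    assumption="Some people who dislike a story would engage with it if doing so "
               "were easier than writing a comment.",
    design="Show a dislike button on newsfeed stories to the test group only.",
    participants="A random 1% of active users, split evenly into test and control.",
    metric="relative lift in clicks on newsfeed stories (test vs. control)",
    threshold=0.10,                # the 10% threshold from the hypothesis above
    measurement_window_days=14,
    rationale="A lighter-weight way to react should add clicks if the assumption holds.",
    if_supported="Roll the dislike button out more broadly.",
    if_refuted="Shelve the idea and revisit the other assumptions.",
    if_flat="Treat the assumption as unsupported; don't ship on a hunch.",
)


def decide(experiment: ExperimentDesign, observed_lift: float) -> str:
    """Apply the decision rule the team agreed on before the experiment ran."""
    if observed_lift >= experiment.threshold:
        return experiment.if_supported
    if observed_lift <= 0.0:
        return experiment.if_flat   # no lift (or a drop) in the split test
    return experiment.if_refuted    # some lift, but it didn't clear the agreed bar


# The 5% result from the story above doesn't clear the pre-agreed 10% bar,
# so the decision is already made: the hypothesis isn't supported.
print(decide(dislike_test, observed_lift=0.05))
```

The specific fields matter less than the fact that every argument from the two lists of objections above gets settled before the data comes in.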

This list is more complex than the “We believe…” format and I don’t expect it to spread like wildfire. However, if you want to get more out of your experiments (and you want to build more trust amongst your team in your experimental results), defining (and getting alignment around) these elements upfront will help.

I’ll be sharing a one-page experiment design template with my mailing list members. If you want to snag a copy, sign up here.

Subscribers receive:

  • A monthly article or video on product discovery/continuous innovation.
  • A monthly newsletter with book recommendations and worthy reads from around the web.

A special thanks to Barry O’Reilly who read an early draft of this blog post. His feedback led to significant revisions that made this article better. Barry’s a thoughtful product leader. Be sure to check out his blog.

The post How to Improve Your Experiment Design (And Build Trust in Your Product Experiments) was written by Teresa Torres and appeared first on Product Talk.


It’s easy to think you already do continuous product discovery.

Most of the teams that I work with come into coaching thinking that they don’t need help. They’ve read the industry books, they attend the popular product conferences, and they follow all the leading blogs. They’ve got this.

And they aren’t wrong.

Most product teams are starting to integrate discovery practices into their product development process. They interview customers, run usability tests, and conduct A/B tests. What more is there to learn?

The problem is most teams don’t do any of these activities often enough. Nor do they use these activities to inform their product decisions. Instead, they use them to validate the decisions they’ve already made.

Last May, I spoke at the Front conference in Salt Lake City. In my talk, I shared a clear definition of continuous product discovery that I hope will act as a benchmark to help teams evaluate their own product discovery practices. I also shared a case study telling the story of two product teams as they adopted my definition of continuous product discovery.

You can watch the video below. (Can’t see the video? Click here.)

Adopting Continuous Product Discovery Practices by Teresa Torres at Front Salt Lake City 2017 - YouTube

 Show Notes:
  • [1:03] Continuous Product Discovery: A Definition
  • [3:28] Arity: A Case Study of an Organization Adopting Continuous Product Discovery
  • [9:16]  Continuous Customer Interviews
  • [20:12] Continuous Rapid Prototyping
  • [26:55] Continuous Product Experiments
  • [34:35] The Value of Continuous Discovery
Full Transcript

[Edited for brevity and clarity.]

My talk at Front 2017

Today I want to share with you a story about a specific company that’s in the process of adopting continuous product discovery across the entire company. What I found working with companies that are trying to do this is that, as individual product managers, as individual designers, we can start to think about how we want to work, but if our organizations don’t support it, it can be hard.

The best way to define discovery is relative to delivery. Discovery is what we do to decide what to build, whereas delivery is what we do to build it.

For those of you that aren’t familiar with these terms, I’d like to start at the beginning. Continuous discovery is the easiest to describe in the context of delivery. I look at discovery as the set of activities that we do as product teams to decide what to build, whereas delivery is the set of activities that we do as product teams to actually build and ship our products.

I want to be clear, when I say product teams, I mean the product manager, the designers, and the software engineers. I don’t get caught up on who does what. We’re a team. We’re responsible for discovering value and delivering it. This talk is going to apply to the team.

Continuous discovery teams conduct their own research weekly, in pursuit of a desired outcome.

Continuous discovery is more specific. By continuous discovery I mean weekly touch points with customers by the team building the product, where they themselves conduct small research activities in pursuit of a desired outcome.

Each line of this definition is important to me. Many of us work in companies where a customer support team talks to customers, or a centralized user research team talks to customers. But our product managers and our designers and our software engineers learn about our customers through research reports or through personas, or through customer journey maps.
It’s important for those of us that are building the product to have regular interaction with our customers.

Continuous discovery requires weekly touch points with customers by the team building the product. –

We aren’t full-time researchers so we can’t do big box, big project research—we have to do it in small bite-sized activities. So it fits along with all the other stuff that we do in our jobs, like writing code and designing. We’re going to talk a lot about what those small research activities look like today.

And then finally, it is really important that we focus on outcomes. Our goal is not just to ship a product, but it’s to ship a product that has an impact on our customers in a way that creates value for our business.

We can’t just do open-ended research indefinitely; our goal should be to identify the next question that we need answered that’s going to help us get to our next product outcome.

Continuous discovery requires weekly research activities that drive the pursuit of a desired product outcome. –

This case study will share how one company is adopting continuous discovery across the enterprise.

I’m here to tell you a story about a specific company. Let’s get back to that tale. I’m going to start with some context.

With Arity, there’s something for everyone to relate to.

This is a story of a company called Arity, they’re based in downtown Chicago. Arity has 380 employees. They’re a fairly small company. But they’re actually a subsidiary of a much larger company. They are in the Allstate family. Allstate is a very large company. They have 40,000 plus employees at Allstate.

What this means is that this company, Arity, sometimes acts like a startup because they’re a small company that’s being protected to go and do their own thing. But they sometimes act like a big company because they grew up in the Allstate culture.

Allstate isn’t just a big company; they’re not a traditional tech company. Most people that work there don’t know how digital products work.

They’re also in a regulated industry, and if you’ve never worked at a bank or an insurance company or anywhere else that’s regulated, what this means is the company culture is set up to be safe. It’s set up to follow these regulations. This can lead to a lot of sign-offs, a lot of red tape—things move slowly.

This company builds both consumer and enterprise products.

I’ve been in your shoes sitting at a conference, listening to a speaker tell a great story. It’s easy to think, “That would never work in my company.” It’s really easy to say, “Yes, you guys can do that, but my place is different.”

What I like about Arity is that there’s something here for everybody, whether you’re consumer, whether you are enterprise, whether you are small, whether you’re big, whether you’re regulated, whether your company’s on board or not. There’s a lot to learn from this company’s journey, so let’s get into it.

Meet Frank and Gina

I want to introduce you to Frank and Gina. Frank and Gina are real people. They are two product managers that work at Arity. They represent two of the nine product teams that I’ve worked with at Arity over the last year. Let’s learn a little bit about their products and their journey.

Meet Frank’s team.

Frank’s team is working on a consumer app. Allstate has data about how you drive, where you go, your relationship with your car, and they looked at this and they said, “We think we can provide a compelling service to consumers to help them with all the challenges around owning a car.” Their mission is that broad. Find something that you can do that’s of service for consumers with their relationship to their vehicle. This team’s goal is to engage consumers through some sort of app or service.

Frank’s team, when I met with them for the first time, were excited about this idea of continuous discovery. They want to build something that customers care about.

There was just one problem. They had never talked to a customer before. All very smart people, all very capable, but they grew up, career-wise, in a company that didn’t have a strong history of discovery. So we were starting from square one.

I want to emphasize that because for those of you that don’t have a regular cadence of interacting with your customers, both of the teams that we’re going to talk about today were starting from scratch.

Frank’s team, in the early days, was a product manager, Frank, a second product manager, Luis, and a tech lead. They were supported by a centralized user research team, a centralized design team, and a centralized market insights team. All three of those teams sat in Allstate, not at Arity. So there was a little bit of this divide between the product team building the product and all the design and research resources they were supported by. This is going to be a part of our tale that we’ll come back to.

Meet Gina’s team.

Gina, on the other hand, was working on a business intelligence product. Think about it as a scoring algorithm. I can’t share specific details, but it’s a scoring algorithm for insurance companies to help them assess risk.

She is working on an enterprise B2B product. It’s not a product with a graphical user interface (GUI). Not an app, not a website, but really a data product. It’s a scoring function that’s helping these companies assess risk. Think about it like a credit score. A credit score helps you assess risk when you’re applying for a credit card, or buying a home.

Like Frank’s team, Gina’s team was also sold on this idea of continuous discovery. They had talked to customers, but only in a sales context. Their goal was to find their first referenceable customer for their scoring product.

Gina had a tech lead and a data analyst on her team, and because she was working in the insurance industry, she was supported both by the president of Arity, who has many industry connections, and by Allstate and everything they know about the insurance business.

Our goal was to get Frank and Gina operating as a continuous discovery team.

Remember, our goal is to get Frank and Gina to have weekly touch points with their customers so that they themselves can conduct small research activities, and learn very quickly in pursuit of their desired outcome.

Our goal was to get the teams to a regular cadence of these three activities.

The first thing is, we need to get them doing small research activities on a regular basis. So we started by looking at this set of activities. How can they do regular customer interviews, rapid prototyping, and product experiments? They were doing none of this when we started, so we started with customer interviews. How can we get them learning about their customers?

Continuous discovery teams have a regular cadence of customer interviews, rapid prototyping, and experimentation. –

Let’s look at how Frank’s team integrated customer interviews into their workflow.

If you remember, Frank’s team had never talked to a customer before. They were supported by a centralized user research team. So they did what most people would do and they reached out to that team and they asked for help. They said, “Hey, we’re working with Teresa. She wants us to talk to a customer every week. Help, how do we do that?” They ran into a couple of problems.

The research team had a project mindset rather than a continuous mindset when it came to customer interviews.

Their central research team wanted to schedule six interviews for three weeks out. Now, let’s think about this from their perspective. They’re a centralized research team, they support dozens of product teams across both Arity and Allstate. They’ve got a pipeline of projects they have to get through. So when a team comes to them and says, “I need to talk to customers,” they have to get in line.

They also work from this big project mindset. You can’t just talk to one customer; you’ve got to talk to 6–12 customers. That’s research.

I wanted Frank’s team to adopt a continuous mindset and engage with customers every week.

Well, this team got stuck because they came and told me this and I said, “This isn’t how it works. I want you to talk to one customer this week. I don’t care how you do it, just find somebody to talk to.”

Frank’s team was concerned. They responded, “We can talk to people, but are we allowed to? We don’t want to step on the user research team’s toes. We know they can help us, why can’t we just wait?” I said, “Because you don’t make product decisions every three weeks. You make product decisions every day. You need to move your product forward every day.”

If we want to co-create with our customers, we need to make sure that the teams building the product are interacting with customers on a regular basis.

The good news is I said, “Look, let your user research team schedule those six interviews three weeks out, in the meantime let’s just start talking to people. You’re working on a consumer product, get outside, talk to people. Let’s see what we can learn.”

“Research in the Wild” is a fast way to get quick answers to small research questions.

I introduced them to this idea that I call “research in the wild.” When we think about research, we think about formal hour-long discussions with long discussion guides where somebody has vetted every question. It’s research, big research.

I said, “Look, you’ve never talked to a customer. You’re trying to build a product for people about how they interact with their cars. Let’s just go talk to people and see what they want and see what they think.”

Now here’s the danger in..


In April, we posted this list of conferences. Since then, I’ve been asked dozens of times what my favorite product conference is.

There are a lot of great product events, but for my money, none are better than Mind the Product. It’s the only product conference where I feel confident that the content will push my thinking and that there will be a number of great parties. That’s a fun combination.

Here are my highlights from Mind the Product SF:

1. My workshop attendees were fantastic!

Students in the Continuous Discovery Habits workshop learning how to create rapid prototypes.

I had 25 people in my Continuous Product Discovery Habits workshop. They learned how to collect stories in interviews, do rapid prototyping (they built several in the workshop), and because one of the participants asked for it (!), we spent some time learning about my opportunity solution tree. The feedback was great and I left encouraged to highlight the opportunity solution tree in my upcoming Mind the Product London talk.

2. Aparna highlights that machine learning will allow computers to adapt to humans.

A simple insight that should drive innovation in human computer interfaces.

I love the way Aparna Chennapragada, Product Director at Google, framed one of the benefits of machine learning as computers being able to adapt to humans rather than humans having to adapt to computers. This is such a simple insight and it’s what many of us aim to do when we develop products, but I lit up when I saw her slide as I started to imagine a world where this was really true (not just “kind of sort of, okay not really” true). That’s awesome.

“With machine learning, computers will adapt to humans, rather than humans having to adapt to computers.” –

3. Nate encourages us to focus on knowledge transfer.

Nate encourages us to focus on knowledge transfer.

I love that Nate Walkingshaw, CXO at Pluralsight, deliberately focuses on knowledge transfer across his team. So many teams underestimate this. I believe deep knowledge about our customers is the best competitive advantage we have and it’s critical to do the work to share this knowledge across our teams.

“Customer knowledge is a competitive advantage. How are you sharing it across your teams?” –

4. Josh tells us how to detect if our users are really using our products.

Josh provides simple metrics for understanding the behavior of your core users.

Josh takes a systems view to activating core users.

Josh Elman, Partner at Greylock, excels at presenting simple metrics models that get at the truth. Be sure to watch this video when it comes out. In the meantime, scrutinize the two slides above.

“Are your users really using your product? Track core users to find out.” –

5. Janice shows why intrapreneurship is legitimately hard AND we can do better.

Janice advises us on how we as product leaders can encourage intrapreneurship in our organizations.

This was my favorite talk of the day. Janice Fraser, SVP at Bionic Solutions, talked about intrapreneurship in big companies, what works, what doesn’t, and most importantly what each of us can do to help move the organization forward regardless of where we sit in the organization. The slide above is her advice to leaders on how to manage teams in an uncertain world. I’ll be sending this video to all of my clients as soon as it is available.

“Reward learning, not certainty. Ask, don’t tell.” –

6. Janna argues your roadmap is a prototype for your strategy.

Janna drops a brilliant insight.

I hate that roadmaps typically present a certain view of the future. Repeat after me: There is nothing certain about the future. Nothing. So I loved this framing of roadmaps as strategy prototypes by Janna Bastow, CEO & Co-Founder of ProdPad.

“Your roadmap is a prototype for your strategy.” –

7. Caitlin Kalinowski shares a thing or two about prototyping.

Caitlin visualizes the experience of getting there vs. getting stuck when prototyping.

Caitlin shares a helpful list of prototyping tips.

Speaking of prototypes, Caitlin Kalinowski, Head of Product Design Engineering at Oculus, quickly dropped a lot of prototyping knowledge. I love her visual depiction of the decision to keep investing vs. resetting (the first slide) and she walked through a great list of prototyping tips (second slide).

“Solve the hardest problem first. Iterate like crazy.” –

And finally, I’ll note that I absolutely loved that the vast majority of the day was about product discovery. Don’t get me wrong, delivery matters. Without it, we’d have no products. But for most of us, the hard problems we face are in discovery and we’ve neglected this area for far too long.

The post 7 Standout Moments From Mind the Product SF 2017 was written by Teresa Torres and appeared first on Product Talk.
