Insights from the CDH Benchmark Survey: How Are Teams Adopting Discovery Habits?

“Product thought leaders talk about an ideal way of working. Nobody actually works that way.”

I can’t tell you how many times I hear this sentiment on Twitter and LinkedIn. And I hate it.

I realize that many product people have never worked in a product trio, don’t have access to customers, aren’t given time to test their ideas, and are working in what Marty Cagan calls “features teams” or “delivery teams.”

And just the same, many people do work in product trios, interview customers, test their ideas, and work on empowered product teams.

Both are true. It’s hard for the first group to believe that the second group exists because they’ve never seen it.

But just because you’ve never seen or experienced it doesn’t mean it doesn’t exist, especially when your experience spans only half a dozen or so workplaces.

To help highlight that real teams do work this way, we ran our inaugural CDH Benchmark Survey this past fall, where we asked teams about their discovery habits.

This is the second post about this survey. If you missed the first one on product success and team satisfaction, be sure to start there. It covers who we talked to, how we found them, and the goals of the survey. This post will pick up where it left off.

Shifting From Outputs to Outcomes

Good discovery starts with a clear outcome. Your outcome sets the scope for your discovery. It keeps you focused and ensures that you create value for the business while meeting customer needs.

We asked teams about their habits related to setting outcomes, starting with, “Are you being asked to deliver outputs or outcomes?”

Before answering this question, respondents saw this description of outcomes vs. outputs:

Outcomes vs. Outputs

An outcome is a metric that measures the impact of your work. It’s typically derived from your business model (e.g. business outcomes) or represents a behavior your customer does in your product (e.g. product outcomes).

Outputs are the things that we produce—for product teams this usually means features, initiatives, or programs.

OKRs are a popular way of expressing an outcome.

A chart visualizing results to the question, "Is your team currently being asked to deliver outcomes or outputs?"

The majority of respondents said their teams are asked to deliver a mix of outcomes and outputs.

I was not surprised to see that the most common response (48.3%) was: “A mix of both—we have metrics that we are trying to impact and we get asked to deliver specific features.” Most of us are still learning to start with outcomes, and change often happens more slowly than we’d like.

I was thrilled to see that 20.8% selected: “Outcomes—we are asked to drive metrics, not to deliver specific features.”

For the 30+% who selected “Outputs—we are asked to deliver specific features” or “I’m not sure,” don’t be discouraged. You can start to build your own outcome mindset before your organization catches up. Doing so will help you build better products regardless of your organizational context. If you want help building your outcome mindset, check out our Defining Outcomes course.

For the teams who selected “Outcomes” or “A mix of both,” we then asked, “How many outcomes is your team asked to work on this quarter?”

A chart visualizing answers to the question, "How many outcomes is your team asked to work on this quarter?"

When asked how many outcomes their teams are working on, the majority of respondents said three or more. Ideally, this number should be much lower.

The results for this question are interesting. Only 28.8% of respondents said they were working on a single outcome. I know from working with many teams that most still have several outcomes. We are collectively trying to do too much at once.

I firmly believe that when teams work on one outcome at a time, they have a greater overall impact. When teams are split across multiple outcomes, they get pulled in too many directions and make incremental progress on each at best. I’d rather see outsized progress on one.

It’s clear from these results that focusing on a single outcome is still a minority position. However, I hope to make the case for this in my next blog post about this survey when I connect the different habits to product success.

In hindsight, for teams who are tasked with several outcomes, I should have asked whether each team member owned an individual outcome or the whole team shared responsibility for all of them. I’ll be sure to add that to next year’s survey.

The survey explained that outcomes tend to have a directional component (e.g. increase engagement) and a target (e.g. by 10%). We wanted to understand how long a team had been working on the same directional outcome even if the target changed quarter over quarter.

Generally, I want to see a team focused on the same directional outcome over time. It takes time to learn how to impact a metric and the longer a team spends on an outcome, the more impact they’ll have. If you do a good job of deriving your outcomes from your business model, they shouldn’t need to change that often.

This doesn’t, however, mean that the way that you measure an outcome will stay constant. For example, a team might spend a year focused on increasing engagement. When they first get started, they might start by measuring daily active users. As they learn more, they might evolve to measuring DAU/MAU (daily active users divided by monthly active users). And with more time, they might define and refine specific high-value activities that count as usage. As we learn more about what makes our customers successful, we should develop better ways of measuring that success.
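
To make that evolution concrete, here is a minimal sketch of how a team might compute DAU/MAU from raw usage events. The event list, function names, and numbers are hypothetical; a real team would query an analytics store rather than an in-memory list.

    from datetime import date, timedelta

    # Hypothetical raw data: one (user_id, day) pair per day a user was active.
    events = [
        ("user-1", date(2023, 2, 1)),
        ("user-1", date(2023, 2, 2)),
        ("user-2", date(2023, 2, 2)),
        ("user-3", date(2023, 1, 15)),
    ]

    def dau(events, day):
        """Count distinct users active on the given day."""
        return len({user for user, d in events if d == day})

    def mau(events, day):
        """Count distinct users active in the 30 days ending on the given day."""
        window_start = day - timedelta(days=29)
        return len({user for user, d in events if window_start <= d <= day})

    def dau_mau_ratio(events, day):
        """DAU/MAU: a rough proxy for how sticky engagement is."""
        monthly = mau(events, day)
        return dau(events, day) / monthly if monthly else 0.0

    print(dau_mau_ratio(events, date(2023, 2, 2)))  # 2 DAU / 3 MAU = 0.67

The next refinement the paragraph describes would be to replace “any activity” in these counts with the specific high-value actions that define success for your product.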

A chart visualizing answers to the question, "How long have you been working on the same directional outcome?"

When I asked how long teams had been working on the same directional outcome, responses were spread pretty evenly across the options.

We asked, “How long have you been working on the same directional outcome?” and the responses were spread pretty evenly across the options.

  • Less than a quarter (20.4%)
  • One quarter (22%)
  • Two quarters (26.7%)
  • Three quarters (14.7%)
  • Four or more quarters (16.1%)

In my experience, teams make more progress in their second quarter working on an outcome than they do in the first. That’s because they incur a big learning tax in the first quarter that they are working on an outcome. I was thrilled to see that 57.5% of respondents were working on the same directional outcome for at least 2 quarters. I hope this trend continues.

We also asked respondents who in their organization was involved in setting their outcomes. Respondents were able to select multiple answers.

A chart visualizing the answers to the question, "Who was involved in choosing your team's outcome?"

I asked who was involved in choosing outcomes and was surprised to see that such a large percentage of teams said both their team and leadership were involved.

60.4% of respondents said that their Head of Product (e.g. CPO, VP of Product) was involved. 51.7% said their team was involved. 49.8% said an executive other than their head of product was involved, and 6.3% said other.

In an ideal world, outcomes should be set by both the product team and executive leadership. Participants were able to select multiple items and here’s how the combinations broke down:

  • Just our product team (14.47%)
  • Just executives (44.54%)
  • The team plus any executive leadership (36.38%)

In my experience coaching teams, it seems like either the team picks their outcome (with little to no input from leadership) or leadership picks the outcome (with little to no input from the team). So I was surprised to see that 36.38% of respondents said that both their team and leadership were involved.
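
For readers curious how multi-select answers become the combination percentages above, here is an illustrative sketch of the bucketing logic. The sample responses are invented; only the three category labels come from the survey.

    from collections import Counter

    # Each respondent's answer is the set of options they selected.
    # These sample responses are made up for illustration.
    responses = [
        {"team"},
        {"head_of_product", "other_executive"},
        {"team", "head_of_product"},
        {"team", "other_executive"},
    ]

    EXECUTIVES = {"head_of_product", "other_executive"}

    def bucket(selected):
        has_team = "team" in selected
        has_exec = bool(selected & EXECUTIVES)
        if has_team and has_exec:
            return "team plus executive leadership"
        if has_team:
            return "just our product team"
        if has_exec:
            return "just executives"
        return "other"

    counts = Counter(bucket(r) for r in responses)
    for label, n in counts.items():
        print(f"{label}: {n / len(responses):.1%}")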

The “Other” responses were a reminder that people don’t have great reading comprehension on the web. The most common responses were “CEO” and “Head of Product” (which were both options they could have selected). Other common responses included the product manager, the sales team, the CTO (again, one of the options), and Directors of Product or Engineering.

Understanding Team Exposure to Customers

I’ve long said that the keystone habit of discovery is regular engagement with customers. When we connect with customers on a regular basis, we get exposed to the gap between how we think about our products and how our customers think about them. When teams see this gap, they tend to be motivated to overcome it. They tend to interview more and run more assumption tests.

So naturally, we asked, “When was the last time you talked to a customer?”

A chart visualizing the answers to the question, "When was the last time you talked to a customer?"

I asked participants when they last spoke to a customer and I was happy to see 45.3% said in the past week.

I was happy to see that 45.3% of respondents said “In the past week.” 26.3% said “In the past month,” 11.5% said “In the past quarter,” 9.1% said “More than a quarter ago,” and 7.8% said “Never.”

But before we get too excited about these results, I know that for many of these respondents, “talking to a customer” often means getting pulled into a sales call or being consulted on a support ticket. While these are important activities, they aren’t a replacement for good customer interviews.

So we also asked, “When was the last time you conducted a story-based interview?” Now I know that story-based interviewing isn’t the only way to interview customers. But I do believe it is the most reliable way to learn about our customers (aside from observing them).

We didn’t assume that people knew what story-based interviewing was, so we showed them the following description before asking:

A story-based interview is where the interviewer keeps the participant grounded in a specific story about their past behavior.

For example: “Tell me about the last time you watched Netflix.”

Not: “Tell me about your experience on Netflix.”

Not: “What do you like to watch on Netflix?”

Not: “Would you watch this on Netflix?”

A chart visualizing the answers to the question, "When was the last time you conducted a story-based interview?"

I’m thrilled to see so many respondents say they conducted a story-based interview last week. Yes!

I was pretty happy with the results:

  • In the past week (16.3%)
  • In the past month (22%)
  • In the past quarter (14.6%)
  • More than a quarter ago (15.5%)
  • Never (31.5%)

It wasn’t too long ago that I was regularly criticized for being too idealistic when I suggested teams talk to customers every week. I’m thrilled to see that 293 of our respondents said they conducted a story-based interview last week. Yes! I also know that it takes time to get to this cadence, so I was equally thrilled to see that an additional 395 had conducted a story-based interview in the last month.

While these results do indicate we have more work to do—with 47% of respondents having either never conducted a story-based interview or having done so more than a quarter ago—they also represent tremendous progress.

This question asks about one moment in time—last week. I also wanted to know who was building a habit of weekly interviewing. So for people who said they interviewed last week, we also asked, “Looking back over the past few weeks, how many consecutive weeks have you conducted at least one story-based interview each week?”

A chart visualizing the answers to the question, "Looking back over the past few weeks, how many consecutive weeks have you conducted at least one story-based interview each week?"

It was exciting to see that 28.6% of respondents had been interviewing weekly for at least four weeks in a row.

And here were their responses:

  • Zero (11.2%)
  • One (20.1%)
  • Two (24.5%)
  • Three (15.6%)
  • Four or more (28.6%)

I was thrilled to see that 28.6% of respondents who said they interviewed last week had a 4 or more week streak going. That’s a sign of a strong habit.

I was not surprised to see that most folks are still struggling to build a habit. This is still a new idea for most teams and it takes time to turn this into a sustainable habit.

I was confused by the high percentage of people (11.2%) who said zero. They only saw this question if they said they had interviewed in the previous week, so at a minimum they should have chosen 1.
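
Incidentally, the streak question is easy to state precisely in code. Here is a small sketch of how one might compute a consecutive-week interviewing streak; the boolean-list input format is an assumption for illustration.

    def interview_streak(interviewed_by_week):
        """Count consecutive weeks with at least one story-based interview,
        counting backward from the most recent week (the last element)."""
        streak = 0
        for did_interview in reversed(interviewed_by_week):
            if not did_interview:
                break
            streak += 1
        return streak

    # True = conducted at least one story-based interview that week.
    print(interview_streak([True, False, True, True, True]))  # 3

By this definition, anyone who interviewed last week has a streak of at least one, which is why those zero answers don’t add up.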

Synthesizing What You Learn from Interviews

While there is some benefit from mere exposure to customers, the real value comes from synthesizing what we are learning in our interviews. To understand teams’ habits in this area, we asked a series of questions. We started with, “Did you identify any opportunities in your last interview?”

We didn’t assume that respondents would know what we meant by “opportunity,” so we defined it as follows before they saw the question:

An opportunity is an unmet customer need, pain point, or desire. It’s not a solution or feature request.

A chart visualizing the answers to the question, "Did you identify any opportunities in your last interview?"

I was surprised to see that 77.6% of respondents said they’d identified opportunities in their last interview because I find most teams aren’t clear on the definition of “opportunity.”

I was surprised to see that 77.6% of respondents said yes. In my experience coaching teams, this is not an automatic next step. And I find that most teams aren’t clear on what counts as an opportunity and what doesn’t. I realize now that I also should have asked those who said yes to share an opportunity from their last interview, so I could better assess if we mean the same thing by “opportunity.” I’ll add that to next year’s survey.

We also asked, “Did you create an interview snapshot for your last interview?” Interview snapshots are a way to visually synthesize what you are learning from each interview. They are by no means required (there are other techniques for synthesizing what you learned), but it’s the one that I recommend, so I wanted to measure how many teams were adopting them.

A chart visualizing the answers to the question, "Did you create an interview snapshot for your last interview?"

I was thrilled to see that 34% of respondents created an interview snapshot from their last interview—it’s always nice to see how something you created has an impact on teams.

Creating interview snapshots regularly takes discipline and I know many good continuous discovery teams who don’t bother. So I was thrilled to see that 34% of respondents said yes. This one was rewarding for me. It’s always nice to see how something you created has an impact on teams.

Product Talk readers know that I recommend product trios interview together. To assess how teams are doing, we asked those who earlier indicated that they worked in a product trio, “Did your entire product trio participate in your last customer interview?”

A chart visualizing the answers to the question, "Did your entire product trio participate in your last customer interview?"

I was disappointed to see that the majority of respondents said their entire trio hadn’t participated in their last customer interview.

While the responses here didn’t surprise me, they did disappoint me. I really had hoped we had made more progress on this front. Only 16.8% of respondents said yes, whereas 83.2% said no.

For the folks who said no, we asked, “Who from your product trio didn’t participate in your last interview? (select all that apply)”

A chart visualizing the answers to the question, "Who from your product trio didn't participate in your last interview?"

For the majority of respondents, their engineer was missing from their last customer interview.

I was not surprised to see that 81.3% of the respondents said that engineering was missing. Far too many companies still think the only value engineers offer is writing code. I would love to see us make much more progress on this front.

41.4% of respondents said the designer was missing and 19% of respondents said the product manager was missing. I suspect these results were heavily influenced by the fact that an overwhelming majority of our respondents were from the product management function (69.43%).

These results indicate that we still have work to do when it comes to collaborating as a trio. I’m a strong proponent of trios because they ensure you bring different perspectives into your interviewing and decision-making.

Visualizing Your Thinking: Experience Maps, Opportunity Solution Trees, and Story Maps

One of the key themes in my book Continuous Discovery Habits is for teams to visualize their thinking. The book includes a number of visuals designed to help teams align and examine their thinking, including opportunity solution trees, experience maps, story maps, and much more.

We wanted to better understand who was adopting which visuals and how they were integrated into their day-to-day work.

We started by asking, “Is your team currently using any of the following visuals? (check all that apply)”

A chart visualizing the answers to the question, "Is your team currently using any of the following visuals?"

Respondents tend to use experience maps and customer journey maps—a result that’s not too surprising since these are both broad categories and well-known techniques.

Experience maps and customer journey maps were the most commonly used visuals (46.9% of respondents), followed by user story maps (39.6% of respondents), opportunity solution trees (32.3% of respondents), jobs to be done maps (18.5% of respondents), affinity maps/diagrams (15.3% of respondents), impact maps (6.2% of respondents), and Wardley maps (0.8% of respondents). 23.9% said they weren’t using any of these visuals and 3% said they were using some other visuals.

I’m not surprised that experience mapping and customer journey mapping came out on top. These are broad categories and are well-known techniques.

I was thrilled to see user story maps and opportunity solution trees not too far behind.

I was surprised to see that 23.9% of respondents aren’t using any visuals. I have a hard time imagining good product trio collaboration without them. I’ll be looking at the relationships between the different variables in a different post and will be excited to see how the use of visuals impacts team collaboration and team satisfaction.

I’ve met a lot of teams who are good at creating an experience map or customer journey map one time, but then never revisit it or update it as they continue to learn. So for respondents who said they used experience maps or customer journey maps, we asked, “When was the last time your team updated your experience map or customer journey map?”

A chart visualizing the answers to the question, "When was the last time your team updated your experience map or customer journey map?"

I’m happy to see that many teams are frequently updating their experience maps or customer journey maps.

I was happy to see that 19.9% of respondents said in the past week, 29.5% said in the past month, and 26.9% said in the last quarter. That tells me that these artifacts are living documents as intended. 23.8% of respondents said it’s been more than a quarter since they’ve updated their experience maps or customer journey maps.

For respondents who said they used opportunity solution trees, we asked them, “When was the last time you updated your opportunity solution tree?”

A chart visualizing answers to the question, "When was the last time your team updated your opportunity solution tree?"

The number of people who said they’d updated their opportunity solution tree in the past week or month tells me most teams are using opportunity solution trees as intended.

As a general rule, I want to see teams update their opportunity solution tree every 3–4 interviews. If they are interviewing weekly, that’s at least once a month. So I was thrilled to see that 32.6% of respondents said in the past week and 33.3% said in the past month. That tells me that most teams are using their opportunity solution trees as intended.

20.9% of respondents said in the past quarter and 13.1% of respondents said more than a quarter ago. I tend to see teams drop off from updating their opportunity solution tree when they have a hard time building a sustainable interviewing habit. We’ll see if this relationship holds up in the next post when we look at relationships between the variables.

Getting Value Out of Opportunity Solution Trees

One of the most common ways people misinterpret the intent of an opportunity solution tree is they think they can map the opportunity space without talking to customers. This kind of makes sense. Most product teams have some knowledge of their customers’ needs, pain points, and desires.

Teams are constantly inundated with feature requests, sales feedback, new trends from behavioral analytics, and so much more. But these sources rarely contain enough context for a team to really understand the need, pain point, or desire at any depth. It’s hard to truly solve for an opportunity if we don’t fully understand it.

I like to see teams source opportunities from customer interviews and observations, not from previous knowledge (e.g. often assumptions) or internal stakeholders. So we asked teams, “Where did the opportunities on your current opportunity solution tree come from? (check all that apply)”

A chart visualizing answers to the question, "Where did the opportunities on your current opportunity solution tree come from?"

I’m concerned to see how many respondents said their opportunities came from their current knowledge or internal stakeholders instead of directly from customers.

I was very happy to see that 78.9% of respondents said customer interviews and 59.1% said customer observations. That second number really surprised me. I recommend interviewing over observations, because it’s easier to interview every week than it is to observe customers every week. But observations are a fantastic source of opportunities and I’m glad to see that so many teams are doing them.

74.8% of respondents said opportunities came from their current knowledge or a guess of their customers’ needs and 62% said opportunities came from internal stakeholders. These responses concern me. To be clear, I think internal stakeholders and team members can speculate about customer opportunities, but we need to verify that these opportunities are real by hearing about them directly from customers. This tells me that while many teams are adopting the opportunity solution tree as a way to visualize their thinking, they are missing some of the core benefits of the framework.

Only 3.8% of respondents limited the opportunities on their opportunity solution tree to the ones they are hearing in interviews or are observing directly. If you want to dive into why this is so important, start with this article on sourcing opportunities and then read this one about why these other sources of data are not enough.

Another common challenge for people who are new to opportunity solution trees is breaking out of a project mindset and developing a truly continuous mindset. Most of us are used to working on several-week or even multiple-month projects. As a result, we tend to define large opportunities and work on too many at once.

Instead, I prefer that as people move vertically down the opportunity space, they learn to break big opportunities into smaller and more solvable opportunities. When you get good at this, you should be able to discover and deliver teeny-tiny solutions to teeny-tiny opportunities.

Each iteration contributes to a cohesive whole and chips away at the harder opportunities higher up on the tree. This is what unlocks a truly continuous mindset, as you are able to discover and deliver solutions for a single opportunity and then quickly move on to the next opportunity.

To assess how well teams are doing this, we asked, “How many opportunities (needs, pain points, and desires, not solutions) is your team working on right now?”

A chart visualizing answers to the question, "How many opportunities (customer needs, pain points, or desires, not solutions) is your team working on right now?"

I suspect that many teams are taking on too many opportunities at once and not having their desired impact as a result.

Only 14.7% of respondents said 1. This is not terribly surprising. We all still assume we can do way more than we can. Also, as teams are learning how to accelerate their discovery cycles, it might make sense to work on 2 or 3 opportunities at a time. They might be brainstorming new solutions for one, while waiting for assumption test results for a second, while delivering a third.

For the 26.1% who said 2 and the 39.3% who said 3 or more, if their opportunities are flowing through ideation, assumption testing, and delivery and there’s one opportunity in each stage, that’s fine. But I suspect many of these teams are biting off more than they should. It takes a giant leap of faith to trust that you can have more impact by limiting work in progress and focusing on one thing at a time. Even I struggle with this.

Another key benefit of an opportunity solution tree is that it helps us see where we are comparing and contrasting options and where we are getting stuck in “whether or not” decisions.

Based on what we know from decision-making research, I want to see teams working with multiple solutions for the same target opportunity.

For teams who were working on one opportunity at a time, we asked, “For your current target opportunity, how many solutions is your team currently considering?” And for teams who are working on several opportunities at once, we asked, “For your current target opportunities, how many solutions is your team currently considering on average for each opportunity?”

A chart visualizing answers to the question, "For your current target opportunity, how many solutions is your team currently considering?"

For teams working on one opportunity at a time, the majority are considering one solution.

A chart visualizing answers to the question, "For your current target opportunities, how many solutions is your team currently considering on average for each opportunity?"

For teams working on more than one opportunity, a larger percentage are considering multiple solutions.

These results are interesting. For teams who are working on one opportunity at a time, 44.9% said they are considering 1 solution, 32.3% said 2 solutions, 12.2% said 3, and 10.6% said 4 or more.

Meanwhile, for teams who are working on multiple opportunities at once, 27.3% said they were considering on average 1 solution per opportunity, 40.1% said 2 on average, 20.6% said 3 on average, and 12% said four or more.

I’m skeptical of these results. I find it hard to believe that more teams who are working with multiple opportunities at the same time (40.1% of them) are exploring 2 solutions on average, when only 32.3% of teams who are working on one opportunity are exploring 2 solutions. I have a feeling the “on average” tripped people up here and we may not have collected reliable data. I’ll think about ways to improve that for next year’s survey.

Either way, too many teams are still working with one idea at a time. For teams who are working on multiple ideas, I’d love to learn if these are true compare and contrast situations or if these are multiple solutions in development.

Understanding Where Solution Ideas Come From

Ideas can and should come from anywhere. Research on ideation suggests that the more people involved in ideation, the more ideas we’ll generate. The same research suggests the more ideas we generate, the better ideas we have.

So we asked respondents, “For the opportunities that you are working on right now, who contributed solution ideas? (check all that apply)”

A chart visualizing answers to the question, "For the opportunities that you are working on right now, who contributed solution ideas?"

It looks like a lot of people are contributing solution ideas, but because respondents could select multiple options, the reality is more complicated.

84.1% of respondents said their product manager contributed ideas, 66% said engineers contributed ideas, 63.5% said designers contributed ideas, 52.1% said other stakeholders contributed, 45.3% said customers contributed, and 41.5% said the company’s executives contributed. This makes it look like a lot of people are contributing ideas. But because people were able to select multiple options, the reality is more complicated.

11.7% of respondents chose only one option. For this group, their responses broke down as follows:

  • Just product managers (33.53%)
  • Just designers (10.18%)
  • Just engineers (19.16%)
  • Just executives (16.17%)
  • Just other stakeholders (12.57%)
  • Just customers (8.38%)

18.43% of respondents indicated that product managers, executives, stakeholders, and customers contributed ideas. This group might represent a more traditional model where product managers work with stakeholders to “gather requirements.” It unfortunately leaves designers and engineers out of the process.

6.52% of respondents indicated their product team members were not involved in ideation—only executives, other stakeholders, and customers contributed ideas. It’s clear these teams are not empowered to generate their own ideas.

20.67% of respondents indicated that executives, stakeholders, and customers did not contribute ideas—only their product manager, designer, and/or engineers.

On a positive note, 46.11% of respondents indicated that their product manager, their designers, and their engineers contributed ideas. This is a great indication that we are making progress on truly collaborative product trios.

And 10.23% of respondents indicated that all of these roles contributed ideas. As long as the product trio is empowered to test and decide which ideas to pursue, I think this is optimal. I like to see ideas coming from everywhere.

Evolving Solutions: Getting to Good Ideas

When a team is engaged in continuous discovery, their solution ideas should be constantly evolving. Bad ideas should be thrown out. Mediocre ideas should be improved. It’s rare that a good idea is simply found. Instead, we have to work to evolve our ideas into solutions that might work for our customers.

The best way to learn what’s working with an idea and what’s not is to run assumption tests. We asked respondents, “How many assumption tests or product experiments did you run last week?”

A chart visualizing answers to the question, "How many assumption tests or product experiments did your team run last week?"

Assumption testing is one of the hardest habits to build, so I’m not surprised to see the majority of teams didn’t run any tests last week.

A whopping 70.6% of respondents said zero, 26.6% said 1–3, 2.1% said 4–6, and 0.7% said 7 or more. This is not surprising. We see this same trend with the teams that we work with and in our community.

Assumption testing is one of the hardest habits to build. We still have a long way to go here. If you want help building this habit, check out our Identifying Hidden Assumptions and Assumption Testing courses.

For the folks who answered 1 or more, we also asked, “What types of assumption tests or product experiments did you run last week? (check all that apply)”

A chart visualizing answers to the question, "What types of assumption tests or product experiments did your team run last week?"

The most common type of assumption test or experiment is a prototype test, followed by data mining.

I was not surprised to see prototype testing come out on top. Here’s the breakdown:

  • A prototype test—a simulation designed to evaluate customer behavior (55.4%)
  • Data mining—use of existing data (e.g. behavioral analytics, sales notes, etc.) to evaluate an assumption (43.5%)
  • Research spike—an engineering prototype designed to test a feasibility assumption (38.6%)
  • A/B or multivariate testing—a live production test where you compare the performance of 2+ options (32.6%)
  • A one-question survey—a short survey typically embedded within a product (25%)
  • Other (5.1%)—about half of these responses were customer interviews and the rest were specific types of prototype tests like comprehension tests, usability tests, concierge tests, etc.

As I mentioned in the opening of this section, good discovery teams should be exploring many ideas and throwing many of them away. We asked all respondents, “When was the last time your team discarded a solution that your team was considering?”

A chart visualizing answers to the question, "When was the last time your team discarded a solution that your team was considering?"

I’m happy to see that a large percentage of teams said they discarded an idea in the past week or month.

Here are how the responses broke down:

  • In the past week (16.8%)
  • In the past month (29.7%)
  • In the past quarter (22.7%)
  • More than a quarter ago (13.4%)
  • Never (17.3%)

I’m happy to see that 46.5% of respondents said they discarded an idea in the past week or month. I’m a little dismayed (although not surprised) that 30.7% of respondents either never discarded an idea or haven’t done so in more than a quarter.

Ideas are cheap. We know exploring many ideas is what gets us to good ideas. We need to be throwing out lots of ideas if we want to better serve our customers.

But assumption testing doesn’t just help us eliminate ideas, it also helps us evolve our mediocre ideas into good ideas. So we also asked, “When was the last time you iterated on a solution based on something you learned from an assumption test or product experiment?”

A chart visualizing answers to the question, "When was the last time you iterated on a solution based on something you learned from an assumption test or product experiment?"

I’m glad to see that a large percentage of teams have iterated on their solutions in the past week or month, but dismayed that so many have either never iterated on an idea or haven’t done so for more than a quarter.

16.6% of respondents said “In the past week,” 28.1% said “In the past month,” 18.5% said “In the past quarter,” 14.1% said “More than a quarter ago,” and 22.7% said “Never.” Again, I’m glad to see that 44.7% of respondents said they iterated on their ideas in the past week or month. And I’m dismayed that 36.8% have either never iterated on an idea or it’s been more than a quarter.

This tells me that teams who are assumption testing are using their tests both to throw out bad ideas and to iterate on their mediocre ideas. However, it also tells me that a lot of teams still need help with this habit and that far too many teams are simply building their initial ideas without any evolution. It’s no surprise, then, that so many solutions simply don’t work.

Discovery Tools: Who is Using What

If I think back 15–20 years ago and consider what we had to do to connect with customers, test our ideas, and tabulate results, we have come a long way. Gone are the rooms with two-way mirrors. We no longer spend $30,000 to rent a facility to run a single usability study. We aren’t limited to recruiting participants in our nearby geographic region.

Instead, we have access to amazing discovery tools that help us recruit participants, conduct interviews around the world, launch in-product surveys, design unmoderated prototype tests, and so much more.

We wanted to learn who was taking advantage of these tools. So we asked, “What tools does your team have access to to support your discovery work? (check all that apply)”

A chart visualizing answers to the question, "What tools does your team have access to to support your discovery work?"

Product teams have more tools than ever at their disposal to support their discovery work.

Here’s what respondents said they have access to:

  • Notes from sales conversations, call center logs, or other customer-facing communications (60.5%)
  • A digital whiteboard (54.2%)
  • User behavioral analytics (47.7%)
  • A research repository (29.4%)
  • A one-question survey tool (26%)
  • An interview recruiting tool (22.5%)
  • An unmoderated testing platform (21.2%)
  • User search queries (19.5%)
  • Other (6.2%)

Over 50% of the “Other” responses were “none of the above.” I should have added that as an option. Other tools that came up under “Other” included heat maps, long-form survey tools, design tools like Figma and InVision, and Google.

I was surprised to see that digital whiteboards have such widespread adoption. This is a testament to Miro and Mural for really building this market over the last few years. And I was dismayed to see that fewer than half of the respondents had access to user behavioral analytics and that less than 20% had access to search queries.

I’d love to see the market for all of these tools continue to grow. They really do help us shorten our discovery cycles and that’s better for everyone.

What’s Next?

As I wrote in the first post about this survey, I had two questions I wanted to answer with this survey:

  1. How many people were adopting each of the continuous discovery habits that I outlined in my book.
  2. Whether teams with better habits were more likely to have success than teams who hadn’t developed their habits.

We’ve tackled the first question in this post. It is clear that many teams are working on adopting the different discovery habits. And I hope to see even more progress next year.

In future posts, I’ll dive into the second question. I have a series of hypotheses that I outlined when designing the survey that I’ll evaluate so that we can get a better idea of how different habits impact team success. If you don’t want to miss those posts, be sure to subscribe below.


Product in Practice: Building to Learn at Singularity Creations

Saying no is a huge part of excelling in your product practice. If you say yes to every request that comes your way—whether from customers, salespeople, or executives—you can quickly get overwhelmed and lose focus.

But this isn’t always easy. Product people naturally want to solve any problem they’re presented with, especially when it’s a small request that wouldn’t take too much time to build.

Of course there’s always a tradeoff. Anytime you say yes to one task, it means you’re saying no to—or at least postponing—several others.

So how do you know when to say no? And what happens if you change your mind somewhere down the line and want to change your answer to yes?

Continuous discovery can help. Since you’re starting with a clear desired outcome, it’s easier to say no to any requests that aren’t aligned with your outcome. And using tools like the opportunity solution tree can help you visualize your thinking so stakeholders better understand what you’re focusing on and why. This means you can spend less time telling them why you’re not moving forward with a random request and focus your attention on activities that impact your outcome.

Today’s Product in Practice showcases how one product manager, Helena Jeret-Mäe, initially said no to the idea of building a particular feature stakeholders had been requesting. But by using continuous discovery habits to slowly chip away at the opportunity, she eventually ended up in a place where she was comfortable saying yes.

Have your own Product in Practice story you’d like to share? Submit your story here.

Meet the Continuous Discovery Champion, Helena Jeret-Mäe

A headshot of Helena Jeret-Mäe

Meet the continuous discovery champion, Helena Jeret-Mäe.

Helena is a product and project manager at Singularity Creations, where she wrangles a few different apps while wearing different hats. Her main focus (and what we’ll be exploring in detail in this Product in Practice) is an asynchronous coaching platform that gives coachees the flexibility to engage with coaching content whenever and wherever it’s convenient. Their current customers operate in the field of education where they coach their clients (who are teachers) on how to use their educational program and tools.

When Helena joined her company in the spring of 2021, she says the product delivery process involved building a lot of hypothetically relevant features until they ran out of steam because they couldn’t increase adoption. It tended to be a one-way flow of information from the client to the product team. Helena quickly recognized something needed to change: “I knew I wanted to nudge them away from that and I started realizing I must not be the first person to face this problem.” So she began an educational journey, listening to podcasts and reading about product discovery best practices. Eventually she found Continuous Discovery Habits and signed up for the Product Talk Opportunity Mapping course.

The Old Way of Working: A Brief Example

To help illustrate how continuous discovery has transformed her approach to work, Helena says, “I have this one particular thing that’s still in the product that’s completely useless.” Early in her time at Singularity Creations, a customer asked them to add a field to the coaching platform to hold a link. “I realized at the time that I wanted to say no, but I didn’t have any usable information from actual use cases and the customer was hypothetically saying they would need it.” So they built the field. “And to this day it’s never been used,” says Helena.

Looking back on this experience, Helena sees how her perspective has changed. “In my mind, I didn’t have this model where we could say, ‘This is an idea. Let’s break it down. How can we tell if it would be worth it for the customer?’” And since it was a small ask—just one field—they added it. If nothing else, this field reinforces the importance of continuous discovery for Helena. She says, “It doesn’t bother anyone, but it reminds me that at that time I didn’t have a way to process incoming requests in a context which helps to compare and contrast them in a structured way.”

Using Continuous Discovery to Break a Big Problem into Small, Manageable Pieces

Helena says the Product Talk Opportunity Mapping course led to a breakthrough in the way she thought about discovery: “I felt like I finally had a way to structure things in my head.” And what stood out to her most was the iterative nature of continuous discovery. Instead of tackling one big problem with one big solution, she could break it into small opportunities which she could validate through quick tests. Let’s dive into one specific feature to see how Helena applied these concepts to her work.

“We had learned about a year ago that the users in the system needed to nudge other users they were coaching who were falling behind,” says Helena. At the time, there was no way to do that in the system. Coaches could email coachees outside of the system, but since the information on what they were falling behind on and by how much was in the system, it made sense to add some capability there.

A simple opportunity solution tree diagram with "Increase efficiency of coach process and group management" at the top and two opportunities below it.

This simple opportunity solution tree captures how Helena was thinking about messaging at the beginning of this process.

At that time, Helena made the conscious choice not to build a full-fledged messaging system within the platform. She looked for small, cheap ways to add the nudging feature and validate the idea. “We gave the users a small box and a text button for each user they needed to nudge,” explains Helena. “Our developer was smart enough to say we could bootstrap this solution upon notifications that already live in the system and notify the users about certain events, like something was completed or something was added. So what we did was have the users send notifications to each other disguised as messages.”
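
The article doesn’t share the team’s actual implementation, but the bootstrapping idea of reusing an existing notification pipeline to carry messages can be sketched roughly like this. All names here are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Notification:
        """An event notification the system already knows how to deliver."""
        recipient_id: str
        event_type: str  # e.g. "task_completed", "item_added"
        body: str
        created_at: datetime

    MAX_MESSAGE_LENGTH = 500  # the character limit described below

    def send_nudge(notify, sender_id, recipient_id, text):
        """Send a coach-to-coachee message disguised as a notification.

        `notify` stands in for whatever function already delivers
        notifications; we add a new event type instead of building chat.
        """
        if len(text) > MAX_MESSAGE_LENGTH:
            raise ValueError("message exceeds 500 characters")
        notify(Notification(
            recipient_id=recipient_id,
            event_type="coach_message",  # piggybacks on the existing pipeline
            body=f"Message from {sender_id}: {text}",
            created_at=datetime.now(),
        ))

The trade-offs Helena describes fall straight out of this shape: notifications are one-way and write-only, so there is no sender-side history and no reply path.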

In this first iteration, it was just a text box with a 500-character limit and a send button. Users couldn’t see what they had sent, so there was no history. The recipient of the message could only read the message and couldn’t respond. This feature was well received—the coaches were thrilled to be able to communicate with coachees in this manner.

A screenshot of the chat feature. The title reads "Message helenacc1 in Helena group A." There's a text box below it, and a button that says "send" below that.

In its first iteration, the chat feature was just a text box with a 500-character limit and a “send” button.

After this first iteration, Helena spent time observing the users’ actions in LogRocket, where she saw that they copied and pasted the same message for everybody that they had to send a message to. Even though users hadn’t complained about this explicitly, Helena added the ability to send a message to an entire group in the next iteration. It still didn’t have all the features of a group chat—it was still based on notifications, so it wasn’t possible to show the history of messages a coach had sent and there was no back and forth history in the chat. “It was clunky, but it worked,” says Helena.

Around this time, Helena began talking to customers to see how they used it. What kept coming up was that after sending a message, users would say, “I can’t remember when and what exactly I had sent to this person.” At this stage, Helena went back to the drawing board and asked, “If we had to make it so they could see the history and those messages they had sent, then what kind of solution would make sense?”

The product team found themselves at a crossroads: Should they continue bootstrapping or should they actually build a messaging system? By this point, they’d made several small iterations on the bootstrapped version and they’d had the opportunity to talk with users and observe their sessions in LogRocket. Explaining the decision, Helena says, “We decided that we would ditch the bootstrapped version and go for the actual real-time chat (using WebSocket), which is built so the messages and notifications about messages will be sent instantaneously, like in Facebook Messenger.”
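
For contrast, here is a minimal sketch of the real-time pattern the team moved to: persistent WebSocket connections plus a broadcast. It uses recent versions of the third-party Python websockets package and illustrates the general approach, not Singularity Creations’ actual code.

    import asyncio

    import websockets  # third-party package: pip install websockets

    connected = set()  # all currently open client connections

    async def handler(websocket):
        # Each client keeps a persistent connection, so messages arrive
        # instantly, like in Facebook Messenger.
        connected.add(websocket)
        try:
            async for message in websocket:
                # A real app would also persist the message so users can
                # see their history (the need that drove this rebuild).
                websockets.broadcast(connected, message)
        finally:
            connected.remove(websocket)

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())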

A screenshot of the chat feature. There is a list of groups on the left and a close-up of one chat on the right, displaying the history of all the interactions between one coach and their coachee.

The chat feature has evolved to be much more complex. Users can now message multiple people and groups and see the history of their past interactions.

Reflecting on this journey, Helena is grateful that she was able to apply the iterative nature of continuous discovery. “Had I decided a year ago that I needed to make a messenger, it might have taken us a long time and a lot of effort to build it without really knowing if we were going to need it and what it should actually look like. So that’s one of the wins—I didn’t make this big leap from this one tiny piece of information to a fully fledged, full-blown system of managing messages between users. We started really small and it just led us in that direction at the right time and with just enough information so we’d know how much to build.”

How Stakeholders Have Responded to The New Way of Working

Helena has been amazed by the responses she’s received from various stakeholders. When they made the decision to switch from bootstrapping to building the full messenger, the engineer was fully on board. “Building it wasn’t that hard and I think part of the reason was that the developer also didn’t have to come up with the solution all at once. He had been iterating on it, he knew the feedback that was coming in, so he actually had time to ponder in the background and determine the best technical solution for it.”

Similarly, when presenting her plan to company leadership and the client who is investing in the development of the app, Helena says, “It was the easiest buy-in I’ve ever witnessed.” There was a lot of nodding and people saying things like, “This is brilliant!” and “This makes a lot of sense!”

A screenshot from a slide deck labeled "Product discovery case study: messaging." Under each month (August, September, and October) is a list of discovery work that Helena conducted.

Helena summarized her main discovery activities into this slide for stakeholders. They loved how easily it connected her discovery work to business outcomes.

Helena admits that her “evil plan” was to get stakeholders on board with the structure and thinking of continuous discovery so that when they’re making decisions down the line, it will be easier to remind them that they’d all agreed to work this way. “I didn’t ask for permission—I said we are doing this and these are the results, so I got them on board fairly easily,” she explains.

Reflecting on why this process was so seamless, Helena says, “With the discovery process as Teresa has set it up, there’s a narrative, a structure, and a logic. And you can use that to build a story to tell.”

Key Learnings and Takeaways

As she considers her continuous discovery journey so far, Helena has made a few key observations.

  • Continuous discovery frees up mental capacity.

Having a mental model to guide her work frees up mental space. “If you need to build something big, it takes a lot of mental work to build up the model that you need to create, it takes a lot of energy to keep it in your mind, and it takes a lot of energy to manage everything around it,” says Helena. But using the opportunity solution tree and other continuous discovery tools eases that burden. “I don’t need to keep huge models in my head, so that alleviates the mental pain.”

  • Making smaller decisions is better for your product—and your own well-being.

When you make one big decision to build a feature, you tie your hands because then you have to make 100 small decisions while trying to deliver that one bigger thing. “I get anxiety over that,” says Helena. “I feel that it eats at me, so it’s not very good for mental health.” With continuous discovery, though, you’re just making a series of very small decisions while still making a big impact. “We can alleviate a lot of the everyday user pain by doing the small thing.”

  • Learning to build just enough is a skill.

“It’s difficult to restrain yourself at first,” says Helena. “But eventually you learn to ask yourself, ‘How do we know that we have to build this?’” With practice, you learn to evaluate ideas and de-scope them. “Over time, it becomes easier to do less.”

  • It’s important to find the balance between process and the actual content of your work.

“I love the saying, ‘process is the product,’” says Helena. “Whatever process you use will give you the result. So if you don’t like the result—or the product—you should look at the process.” And continuous discovery really brings process into focus, which ultimately makes it easier to work on the content part of your job.

Helena first shared this story with the Continuous Discovery Habits community. Want to hear more stories like this one on a regular basis and get the inspiration and motivation you need to apply continuous discovery to your own work? Come join us!


Discovering Solutions: Quickly Determine Which Ideas Will Work (And Which Won’t)

A visual depiction of a decision tree with a desired outcome as the root, followed by opportunities, solutions, and experiments.

Visualizing discovery work with an opportunity solution tree has been a game-changer for both me and the teams that I work with.

It’s a simple visual that packs a lot of power.

At the top, we start with a clear outcome. This sets the scope for our discovery.

From there, an effective team is doing two key research activities week over week. They are interviewing to discover opportunities and they are assumption testing to discover the best solutions.

Interviewing is generative. Assumption testing is evaluative. We need both.

If we interview well, we’ll hear about a myriad of unmet customer needs, pain points, and desires—collectively called opportunities. But we won’t know if we can deliver on those opportunities until we find the right solution. And assumption testing is the best way to do that.

Over time as we continuously deliver solutions, we address more of the opportunity space, and we have more of an impact on our outcome.

This simple formula is how we create customer value (by delivering on opportunities) in a way that creates business value (by driving our outcome).
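
Because the tree is just a hierarchy, its structure is easy to make concrete. Here is a minimal sketch of an opportunity solution tree as nested data; the example outcome, opportunity, and solution are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Solution:
        idea: str
        assumption_tests: list = field(default_factory=list)  # experiments to run

    @dataclass
    class Opportunity:
        need: str  # an unmet customer need, pain point, or desire
        children: list = field(default_factory=list)  # smaller sub-opportunities
        solutions: list = field(default_factory=list)  # candidate solutions

    @dataclass
    class OpportunitySolutionTree:
        outcome: str  # the clear outcome that sets the scope for discovery
        opportunities: list = field(default_factory=list)

    tree = OpportunitySolutionTree(
        outcome="Increase weekly engagement",
        opportunities=[
            Opportunity(
                need="I can't find something new to watch",
                solutions=[Solution("Personalized recommendations row",
                                    assumption_tests=["prototype test"])],
            ),
        ],
    )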

There’s only one problem. Most teams don’t work this way.

Reimagining the Way We Work

A photograph of a slightly unwound cassette tape against a yellow background.

Most of us are stuck in outdated ways of working and it takes time to change.

Most of us grew up in a world where we were asked to deliver specific outputs. Our jobs have been defined by racing toward impossible deadlines and managing infinite feature requests.

As a result, we can’t expect to change the way we work overnight. We have to learn new skills. We have to adopt new mindsets. We need to practice new ways of working.

Most of us grew up in a world where we were asked to deliver specific outputs, race toward impossible deadlines, and manage infinite feature requests. We can’t expect to change the way we work overnight. – Tweet This

Since 2013, Product Talk has helped teams do exactly this. I started with my 12-week coaching program. I coached teams week-over-week in the context of their own work. It was a powerful way of helping teams adopt new mindsets and invest in new habits. I have loved every minute of it.

But as I wrote in my January post, coaching doesn’t scale well. So in 2017, I released my first online course, Continuous Interviewing. Over the last 5 years, we’ve offered this course 20 times and have had more than 750 students go through this program.

I started with this course because I strongly believe that continuous interviewing is the keystone discovery habit that helps to unlock the other discovery habits.

In 2020, we expanded our course offerings. I partnered with Hope Gurion, who designed and teaches our Defining Outcomes course, and I designed and continue to teach Opportunity Mapping.

These three classes—Defining Outcomes, Continuous Interviewing, and Opportunity Mapping—cover the top half of the opportunity solution tree. Defining Outcomes helps you choose the best outcome to start with. Continuous Interviewing teaches you how to collect specific customer stories and how to identify opportunities from those stories. And Opportunity Mapping teaches you how to map out the opportunity space, giving you a big picture view of how you might reach your outcome.

In January 2021, I announced that we’d be releasing new courses to help you work the bottom of the tree—teaching you how to discover solutions. But that didn’t happen.

As our course business grew, I got more and more dissatisfied with the tools that we were using. Little things weren’t working well. Our course platform didn’t provide adequate receipts for most companies in the European Union. Email updates were hard to process because we had to manually update students’ emails in multiple tools. We saw an increase in Zapier errors because the webhook implementations we relied on were flaky—basically, we were stitching our tools together in unreliable ways. Too many of these problems were affecting our students. This wasn’t how I wanted to run my business.

This isn’t the sexy side of running a business, but if you want to create a great course experience, it’s what’s required given the tools that we have to work with today. I often compare the online course platform world today to the blogging world in 2003. You can do it, but if you want it to be a seamless experience, you’d better have some tech chops.

So instead of launching new courses, I spent 2021 rebuilding my tech stack. I dusted off my coding skills, dove into the AWS world, and started using APIs to better stitch the tools and services we use together.
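To make “stitching tools together” concrete, here’s a minimal, hypothetical sketch of the kind of glue code this involves: a webhook receiver that verifies the sender’s signature and skips duplicate deliveries before syncing a student’s email across tools. The header name, event shape, and sync helper are illustrative assumptions, not our actual implementation.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret and in-memory dedupe store; a real handler
# (e.g. an AWS Lambda) would load these from config and a durable store.
WEBHOOK_SECRET = b"replace-with-shared-secret"
seen_event_ids: set[str] = set()

def handle_webhook(headers: dict, body: bytes) -> int:
    """Process one webhook delivery; return an HTTP status code."""
    # 1. Verify the payload really came from the sending tool.
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, headers.get("X-Signature", "")):
        return 401  # reject forged or corrupted deliveries

    # 2. Drop duplicate deliveries so retries don't double-apply updates.
    event = json.loads(body)
    event_id = event.get("id", "")
    if event_id in seen_event_ids:
        return 200  # already processed; acknowledge so the sender stops retrying
    seen_event_ids.add(event_id)

    # 3. Apply the change everywhere it needs to go (illustrative event type).
    if event.get("type") == "student.email_updated":
        sync_student_email(event["old_email"], event["new_email"])
    return 200

def sync_student_email(old: str, new: str) -> None:
    # Placeholder: call each tool's API (course platform, email list,
    # community) so the update happens once, in one place.
    ...
```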

Today, it’s not perfect. We are still limited by the APIs that are available to us. I desperately wish Slack would offer API support for user management at a price point that worked for communities. I equally wish Miro would allow me to create a new board from a template via their API. But alas, these things do not exist. And so we make do. But we are in a much better position than we were a year ago and we are ready to grow our course business.

And that’s why today, I’m thrilled to announce that we’ll be offering two new courses this spring and summer—both designed to help you discover solutions.

I’m thrilled to announce that we’ll be offering two new courses this spring and summer—both designed to help you discover solutions. – Tweet This

Discovering Solutions Quickly, Iteratively, and Sanely


When it comes to discovering solutions, timing is everything. Our goal is to discover if we are building the right thing before we design or build it—not after.

Marty Cagan argues that the best product teams complete 12–15 discovery iterations every week. He defines an iteration as something we do to further our understanding of what to build. To most teams, this sounds like an impossible pace.

But that’s because most teams are still operating from a project mindset. We test our ideas by prototyping the entire solution—doing all of the design work before we learn if it’s the right thing to build. Or we build the entire solution, relying on A/B testing to tell us if we built the right thing after we’ve already built it.

Prototyping and A/B tests are invaluable tools in our toolbox. The problem is not with the tools. It’s with how we are using them. Our goal with discovery is to determine if we are building the right thing before we design or build it—not after.

Our goal with discovery is to determine if we are building the right thing before we design or build it—not after. – Tweet This

To do this, we need to return to an idea that Eric Ries introduced over ten years ago in The Lean Startup. We need to stop testing whole ideas and instead shift our focus to testing the assumptions that need to be true in order for our ideas to work.

But knowing that we need to test our assumptions is rarely enough. With solution testing, in particular, it’s really easy to be blind to our own assumptions. They feel so true that we don’t even notice that we are making them.

So to shift from idea testing to assumption testing, we have to become cognizant of the assumptions we are making in the first place. This is harder than it sounds.

To shift from idea testing to assumption testing, we have to become cognizant of the assumptions we are making in the first place. This is harder than it sounds. – Tweet This

It’s complicated by the fact that we work in teams. Each person on our team likely has a slightly different interpretation of what our solutions mean—how they’ll work, what they’ll look like, where they’ll live. And as a result, we are each making different assumptions.

Before we can start testing our assumptions, we need to not only understand the assumptions we are individually making, but also align with our teammates on what our solutions mean and then generate assumptions together.

As of today, we are announcing two new courses: Identifying Hidden Assumptions and Assumption Testing—both designed and taught by me.

Identifying Hidden Assumptions: Uncover the Risky “Gotchas” That Can Derail Your Product Ideas

This course is designed to help you see the hidden assumptions that are lurking behind your ideas.

In this five-week course, you’ll learn how to:

  • Create story maps to help you align as a team on what your ideas mean and how they might work.
  • Use your story maps to identify desirability, usability, and feasibility assumptions.
  • Walk the lines of your opportunity solution tree to uncover viability assumptions.
  • Avoid unintended consequences by intentionally exploring where potential harm might creep into your solutions.
  • Quickly identify which assumptions to prioritize and test.

We’ll be offering this course in May and September. Each cohort is limited to 50 students and is first-come, first-served. Learn more here.

Assumption Testing: Quickly Determine Which Ideas Will Work and Which Won’t

This course is designed to help you quickly test your assumptions, allowing you to collect the compare and contrast data you need to make good decisions about what to build.

In this five-week course, you’ll learn how to:

  • Match the right research method to any given assumption.
  • Guard against confirmation bias and team disagreements by defining success upfront before you run your tests.
  • Coordinate iterations and variations to ensure your results will represent what you’ll really see in practice.
  • Make better decisions about what to build by understanding which assumptions hold up and which don’t.

We’ll be offering this course in June and October. Each cohort is limited to 50 students and is first-come, first-served. Learn more here.

If you are interested in honing your skills and developing your discovery habits, I hope you’ll join us. You can find all of our course offerings here.




Adopting a Continuous Discovery Mindset Across the Organization

I teach two courses at Northwestern University. I want to share a little bit more about the work I am doing there, as I think it’s relevant for people at companies that are starting to adopt continuous discovery practices (in other words, many of you).

The first course focuses on teaching practitioners how to use design methods to solve business challenges related to learning and organizational change. Students learn the same design process I teach to the product teams that I coach. Our students are a mix of full-time graduate students and working professionals who pursue a graduate certificate while working.

The second is an executive education course designed to help leaders understand how their role changes as their organization moves toward continuous discovery. They learn how to manage by outcomes and how to co-create change. This course is a 3-day seminar and our students are working executives. I co-teach both courses with a fantastic co-instructor, Jeff Merrell.

I love teaching these courses because it’s a way for me to apply my coaching curriculum to a new context. Instead of using continuous discovery to design products and services, our students are using continuous discovery to address internal people challenges like employee engagement, team effectiveness, and diversity and inclusion. I’ve been blown away by how well the same approach works in this new context.

It got me thinking about how, when companies first started adopting Agile methodologies on their software engineering teams, they quickly ran into obstacles with the rest of the organization. It’s hard to be Agile when the rest of the business wants fixed delivery dates or wants to dictate requirements. The same is true when we shift to a more continuous mindset.

Just as it’s hard to be Agile when the rest of the business isn’t, the same is true when we shift to a more continuous mindset. We can’t truly work as cross-functional, continuous teams if the rest of the organization doesn’t also adopt this mindset. – Tweet This

We can’t truly work as cross-functional, continuous teams if the rest of the organization doesn’t also adopt this mindset. And so I love getting to experiment with how this mindset can apply to broader business challenges.

As part of the preparation for our executive education course, I wrote an overview of the design approach we’ll be using. It will look very familiar to you. I want you to see how the work that we do is not just about product; it’s about solving problems and making good decisions, and it’s applicable broadly across business.

The work we do with continuous discovery is not just about product. It’s about solving problems and making good decisions, and is applicable broadly across business. – Tweet This

Also, if you have leaders in your organization who want to learn more about how their role changes when they manage teams who adopt a continuous discovery mindset, refer them to our upcoming seminar.

Before we dive in, some notes on terms:

  • ELOC stands for Executive Learning and Organizational Change
  • DOEC stands for Designing for Organizational Effectiveness Certificate

The DOEC / ELOC Design Approach


Many business challenges can also be defined as design challenges.

It’s easy to think about design thinking in the context of designing products and services, but it applies equally well to broader business challenges.

Design challenges are challenges where elements of the problem space are still unknown, there are multiple criteria for evaluating solutions, and the solver has to make many judgment calls when designing a solution. In other words, many business challenges are design challenges.

When we set out to increase employee engagement, improve team efficiency, or expand knowledge sharing in our organizations, we are tackling design challenges. We don’t always know what is causing a drop in employee engagement or why teams aren’t as effective as they could be or what the barriers to knowledge sharing might be. We need to start by framing the problems that impact these challenges.

We may not realize it, but when we set out to increase employee engagement, improve team efficiency, or expand knowledge sharing in our organizations, we are tackling design challenges. – Tweet This

Similarly, there is no silver bullet that will fully address each of these challenges. Instead, we have to patch together a mish-mash of solutions that each address different parts of the problem space. We have to use our judgment to assess what those solutions might be, and how I assess one solution might be different from how you would assess that same solution.

Finally, we are rarely—if ever—done. While we may see improvements as we explore solutions, we can always get better. These are evergreen challenges that require persistent effort. It’s easy to feel like we are playing whack-a-mole, where once we address one target area, the next one pops up.

These open-ended business challenges can benefit from taking a design approach.

What We Mean by a Design Approach


A good design approach starts with the end in mind.

A good design approach starts with the end in mind. We start by defining the outcome we are trying to achieve. The easiest way to do this is to look at our challenge and ask, how will we know when we’ve solved it? The answer to that question is your conceptual start to defining a desired outcome. We’ll explore this more in the “Shifting from Outputs to Outcomes” section.

A good design approach starts with the end in mind. We start by defining the outcome we are trying to achieve. The easiest way to do this is to look at our challenge and ask, how will we know when we’ve solved it? – Tweet This

We then need to take time to define the problem space. Remember, design challenges are ones where elements of the problem space are unknown. We may not know why morale is low or why employees are guarding their knowledge. Even worse, we may think we know why morale is low and end up solving the wrong problem. We need to spend time discovering what’s happening right now in the organization before we can solve for tomorrow.

When discovering what’s happening right now, it’s easy to focus on problems and miss the bright spots in the organization—the areas where the problems are already solved. To help you remember to look for both problems and bright spots, we reframe the “problem space” as the “opportunity space.” Think about opportunities as both problems and bright spots. We have an opportunity to solve the problems and replicate the successes. We’ll explore this more in the “Discovering Opportunities” section.

Finally, we need to discover the solutions that might address those opportunities. With a design approach, we encourage you to co-create with the people who will be impacted by your solutions, by iteratively prototyping until you find something that works. We’ll cover this more in the “Discovering Solutions” section.

While I described this process linearly, it’s not a linear process. Design is a messy, iterative process that often loops back on itself. When we learn that a particular solution doesn’t work, we also learn something new about the opportunity space. As we explore the opportunity space, we learn more about the outcome we are trying to drive. Each step feeds back into the previous step. We’ll explore how to track this messy process in the “Managing Your Progress” section.

Shifting from Outputs to Outcomes


We sometimes feel like we’re making progress because we have outputs. But it’s important not to mistake outputs for outcomes.

Suppose you are trying to increase knowledge sharing in your organization. You might be considering programs like rolling out an employee social network (ESN), starting a lunch-and-learn speaker series, and pairing novices with more senior experts through an apprenticeship program.

In many organizations, we assign teams, set goals, and kick off projects to roll out each of these programs and we feel like we’ve made progress. But have we?

It’s quite possible that all three programs will fail. Nobody uses the ESN, attendance at the lunch-and-learn series is strong at first but quickly wanes, and the apprenticeship relationships are superficial at best.

Each of these programs is an output. It’s something that we produce to achieve an outcome—in this case expanding knowledge sharing. But if we don’t measure the impact of these programs on that desired outcome, we are simply assuming that we’ve made progress.

In ELOC / DOEC, when we talk about taking a design approach to our business challenges, we start by encouraging you to shift from an output mindset to an outcome mindset. We still need to produce outputs to accomplish our outcome, but producing outputs alone is not enough. We aren’t done until we’ve accomplished our outcome.

This is one of those ideas that seems so simple that we often miss the insight. It’s easy to think we are focused on the outcome; after all, that’s why we set a goal in the first place. If we set a goal to increase knowledge sharing in the organization, does that mean we have an outcome focus?

Not necessarily. The value of shifting from an output focus to an outcome focus comes when it’s time to design and implement programs. If we are asking our teams to implement specific programs, we are communicating that it’s the outputs rather than the outcome that we care about.

Being outcome-focused means we ask our teams to deliver outcomes rather than outputs. We give them the autonomy to find the best path to those outcomes themselves. – Tweet This

Being outcome-focused means we ask our teams to deliver outcomes rather than outputs. We give them the autonomy to find the best path to those outcomes themselves. We may suggest outputs (i.e. programs or initiatives they might consider), but we give them the freedom to explore those suggestions alongside their own. Most importantly, we measure their success by whether or not they achieved the outcome, not whether or not they delivered the outputs.

This output vs. outcome distinction can be applied to many business challenges. Here are a few examples:

  • Outcome: Increase employee engagement. Outputs: career progression programs, team offsites.
  • Outcome: Improve team effectiveness. Outputs: team charters, training on giving effective feedback, communication style assessments.
  • Outcome: Expand knowledge sharing. Outputs: employee social networks, communities of practice, employee resource groups, lunch-and-learn series, apprenticeship programs.
  • Outcome: Increase diversity and inclusion. Outputs: unconscious bias training, anonymized résumé screening, sponsorship programs.

However, the distinction between outcomes and outputs is not always this clear. It’s easy to define an outcome that implies a specific output. For example, we might define our outcome as “Increase attendance at team charter trainings.” This is a quantifiable goal that we could imagine assigning to a team. We may give them complete flexibility on how they might reach that goal. They could host more trainings, raffle off prizes to those who attend, share success stories from those who did attend, etc. Based on our definition so far, this is a good outcome.

There’s only one problem. What if the team charter trainings aren’t effective? Increasing attendance doesn’t get us anywhere. So we might want to widen our lens and set our outcome as “increase the use of team charters.” This would allow the team to explore ways to get teams to adopt team charters beyond attending the trainings.

But what’s the value of team charters? They help improve team effectiveness. But what are other ways of improving team effectiveness? You can see where this is going.

Think about outputs and outcomes as two ends of the same spectrum. When we talk about shifting from an output mindset to an outcome mindset, we want you to shift right on that spectrum. But how far right you shift will depend on the scope of your team. If we continue with our current example, shifting right on the spectrum might look like this:

  • Host more trainings on team charters.
  • Increase attendance at team charter trainings.
  • Increase the use of team charters.
  • Improve team effectiveness.
  • Improve productivity.
  • Reduce costs.

Your executive leadership team might be focused on reducing costs. Your COO might be particularly interested in improving productivity. Your Head of HR might be interested in improving team effectiveness. An HR Business Partner might be working with her teams to increase the use of team charters. And so on.

It’s important to take the team’s span of control into account when setting their outcome. If you set the outcome too far to the left, you’ll be dictating specific outputs. But if you set the outcome too far to the right, the team will consider outputs that are too far outside of their remit. Adjust the outcome to match the scope appropriate for the team.

Discovering Opportunities


Discovering opportunities means looking for pain points and bright spots.

With an outcome in mind, our goal is to get a clear picture of what’s happening in the organization today. We want to discover the opportunities—needs, pain points, desires, wants—that, if addressed, would drive our desired outcome. Remember, opportunities include both the problems that we find and the bright spots that we can replicate.

We discover opportunities by interviewing the people affected by our challenge. If we continue with our knowledge sharing example, we might interview employees about times when they needed knowledge and didn’t know where to get it (the problem mindset) and times when they shared knowledge they had or a coworker shared knowledge with them (the bright spot mindset).

As we collect these stories, we want to listen for different things. In the problem stories, we listen for the needs and pain points that emerge. In the bright spot stories, we listen for the desires or wants that were met and that we might replicate. In both types of stories, we want to listen for moments in time where we can either intervene or replicate success.

As we collect stories of problem areas and bright spots, we want to listen for specific moments in time where we can either intervene or replicate success. – Tweet This

It’s easy to get overwhelmed by the sheer size of an organizational challenge. By collecting stories, we can identify specific moments in time where we might intervene. This allows us to start small and build momentum over time. Instead of framing the problem as “employees don’t share their expertise with others,” a challenge that is too big and vague to solve effectively, we might identify a series of moments in time where this problem occurs:

  • A new sales rep needs help overcoming a tough objection, but the senior reps are too busy with their own clients to help.
  • A senior sales rep doesn’t know what products and services are in the pipeline because the roadmap is always out of date.
  • The finance team is struggling to put together an accurate forecast because the sales reps are reluctant to commit to their forecasts.

Each of these moments in time is much easier to solve for than the general problem that employees don’t share their knowledge with each other. By interviewing and observing what’s happening in the organization today, we can turn a big, messy problem into a series of smaller, solvable problems.

By interviewing and observing what’s happening in the organization today, we can turn a big, messy problem into a series of smaller, solvable problems. – Tweet This

Discovering Solutions


One of the keys to discovering solutions is setting up compare and contrast decisions.

Armed with a clear picture of what’s happening in the organization today, we can now turn to solutions. There are two key principles we focus on when exploring the solution space. The first is to set up compare and contrast decisions and the second is to test and iterate on solutions. We’ll look at each in turn.

The human brain is remarkably good at closing the loop—when we hear about a problem, we jump to a quick solution. The problem is that these first solutions are rarely the best ones.

Additionally, when we work with one solution at a time, we set ourselves up to fall prey to confirmation bias—we see the evidence that supports our solution and we miss the evidence that doesn’t support it. To avoid this fate, we want to consider multiple solutions for the same opportunity, setting up compare and contrast decisions.

In most organizations, if the product team isn’t sharing the roadmap with the sales team, the quick and easy solution is to schedule a recurring meeting where the product team updates the sales team. However, these meetings are expensive in everyone’s time. Is this really the best solution? To answer that question, we want to consider multiple solutions and compare and contrast them against each other.

We can use rapid prototyping to quickly test and iterate on our solutions, giving us the data we need to effectively compare and contrast our consideration set. It’s easy to confuse prototyping with pilot programs, but a key difference is that with prototyping we focus on simulating (not implementing) the experience we aim to create with our solution. Because we aren’t focused on implementation, we don’t have to simulate the whole experience, but can instead focus on the parts that represent the most risk. This not only enables us to move quickly, it also helps us collect more reliable feedback on our solutions, because the people they are intended for get to experience them, not just hear about them. They get to co-create with us.

As we iteratively test the riskiest parts of our solutions with the folks who will be impacted by our solutions, we collect data about which solutions will work and which won’t. More often than not, we start with several mediocre ideas. It’s hard to have a great idea right from the start. But through iterative testing, we learn how to turn mediocre ideas into great ideas. And as an added bonus, because we’ve been testing with our constituents along the way, they are excited to embrace our solutions.

Managing Your Progress

It all sounds so neat and tidy. In reality, it doesn’t quite happen that way. I mentioned earlier that design is a messy, iterative process that often loops back on itself. There will be plenty of starts and stops. Plenty of ideas simply won’t work. Each prototype is a deductive test of how well you understand the opportunity space. When you learn that a solution isn’t quite right, you also learn something new about what your constituents need.

If you continue to map out the opportunity space as you prototype your solutions, you’ll constantly be revising your best path to your desired outcome. You might start by focusing on one key moment in time only to find a more compelling opportunity elsewhere in the organization.

The key to a successful design approach is to embrace this uncertainty and enjoy the twists and turns along the way. As long as you are regularly co-creating with your constituents—listening for opportunities in their stories and testing your solutions with them—you’ll be well on your way to driving your desired outcome.

A final note for Product Talk readers: If you have leaders in your organization who want to learn more about how their role changes when they manage teams who adopt a continuous discovery mindset, refer them to our upcoming seminar.
