The Truth Curve

Posted on September 12, 2022.

Experimentation has always been a challenge for teams. Even teams who have experience writing hypotheses and designing experiments struggle to spend the “right” amount of effort on their learning. Spend too little and you risk wasting a cycle and not learning much. Spend too much and you may miss your learning window or end up investing so much in the idea that course correction becomes a near impossibility.

One of the tools that I’ve been using and teaching for years to help determine how to spend an appropriate amount of effort on an experiment is The Truth Curve. Originally framed by Giff Constable in his book, Talking to Humans, the Truth Curve is a simple visualization designed to force the conversation to reconcile “how much do we know?” and “what’s the next step?”. Since Giff first published his book, The Truth Curve has taken on a life of its own as it has been used, interpreted and reshared by several other folks in the product management and agile communities.

In this month’s newsletter I want to take a look at The Truth Curve to show how it helps teams determine when and where to spend as well as when and where to stop spending. I also want to give you two prompts to help you use it effectively at your next iteration planning meeting. 

First and foremost, behold The Truth Curve:

The Truth Curve contains two axes. The Y axis represents the amount of evidence you have to justify working on your current hypothesis. The higher up you go, the more confident you are that the hypothesis is valid. The lower you go on the Y axis, the less confidence the team has that this hypothesis will work as originally conceived. 

The X axis represents an increasing level of effort and scope dedicated to the experiment you’re currently running to test the same hypothesis. The further to the right you are on the X axis, the more time, effort and money is being spent on the idea. 

The green curved arrow (aka The Truth Curve) represents the path teams should follow as they test their hypotheses, collect evidence and learn where each hypothesis holds and where a course correction is needed. If your work falls above the green line (lots of evidence, lower investment) your team is at risk of analysis paralysis. You’re testing and learning but not acting aggressively enough on what you’re learning. 

If your work falls below the green line (lack of sufficient evidence, heavy investment) your team is taking unnecessary risks by building out a solution it doesn’t yet have the evidence to justify. 

If you’re starting out on a brand new hypothesis without any significant previous evidence you are in the pink part of the diagram, aka The Land of Wishful Thinking. You’re hopeful that your hypothesis is right but the lack of evidence from the market places you on the bottom left corner of the curve. Here, your goal is to learn as cheaply and quickly as possible whether there’s a “there there” in your hypothesis. The typical experiments you run at this point will be customer interviews, surveys, or paper prototypes. As you begin to collect more positive evidence from lightweight experiments, your level of confidence goes up. With that increased confidence comes the justification for greater effort on the next experiment. It might end up being a clickable prototype or a Wizard of Oz experiment. 

Each time you run an experiment and get positive feedback, the team asks the same two questions:

  1. What’s the most important thing we need to learn next?
  2. What’s the least amount of work we need to do to learn it? 

At some point and with enough positive feedback from the market the only way to learn the next most important thing is to build shippable software. However, this is also the most expensive and riskiest way to learn. We should be sufficiently high on the Y axis before committing to writing code. 
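The two-question loop can be sketched as a toy decision rule. To be clear, the confidence thresholds and the exact ordering of the experiment ladder below are illustrative assumptions on my part, not part of the Truth Curve model itself:

```python
# A toy sketch of the Truth Curve decision loop. The thresholds and
# the experiment "ladder" are illustrative assumptions, not canon.

# Experiments ordered from cheapest to most expensive, paired with
# the minimum confidence (0.0-1.0) needed to justify each one.
EXPERIMENT_LADDER = [
    (0.0, "customer interviews"),
    (0.2, "survey"),
    (0.35, "paper prototype"),
    (0.5, "clickable prototype"),
    (0.65, "Wizard of Oz"),
    (0.85, "shippable software"),
]

def next_experiment(confidence: float) -> str:
    """Answer question 2: the least amount of work that can still
    teach us the next most important thing is the highest rung on
    the ladder our current evidence justifies."""
    eligible = [name for threshold, name in EXPERIMENT_LADDER
                if confidence >= threshold]
    return eligible[-1]  # the most ambitious rung we've earned

print(next_experiment(0.1))   # little evidence: customer interviews
print(next_experiment(0.9))   # strong evidence: shippable software
```

The point of the sketch is the shape of the rule, not the numbers: writing code sits at the top of the ladder precisely because it is the most expensive way to learn.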

As you learn, the questions change

One fascinating aspect of the Truth Curve is that as you move up and to the right, the questions you’re asking about your hypothesis change. In the early (cheap/low effort/low confidence) days of your idea you’re trying to find problem/solution fit. In other words: does the thing we’re building solve a real problem for a real customer in a meaningful way? 

If the evidence comes back positive and you continue to scale up and to the right, the team starts to look for product/market fit. At this stage you’re testing whether this is a sustainable business for you to be in and whether it’s worth scaling this functionality broadly. 

The question the team is working on shifts from “Should we build it?” to “Can we build a sustainable business with this idea?” (or can we grow our existing business in a sustainable way?). 

If the evidence is negative, stop and correct your course

One thing to keep in mind as you start to use this model for determining your learning efforts: if, at any point, you start to get negative feedback about your hypothesis, if the data you’re collecting points to flaws in your assumptions or a change in your target audience’s needs or behaviors, you have to stop. You have to reassess whether this is still an idea worth pursuing and, if so, how you’ll pivot to keep it relevant in the new market conditions you’ve uncovered. In many cases, this may be a signal to kill the idea and move on to the next hypothesis in your backlog. 

Don’t ignore the data you get from your experiments. Map them to the Truth Curve and only then decide if and how you’ll move forward with your next learning effort.