How to prioritize hypotheses for testing

Posted on July 26, 2021.

A few weeks back I shared a video on how to use the Lean UX Canvas. One of the things I didn’t get into in detail was how to determine which hypotheses you should test, which you should just build, and which you should throw away. Most of us don’t live in a reality where our entire budget can be used on learning activities. We actually have to ship software at some point, so I thought I’d put together a short video that shows how to use the Hypothesis Prioritization Canvas to help sift through which hypotheses should actually get experiments and which should get the bin.

The video is short (~6 mins) and should hopefully help get that next experiment on the backlog. It should also help shift the conversation away from “we can’t possibly test all of this” to “here’s the next most important thing we need to test and learn.” As with all prioritization methods, this one is heavily assumption-based. It asks your team to consider the risk and value of each hypothesis BEFORE you build or even test it. That’s OK, because if your learning activities are lightweight and your team works in short cycles, you won’t have to live with those assumptions for long before you find out whether you were right or wrong.

Take a look and let me know what you think in the comments.

Watch this short video on how to prioritize hypotheses for testing

Have you had a chance to check out my self-paced course on Objectives & Key Results? It’s a great way for you, your team and the whole organization to get up to speed quickly on this goal-setting method that underpins your organization’s agility.

Details and sign-up information are available here.

This course is also available in Spanish.

Transcript

Hi folks. In a previous video, we talked about how to use the Lean UX Canvas. One of the things we didn’t cover in detail in that video was how to actually prioritize the hypotheses that you come up with in the Lean UX Canvas for testing. You shouldn’t test everything. None of us live in a world where the budget we have is strictly for research or strictly for learning. We actually have to ship stuff. The budget, the time, and the resources that we do have for learning should be focused on a specific set of hypotheses, not all of them. This video is a short one to help you determine which hypotheses to test, which ones to throw away, and which ones to just build. Let’s take a look at that right now.

We talked about the Lean UX Canvas in the last video. Once you get to Step 6 in the canvas, you’ve written your hypotheses, and that’s very important. The next question, in box number 7, asks, “What’s the most important thing we need to learn first?” In order to answer that question, we need to determine which hypothesis we want to test first. We do that by working through a tool called the Hypothesis Prioritization Canvas. It’s another 2×2 matrix to help you decide which hypotheses to test, and it’s the framework I’m going to go through today. Let’s take a look at that.

The Hypothesis Prioritization Canvas is a 2×2 matrix. Nothing fancier than that. The x-axis is an axis of risk: low risk on the left side, high risk on the right side. The y-axis is perceived value: high perceived value at the top, low perceived value at the bottom. The language here is perceived value, not actual value, because we don’t know. These are hypotheses. We haven’t built this stuff. We haven’t shipped it. We don’t know if these are valuable. We’re making a series of assumptions that this is going to help the customer be more successful, and if the customer or the user is more successful, then this is more valuable to the business as well.

Risk is going to be contextual to your hypothesis. For example, you might have design risk: you’re redesigning an e-commerce platform and no one has ever shopped online this way; you’re designing a new way of shopping. You might have technical risk: this is a new technology that we have to integrate with a legacy system, and that’s technically risky for us as well. You might have brand or market risk: we’ve been building products for a certain market segment for a long time and now we’re moving to another market segment. Risk is going to be contextual to each hypothesis that you put onto this 2×2 matrix.

You take the list of hypotheses that you generated in box number 6 and you start to map it onto this 2×2 matrix based on how valuable you think each one is and how risky you think it is. The hypotheses that land in box number 1, the ones that are high perceived value and high risk, are the ones that you test. Those are the ones that we’re actually going to work through, design experiments for, and spend the budget, the time, and the resources that we have for learning on, because if we get these right, we stand to have a meaningful impact on the customer and on the organization. If we get them wrong, we also stand to have a meaningful impact on the customer and the organization, just in a negative way. These are the ones where we design experiments and MVPs. Those are the ones we want to test.

If you map your hypotheses and they land in box number 2, where it says “Ship and Measure,” that’s high perceived value and low risk. We’re not going to test those hypotheses. We’re just going to build them, ship them, and then measure whether or not they lived up to our expectations. The hypothesis statement has success criteria in it: an outcome, a key result, a measure of user behavior. If a hypothesis is high value but low risk and we’re fairly confident that it’s going to work, then we ship it and measure it. We just make sure that it lives up to our expectations.
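To make “ship and measure” concrete, here’s a minimal sketch in Python. The feature, the numbers, and the function name are my own illustrative assumptions, loosely following the Lean UX hypothesis template of business outcome, users, benefit, and feature; the point is simply that the hypothesis carries its own success criterion, so after shipping you can check the measurement against it.

```python
# Illustrative sketch only: a hypothesis statement that carries its own
# success criterion, loosely following the Lean UX template:
# "We believe [business outcome] will be achieved
#  if [users] attain [benefit] with [feature]."
hypothesis = {
    "feature": "saved shipping addresses",   # hypothetical example
    "users": "repeat buyers",
    "benefit": "a faster checkout",
    "business_outcome": "repeat purchase rate",
    "target": 0.15,                          # success criterion: 15%
}

def lived_up_to_expectations(measured: float, target: float) -> bool:
    """After shipping, compare the measured user behavior to the target."""
    return measured >= target

# Once the feature is live, plug in the real measurement:
print(lived_up_to_expectations(measured=0.17, target=hypothesis["target"]))  # True
```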

If a hypothesis lands in box number 4, high risk and low value, we’re going to throw that hypothesis away. We’re not even going to work on it. We’re not going to test it. We’re not going to build it. If it’s high risk and we believe it’s low value, why would we spend any cycles on it at all, not just learning cycles but actual development cycles? Any hypothesis that lands in box number 4, we throw away.

The tricky box is box number 3. Basically, anything below the risk line, we don’t test. But box number 3 gets a little tricky. We’re definitely not testing something that is low risk and low value, and in most cases we’re not even building it; we throw it away. There will be situations, though, where a hypothesis describes a feature that we need just to participate in the space we’re playing in. For example, if you’re building an e-commerce system, you need to take payments, so you’ve got a payment system that needs to be built. You don’t need to test it because it’s a fairly standard process and it’s not going to differentiate you in the marketplace, but you do need to take payment. It’s not risky enough that it’s worthy of any kind of test. We’re going to build it. We need it to play in the space; it’s just one of those things. It’s table stakes. We have to have it to play there. No testing.
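To pull the four boxes together, here’s a minimal sketch of the quadrant logic in Python. Everything in it, the `Hypothesis` class, the 0-to-1 scores, and the 0.5 threshold, is an assumption of mine for illustration; on the real canvas the scoring is the team’s judgment call, not a formula.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    TEST = "Box 1: design an experiment or MVP"
    SHIP_AND_MEASURE = "Box 2: build, ship, and measure against success criteria"
    TABLE_STAKES = "Box 3: build only if it's table stakes, otherwise discard"
    DISCARD = "Box 4: throw away"

@dataclass
class Hypothesis:
    name: str
    perceived_value: float  # the team's assumption, scored 0.0 to 1.0
    risk: float             # contextual: design, technical, or brand/market risk

def prioritize(h: Hypothesis, threshold: float = 0.5) -> Action:
    """Map a hypothesis onto the 2x2 and return the recommended action."""
    high_value = h.perceived_value >= threshold
    high_risk = h.risk >= threshold
    if high_value and high_risk:
        return Action.TEST
    if high_value:
        return Action.SHIP_AND_MEASURE
    if high_risk:
        return Action.DISCARD
    return Action.TABLE_STAKES

# Hypothetical hypotheses from box 6 of the Lean UX Canvas:
for h in [
    Hypothesis("Reimagined checkout flow", perceived_value=0.9, risk=0.8),
    Hypothesis("Saved shipping addresses", perceived_value=0.8, risk=0.2),
    Hypothesis("Standard payment processing", perceived_value=0.3, risk=0.2),
    Hypothesis("Blockchain loyalty points", perceived_value=0.2, risk=0.9),
]:
    print(f"{h.name}: {prioritize(h).value}")
```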

The only hypotheses that we test are the ones that land in box number 1: the ones that are high perceived value and high risk. As you think about which hypotheses to test, that’s a really good way to think about it. Use that 2×2 matrix, the Hypothesis Prioritization Canvas, and let me know how it goes. Good luck.