Learning > Winning: Rethinking Experiment Failure

I was chatting with one of our Certified Training Partners, Jim Sammons, earlier this week. He shared a story about a team he worked with that was practicing product discovery, but because their experiments kept failing, they were getting dejected about the process. In the end, they abandoned the discovery work because of this continuous failure. As we discussed what to do in this situation, I have to admit, I got a bit fired up. I didn't get angry, mind you, not at Jim or at the situation, but at the idea that failure is a bad thing. Here is a quick tl;dr of my response and what to do if you find yourself in a similar place.
Learning is your superpower
Running lightweight experiments to test your assumptions is one of the smartest and easiest ways for teams to ensure they're working on things that will actually make a difference to their customers and the business. But, to be clear, running experiments doesn't mean you're going to be right. In fact, especially in software product development, the odds are you will be wrong to some extent. That insight is exactly what teams should be striving for. What work can we stop doing? What work should be done differently? What work aren't we doing that we should be? Every time we get a piece of data like that, we can adjust our trajectory to be slightly more accurate and, ultimately, more valuable to our customers.
Teams who do product discovery work regularly have a superpower: they are continuously learning. That is a remarkable advantage over teams that don't. It's not a failure when your experiment doesn't yield the results you expected. It's a learning opportunity and a way to get smarter about your work. Look, I get it, no one comes in to work every day excited about failing. However, we should all look forward to learning something new every day. It might seem like a matter of semantics, but it's a powerful one.
If you’re not learning, something is wrong
Which brings me to the core issue in Jim's story. The team he was working with discarded product discovery work because "they kept failing." This is a red flag for me. If a team is running experiments and every single one is failing, something is wrong with the way they're working. One possibility is that they're designing and executing their experiments poorly. Another is that they're testing the wrong things. A third is that they aren't analyzing their results properly. There are others but, regardless of the cause, there is a coaching opportunity here.
If you are leading, working with, or coaching a team like this and you realize that every experiment is failing, it's time for a pause. The team leader or coach needs to run a retrospective on what's been happening. What experiments did we run? How did we run them? What kind of data came back? What did we do with that data? What could we be doing differently?
Sometimes this will require a seasoned product discovery coach to truly analyze the root causes. In other situations, the team may need training in modern discovery methods. Or perhaps something else is keeping the team from learning continuously, such as a personnel or management issue, which would require leadership to step in. In all of these scenarios, it's not the product discovery process itself that is hurting team morale or productivity. It's the way the process is being deployed and integrated into the rest of their work.
Learning may frustrate, but it's never a bad thing
I love my ideas. I think they're awesome. I bet you do too. When I find out that one of my ideas is wrong, I get frustrated at first. That's natural. After a short while, though, I'm excited to get back to it and not waste my time (or anybody else's) going down a path that isn't going to achieve our goals. This is the power of learning. If your product discovery work is showing you the next step, you're doing it right. If not, take a deeper look at what's happening. It's likely a process or execution issue rather than a failing of the methodology itself.