Validating key results is messy

Posted on June 6, 2022.

Key results, the “KR” in OKR, are assumptions. They are your best guess at which user behavior changes, and how much change, indicate you’ve delivered value. Once agreed upon, a team starts working to drive that behavior change. This should initially manifest as experimentation and learning but, realistically, it often starts with software development and broad deployment of features. For teams that do start with learning activities, the work typically focuses on validating feature hypotheses. But what if the behavior change we’re chasing is the wrong one? What if the amount of change is unrealistic? How do we validate our key result assumptions?

An assumption is an educated guess

To make the validation process easier we have to admit to ourselves that we are making guesses about how best to proceed with our product development. These aren’t (usually) blind guesses. They’re informed by historical data, customer insights, competitive analyses, and market research, not to mention our own personal experiences. But if our feature experiments fail to bring about the behavior change we committed to in our key results, how can we be sure whether the fault lies with the feature or with the key result itself?

In many organizations, teams or leaders will first dismiss the experiment itself as flawed rather than admit their idea was wrong. But after a couple of experiments fail to bring the team closer to its desired key results, it’s time to start questioning our assumptions. The most common reaction is to try a variation on the feature, followed by a more significant pivot to a different feature altogether. Rarely will the team question the success metric it set. Yet that metric, like the feature choices, is an assumption too.

Talk to your customers

Stop me if you’ve heard this one. Ok, I know you’ve heard this one. There is nothing more insightful than talking to your customers. This is particularly true when the team is failing to find feature ideas that move the needle. Our experiments could have been textbook examples of landing-page tests, feature fakes, or even Wizard of Oz experiments, but without qualitative insight it’s impossible to know whether we’re pursuing the right goals.

Every learning activity must include some qualitative insight. It is imperative to understand not only what people are choosing to do with your features but also why they’re doing it. Their motivations will often reveal that while we want them to behave a certain way, they have no intention of ever doing that. It’s at this point that our key results have to change.

This is evidence-based course correction

The conversation with your stakeholders about changing your key results can get messy. In all likelihood, someone is on the hook for the goals you committed to. It’s crucial that you approach this conversation with evidence. Without it, you risk your team coming across as uninterested in working hard to shift a difficult customer behavior.

However, if you show up with evidence from experiments, customer testimonials, and data showing that, any way you slice it, your users will not significantly shift their behavior toward your desired key result, you stand a much better chance of success. After all, this is the agility we seek as an organization. Sometimes we need to change what we’re working on. And sometimes we need to change what we’re working towards.