Earlier this year I held a webinar panel with Sense & Respond Press author Randy Silver about his new book, What Do We Do Now? On that panel was my former colleague Selena Hadzibabic who, today, is the vice president of product at The Knot Worldwide. During our discussion, Selena said something that caught my attention. “At The Knot, we optimize for decision making rather than process.”
This was interesting to me because so many of the companies that I’ve seen and worked with focus on the opposite. Their goal is to “deploy agile.” The training they offer their teams is designed to get the process rolled out and anything that falls outside of the training manuals is often shamed as, “not agile.” I decided to reach out to Selena in order to learn more about how she puts a focus on decision making with her teams.
My first question was, “What does it mean to optimize for decision making?” Selena started by setting context, “It doesn’t mean we don’t have any process. Optimizing the process means tweaking ever more prescriptively the steps of the process in an attempt to make the process more accurate. We are more interested in how light we can make the process so that it still provides the necessary guidelines that allow a group of people to organize and do anything together but not so much that the process ends up impacting the quality of the decisions that we make.”
This was an important clarification because even the absence of process is a process (which reminded me of the great Canadian poet Geddy Lee when he said, “If you choose not to decide, you still have made a choice.”) Selena and her teams work to create the lightest possible process — one that doesn’t hinder their ability to adjust it — so that they still have structure but not rigidity. Instead, they focus on agility.
Doubling down on that, Selena added, “If you lock [your process] down too much, you are setting yourself up to work with incomplete information and to make the wrong decisions.” The point she was highlighting was that if there was an activity that fell outside of the prescribed process but that activity would provide insight the team needed to make a better decision, their process was flexible enough to accommodate that work.
There is uncertainty in every product decision, regardless of how tactical or strategic it is. Selena continued, “For each of [sic] option, you need to understand what is the impact of doing this. What is the good or bad it’s going to do to our customers? What is the good or bad it’s going to do to our business? What is it going to cost us to do this? What is it going to cost us to do this now? What is it going to cost us to maintain it later?” To get these answers the teams at The Knot follow a fairly consistent way of working for the majority of the time, but not all of it.
“I think it’s fair to say that 80% of the time, your core team is there. There’s a product person, a designer, a tech lead, probably a handful of the same engineers that run with you quarter after quarter. There is a perception of stability in that process because 80% of the time, you’re talking to 80% of the same people.” But when that consistent group of folks can’t provide the answer, flexibility (agility?) comes into play. “It’s so important to remain flexible and to recognize sometimes you’ll be asking a new question and nobody on your team knows the answer and nobody is the expert. Suddenly, in that moment, your process needs to involve a completely new who, a completely new what, and a completely new, not previously scheduled or planned, time or possibly context. It might be a person that’s inside your company on a different team. It might be a person that’s not even inside your company. It might be doing user research and talking to customers or it might be hitting the road and talking to a couple of product leaders at another company that have solved a similar problem,” she added.
This is a compelling approach to building ways of working for your teams, but experience tells me (and likely you too) that the naysayers aren’t far behind when this kind of flexibility is proposed. Usually it comes with the critique, “That’s not agile!” And, to some extent, they’re right. The changes you need to make to standard scrum and agile playbooks to make the best decisions will fall outside of what has traditionally been called “agile.” In these instances, Selena and her team look at their process the same way they look at their products: “The process itself is also a product. My initial response to that is you’re optimizing for output over the outcome. You’re married to the plan. A process is a meta-plan. It’s a plan for how you’re going to make a plan. It’s taking all of this agile thinking, which is about staying open to learning more information, and shoving it into a waterfall box.” Instead, the teams focus on agility rather than Agile: on “a collection of values.”
Des Traynor, cofounder of Intercom, illustrated this well when he described their process as, “Start with a problem. Think big. Start small. Learn fast. Deliver outcomes. That’s not really a process. That’s a set of values. I would argue that is in fact, the perfect process because it encapsulates what you want the team to accomplish in any phase of the planning process but you are not stressing, “What does it mean to think big? What is big enough? How big?” It doesn’t say, “In order to think big, I want you to get into a room with post-it notes and sharpies.” It doesn’t say, “In order to think big, I want you to go to our intra-company portal and submit to idea central wherein the ideas will be reviewed by the product team.””
These process guidelines serve as a framework for the teams as they execute the work, but as you work your way up the organization, product managers, stakeholders, and executives will stress the need for a process to help them prioritize. Selena reacted, “The concept I think about a lot is diminishing returns on making up accuracy. Let’s say we’re talking about potentially building something completely new. It’s a new problem. We’re really just starting to understand it and we’re saying things like, “As we were talking to a customer recently, something new came up. Then we started this survey about it. It turns out it looks like 2/3 of our customer base has this problem. [This has potential!] [At the same time,] we also found a point in the conversion funnel that became suboptimal. We knew exactly what happened and what we want to do. We’re going to get a 12.5% lift here.” Those are two different levels of confidence.” The reality is that most organizations, Selena’s included, aren’t going to build something based strictly on a survey. They’ll do more research to determine whether the investment is worth it. And compared with the survey-backed hunch, the clear 12.5% lift in a key metric is the more compelling case.
So, if organizations need more research before investing in new ideas, the processes they use should optimize for collecting evidence. The goal, of course, is to find a balance, because there’s a risk in waiting too long to collect “all” the evidence before moving forward. This way of working further requires that the decision-makers in your organization admit that they don’t have all the answers, which is not as easy as it may sound. Selena points out that this way of working succeeds in high-trust environments: “If you get to a point where your decision is as good as it can be for the next three months, that’s really all you need.” These decisions, ultimately, are reversible, which further reduces the risk you’re taking by making them.
It’s important to point out to teams trying to work this way that the evidence-collection process, the research, is actual work. It takes time, someone has to do it, and that time has to be accounted for in planning.
Our conversation then turned to consistency across teams. Optimizing for decision making may work well in smaller organizations, but what about the enterprise? Can this scale? How do you maintain consistency across teams? Do you need to?
“I am starting to think that maybe consistency is overrated. The kind of consistency that I think matters is fuzzy. I think consistency of rigor is really interesting. What I’d hate to have is two teams where one of them has vetted and debated and looked at data and been honest with themselves about what they know, what they don’t know, where they’re putting a finger in the air and saying, “This would just make sense but let’s write it in red font so we remember we made it up.” Then have a second team that blindly says, “This feels right.” You want some consistency of rigor. What if you have a team that’s starting something greenfield and a team that’s optimizing a very mature product? Should they really follow the same process? The greenfield team should spend most of their days talking to customers. The mature-product team should have regular check-ins with customers but actually spend a lot more time on data analysis, looking for those micro-hacks where you’re tweaking the last half percent out of your conversion funnels. The day-to-day does look different. The planning process looks different.”
The evidence you’re collecting along the way is used to weave a story. You tell that story to your executives, your teams and your customers. The story has to make sense though. Force-fitting pieces of evidence together won’t convince a team to move forward with the ideas you’ve come up with as a product manager. Clearly pointing out where you have high levels of confidence and where there is greater uncertainty is far more beneficial for the team as they decide where to spend their time.
What does your process look like? How much of it is spent collecting evidence? What do you prioritize? Let me know in the comments.