How to improve your process using product thinking

Posted on March 9, 2015.


Lean product development is focused on just that — product. All of my first-hand experience with these ideas comes from testing and proving them on products or services. Recently, I had the opportunity to see how well these ideas work in figuring out how to create a better process. I know what you’re thinking. Isn’t this the original purpose of Lean? To improve process? Yes. However, the opportunity with this financial services client was to take the ideas behind Lean and Lean Startup, plus components of good (lean) UX work, to improve the way the digital sausage is being made.

In this case, we worked with a client who was taking steps to move from a big, bloated waterfall approach to a more agile and iterative one. At the time, they were building digital products in a way that was slow and laborious and involved many different groups and external vendors. They knew there had to be a better way but, with so many opportunities for improvement, they were at a loss as to where to start and how much to invest in each step.

We decided to put lean thinking to the test on this process improvement project. In a typical lean product project we define the problem statement first. In this case, “make the digital product process better” was too broad of a statement. We had to home in on a more tactical opportunity.

Understanding the current state

Our first goal was to create a shared understanding of the current state of things. To do so we brought 20 leaders from across the company (and across the country) together for 2 days of in-person workshopping. Using customer journey mapping techniques, we had each group in the room visualize their perception of how digital products were developed. They identified not only the steps but the individuals involved and the pain points each was experiencing along the way.

Colleagues, especially those who have been working together for a long time as these teams had been, tend to assume everyone on the team believes the same things. This exercise served us well. It allowed the teams to see where their colleagues diverged from their own thinking and identify opportunities for improvement they weren’t seeing originally.

There were a lot of pain points. That generated a lot of opportunities — far too many for the teams to attempt to fix all at once. To prioritize, we used a technique called dot voting. Each team then shared its top 3 pain points, which consolidated into three main focal points on which the room could concentrate:

  1. Lack of a clear strategic vision and business outcomes for projects
  2. Conflicting processes — fixed time/scope initiatives crammed into “agile” sprints coupled with big upfront design cycles
  3. A lack of user-centric thinking in the conception, development and deployment of products
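The mechanics of dot voting are simple enough to sketch: each participant gets a limited number of “dots” to place on the pain points they consider most pressing, and the tallies determine the focal points. Here is a minimal illustration (the vote data and pain-point labels are hypothetical, not the client’s actual results):

```python
from collections import Counter

# Hypothetical dots placed by workshop participants; in the room this
# was done with physical sticker dots on a wall of pain points.
votes = [
    "no clear strategic vision",
    "conflicting processes",
    "no user-centric thinking",
    "conflicting processes",
    "no clear strategic vision",
    "conflicting processes",
]

tally = Counter(votes)

# The highest-voted pain points become the room's focal points.
for pain_point, dots in tally.most_common(3):
    print(f"{pain_point}: {dots} dots")
```

The value of the technique is less in the arithmetic than in forcing every participant to spend a scarce resource, which surfaces the group’s real priorities quickly.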


Brainstorming potential solutions

Our second goal was to build hypotheses — tactical, testable statements that would allow the team to understand how their process should change, who it benefits, how it benefits those people and how this may ultimately impact the company. These hypotheses would serve as the roadmap for process improvement experiments and would clarify how the team would define success or failure of each attempt to improve.

To build hypotheses we needed to understand the teams’ assumptions. Here’s how we broke out the relevant ones:

1. Business Outcomes When working on products or services, business outcomes usually manifest as measures of customer behavior. For example, increasing the number of repeat visitors month over month. When working on process improvement, these measures of success manifested as efficiency improvements. For example, we had outcomes like “reduce the number of people required to deploy code” and “reduce the time between releases,” to name a couple.
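An efficiency outcome like “reduce the time between releases” is only testable once the team agrees on how it is measured. One simple baseline, assuming release dates can be pulled from a deployment log (the dates below are made up for illustration), is the mean gap between consecutive releases:

```python
from datetime import date

# Hypothetical release dates taken from a deployment log.
releases = [date(2015, 1, 5), date(2015, 2, 16), date(2015, 3, 30)]

# Days between each pair of consecutive releases.
gaps = [(later - earlier).days for earlier, later in zip(releases, releases[1:])]

# A single baseline number the team can try to drive down.
mean_gap = sum(gaps) / len(gaps)
print(f"mean days between releases: {mean_gap}")  # → 42.0
```

Tracking the same number after each experiment gives the team an unambiguous signal of whether the process change helped.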

2. Personas In product development it’s good to have a clear sense of who you’re solving problems for and how it benefits them. Focusing on one or two initial personas usually provides enough consensus and shared understanding for the team to move forward. If the number of personas grows to 5 or 6 or more we’ve often advised that it may be worthwhile to reframe the differences between the personas or consider that you may be targeting too broadly for one product.

With these teams, working on improving their process, there was no explicit limit to the number of actors involved. In fact, as we learned very quickly, their current process required touchpoints from close to 20 different roles. This caught us off guard as the potential for either an exponentially large number of hypotheses or overly complex (and therefore less testable) hypotheses loomed from such a large number of personas. We realized we simply couldn’t make the exercise work with such a large number of “users” to accommodate.

To simplify we employed a 3-tier framework into which the teams had to fit all the various actors. It consisted of a tier for “leaders”, a tier for “managers” and a tier for “core team members.” While there were still outside players in the process, the teams agreed that most of the key members fit into one of these three buckets. Each tier was then referred to as “a persona.” The teams also did a good job not associating the specific names of individuals within those tiers and instead focused on generalizing each tier’s unique qualities.

3. User Outcomes Users seek out products to solve certain problems. They hire them to do a job. Inside large companies, employees seek out processes for much the same reason. It helps them achieve a goal — namely ship product. We took the 3-tier framework and asked the room to call out the needs and obstacles for each one.

  • What was important to each?
  • What helped them get their job done well?
  • What made them successful at what they do?

These were not quantifiable metrics per se but they served as the motivating factors for process participants to seek change and improvement.

4. Tactics Now that we had established why we wanted to change, who had to change and what their motivations were, we finally asked the team to come up with specific ways to make things better: tactics.

We quickly realized that, unlike feature sets which many folks can quickly conceive, process improvement tactics aren’t as readily proposed. To set the stage for this last exercise, we provided the room with a laundry list of tactics and tools that have worked well for us in the past to help alleviate some of the main concerns they’d prioritized. Seeding this conversation perhaps strayed away from the purity of a collaborative brainstorm but, as the experts in the room, we knew that without some guidance the variety and ultimate efficacy of the solutions would be limited.

We took these four buckets of assumptions and composed our hypotheses. We stressed that each statement had to be tactical enough to be testable by a relatively small team so that we could build learning into the hypotheses, adjust based on the insight and then set the team off again on a slightly improved hypothesis.
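Composing a hypothesis from the four buckets can be sketched as a fill-in-the-blanks template. The wording below is one common Lean UX format, and the example values are hypothetical stand-ins drawn from the themes above, not the client’s actual statements:

```python
# Hypothetical helper assembling a testable statement from the four
# buckets of assumptions: business outcome, persona, user outcome, tactic.
def build_hypothesis(outcome: str, persona: str, user_outcome: str, tactic: str) -> str:
    return (
        f"We believe we will achieve {outcome} "
        f"if {persona} can attain {user_outcome} "
        f"by {tactic}."
    )

hypothesis = build_hypothesis(
    outcome="a shorter time between releases",
    persona="core team members",
    user_outcome="fewer handoffs when shipping product",
    tactic="staffing the next project with a smaller, dedicated team",
)
print(hypothesis)
```

Keeping every hypothesis in this shape makes it obvious what to measure (the outcome), who to observe (the persona) and what intervention is actually being tested (the tactic).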

As with product development, there are many risks to process improvement. Some are similar — for example, you may not find any buyers for your new ideas. Some, however, diverge — such as attempting to change process in a culture that fears failure. We asked the team to come up with the top 5 risks for their primary hypothesis to give them a sense of where they should be focusing their early efforts to test that hypothesis.

Finally, the teams generated a couple of experiments to begin testing their process improvement. These included email-based updates to managers and staffing a new project with a much smaller team than before.

In addition to the quantitative business outcomes they wanted to achieve, teams also added in qualitative measures of success like “team morale” and “happiness.”

As we do in product development efforts, though, we had to strongly push the teams to be as “lean” as possible here. Their tendency, as part of a big organization, was to experiment big. Several teams began thinking of code they could write or systems they would have to purchase (think A/B testing platforms, for example) to prove out their experiments. Instead, we urged the teams to work with systems they already had in place (e.g., using the obscure voting feature in Microsoft Outlook to check team morale) and limit their cost exposure as much as possible.

As much as these last steps were similar to the way we’ve applied lean to products and services, we did have to add one last step that often is not required in product development: we assigned owners to every experiment. In product development, accountability comes from making progress on the product, so assigning specific individuals to run experiments is usually not necessary. With process improvement, there is less corporate pressure for immediate progress, and it is usually a secondary activity on top of product delivery and other responsibilities. Accountability had to be explicit. By standing up in front of their colleagues and quite literally signing up for each experiment, the owners dramatically increased the odds of the experiments actually happening.

It seems clear now that, with a few modifications, the framework we’re using to improve the products and services we deliver can also be applied to the processes that create these products. We learned a great deal about how much this framework can stretch over the course of this exercise and have proven that, in most situations, focusing on the user, grounding your decisions in evidence and guiding your work with outcomes yields a successful result.