I was speaking to a group of young product managers last week about objectives and key results. I assumed during my presentation that much of their work – regardless of the size of their company – is currently focused on implementing AI initiatives. I suspect this wasn’t a risky assumption. It dawned on me that there couldn’t be a better time for us to have a conversation about human-centric ways of measuring “value.” With the AI gold rush on, we are, once again, on the precipice of unleashing a tidal wave of crappy products, services, and experiences on an unsuspecting customer base. Here are a few quick thoughts I shared with the group on how to measure the value of their AI initiatives.
We’ve been here before
We are, once again, in a shiny object frenzy. AI is a game-changing technology. There’s no doubt about that. And every organization has to have “some AI” or they’re not competitive. We saw this first with the advent of the web. Then, as the web became more interactive, we implemented Flash, Silverlight, and Ajax, to name a few technologies of yesteryear. Next came the mobile web, and then apps. We had to have some of those shiny objects too. Yet, in the chase to stay current, so much waste was created. The goal with those innovations, as it is now with AI, is to ship something with the new technology. This is a box-checking exercise. It is not a way to deliver actual customer value. In fact, it often ends up obscuring the actual value your company provides.
Avoiding shiny object syndrome
So how do we avoid repeating the mistakes of years past and deploy AI initiatives that actually make customers and our business more successful? As always, we go back to first principles. What is the problem we solve for our customers? Let’s start there. Imagine your company sells a desktop and mobile email client. You’re in the communication business. Your goal is to ensure consistent, efficient, simple, and prioritized messaging for the users of your software. Their goal is to never miss an important email and to respond quickly and effectively with minimal effort.
I like email because it’s one of those “ancient” technologies whose death has been predicted for decades and, well, here we are still using it all day every day. This presents our email client company with tremendous opportunities. Email has been largely the same for 40+ years. How might we make it better, especially with the opportunity to use AI this time around?
Too many companies have already rushed to add AI features to their email clients. What have we ended up with? In the app that I use, I have an AI-driven composition tool which produces some of the worst, and most obviously artificial, email copy I’ve ever read. On top of that, there’s a translation tool. Since I get emails in various languages, that one occasionally comes in handy.
The wrong question to start with is, “How can we add AI to the product?” Instead, each team should consider what the core use cases are for the target audience they cater to and how they might improve them. This is classic outcomes over output. Our goal isn’t to implement AI. Our goal is to make customers more successful.
Who does what by how much?
All of this brings us back to this fundamental value equation – Who? Does what? By how much? If we successfully integrate AI into our product, which of our users will change what behavior and how much will it change? This is how we start to measure the value of our AI initiatives. Take the email client example again. Instead of trying to write emails for me (please, stop with this) study my usage habits. See who I email often and what I say to them. See how I respond to certain types of inquiries. Look for patterns in my responses.
For example, I constantly communicate with my two business partners, Josh Seiden and Aida Perez. Our conversations are short and follow consistent patterns. A way to make my email usage more efficient is to prompt me, when I start the client, with a notice: “Morning Jeff, you’ve got two emails from Josh and one from Aida. All three seem to request a meeting. Do you want me to respond to them with your availability over the next two days?” (the email client also has a calendar component). How amazing would that be? It would immediately save me writing two to five emails and get the meeting scheduled.
Here’s another example, “Morning Jeff, I’ve noticed every time you get a blog collaboration or link sponsorship email you block the sender and don’t respond, should I just block all of those types of emails going forward?” OMG. Yes, you should.
I hope you see where I’m going with this. Making me more efficient through pattern recognition and predictive actions is valuable to me. It’s a good use of AI and it makes me a more loyal user of the tool. Why? Because it solves a real problem for me. And, because AI is both local and global, it can review my local behavior against the (hopefully anonymized) behavior of other users of the same email client and start to build a set of recommendations for repetitive situations.
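To make the mechanism concrete, here is a minimal sketch of what “pattern recognition and predictive actions” could look like under the hood. Everything here is hypothetical – the event names, senders, and threshold are illustrations, not a real email client’s API – but the idea is simple: log what the user does with each message, count the repeats, and propose an automation once a pattern recurs often enough.

```python
from collections import Counter

# Hypothetical log of (sender, user_action) events observed by the client.
events = [
    ("josh@example.com", "replied_with_availability"),
    ("josh@example.com", "replied_with_availability"),
    ("aida@example.com", "replied_with_availability"),
    ("sponsor1@spam.example", "blocked_without_reply"),
    ("sponsor2@spam.example", "blocked_without_reply"),
    ("sponsor3@spam.example", "blocked_without_reply"),
]

def suggest_automations(events, threshold=3):
    """Propose automating any action the user has repeated `threshold` times."""
    counts = Counter(action for _, action in events)
    return [action for action, n in counts.items() if n >= threshold]

print(suggest_automations(events))
# → ['replied_with_availability', 'blocked_without_reply']
```

A real implementation would obviously condition on far richer signals (message content, sender category, time of day), but even this toy version shows the shape of the feature: the AI earns its keep by noticing repetition, not by ghostwriting prose.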
Solve problems. Change behavior. Win the market.
The definition of success here isn’t “deploy the AI prompt” but rather “Jeff spends 90% less time dismissing emails he never responds to anyway.” “Who does what by how much?” is how you measure the success of your AI implementation. And the only way to set those goals is to understand the current usage of your product and where it’s not meeting user expectations, and then to think of innovative ways this new technology can make those expectations come true.
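The metric itself is trivial to compute once you have a baseline. The numbers below are assumptions for illustration, not real measurements; the point is that “who does what by how much” reduces to a before/after comparison of an observed behavior, not a feature-shipped checkbox.

```python
# Hypothetical before/after measurement: minutes per week the user spends
# manually dismissing emails they never answer.
baseline_minutes = 40.0  # before the AI feature shipped (assumed)
current_minutes = 4.0    # after the feature learned their patterns (assumed)

reduction = (baseline_minutes - current_minutes) / baseline_minutes
print(f"{reduction:.0%} less time dismissing unwanted email")
# → 90% less time dismissing unwanted email
```

The hard part isn’t the arithmetic – it’s instrumenting the product well enough to know the baseline before you ship.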
If you’d like to learn more about testing, validating and measuring the success of your AI initiative, join our upcoming half-day webinar on Lean AI. We’re collaborating with Eric Ries and Lean Startup Co to teach how to de-risk your AI initiative and ensure it’s solving real-world customer problems. Learn more here.