We Built Platforms to Empower People. Somewhere Along the Way, They Started Extracting From Us.

When the internet first took off, the story we all believed went something like this: technology would democratize opportunity. A kid with a laptop could build a business. A small team could compete with giants. Platforms would level the playing field.
And for a while, it felt true.
Tim Wu’s new book, The Age of Extraction, argues that this dream didn’t just stall — it reversed. The very platforms that started as open marketplaces slowly turned into machines optimized for extracting value from every participant: users, creators, sellers, workers. Power was consolidated. Choice was narrowed. And a handful of companies became the arbiters of economic opportunity.
What I found most compelling is that Wu isn't just talking about Big Tech. He's talking about a broader pattern: when systems scale without guardrails, extraction becomes the business model by default. It's not some villain twirling his mustache; it's the incentives, structure, and inertia we build into our companies that drive this.
If you work in product, this should feel uncomfortably familiar.
AI is accelerating this shift — unless we build something better
We’re in the early days of the AI wave, and the pattern Wu describes is already visible. The data that fuels these models? Extracted, mostly without consent. The attention patterns they optimize? Extracted from users. Even human creativity, something we long assumed was un-extractable, is becoming another resource feeding the machine.
The risk isn’t that AI becomes evil.
It’s that it becomes inevitable — a system that absorbs everything we do into an optimization engine we didn’t consciously design.
Wu’s argument is clear: if we don’t actively choose a different future, extraction will win by default.
But here’s the hopeful part: extraction isn’t destiny
This is the part that resonates with me.
Organizations have a choice. We can build AI-powered products that extract value — from user data, from employee labor, from customer behavior. Or we can build systems that amplify human capability, create equitable outcomes, and reinforce trust.
The difference isn’t the toolset.
It’s the culture.
It’s whether teams are allowed to experiment responsibly, to see real human behavior, to challenge harmful defaults, and to ship outcomes rather than outputs. It’s whether leaders understand that incentives matter more than slogans. It’s whether we build guardrails before we build features.
AI doesn’t automatically create an extractive economy. Poorly designed, misaligned, unchecked systems do.
A lesson for product teams: the future is not the technology — it’s the structure around it
One of Wu’s strongest arguments is that structure beats intention. You don’t fix extraction with better rhetoric. You fix it with:
- Clear constraints
- Transparent incentives
- Cross-functional collaboration
- Accountability to the humans who use your product
This is true for AI adoption, too. The teams that succeed won’t be the ones with the flashiest models. They’ll be the ones with the healthiest product cultures.
The question every team should ask right now
If you’re building with AI today, there’s one question from Wu’s framing that’s worth revisiting:
Are we creating value for people, or extracting value from them?
Most teams don’t ask this question deliberately.
But they should.
Because the future isn’t being determined by the technology companies are deploying — it’s being determined by the cultures they’re building. And that, thankfully, is still something we control.
How’s your company culture set up for integrating AI? Take a look at this free AI cultural assessment model I put together.




