The intelligence / impact dichotomy
We have normalized what is incredible
One of the evolutionary superpowers of humans is that we are incredibly adaptive.
The flip side of this is that we can take things for granted, because we normalize them.
Every day, we turn on the bathroom lights, brush our teeth and step in the shower. It's pretty mundane.
But this is just one example of how our lives are built on the marvels of modern electricity, medicine and plumbing. These things are ordinary to us, but would be truly astonishing to our ancestors.
AI is fast becoming one of those things. Today, we can share a photo with a machine and ask it to write a poem about it. Within moments, it returns something better than most humans could produce in an hour.
It's more than entertainment. Today, frontier AI models can solve complex coding problems at a world-class level, contribute at the frontier of scientific research, and substantially outperform medical students in assessments.
Just a few years ago, this would have been unthinkable for most of us. Now, with strong open-source models performing comparably, intelligence is becoming a simple utility.
We need more than model intelligence
For ambitious companies doing important innovation, the bottleneck used to be talent, skills and work ethic.
Now, anyone can work with world-class intelligence that produces output at extraordinary speed and negligible cost compared to equivalent human labor. As a rough order of magnitude: frontier models can generate a well-structured 2,000-word analysis in under a minute for a few cents — work that might take a skilled consultant several hours at several hundred dollars. The precise ratio varies by task, but across coding, writing and analysis benchmarks, the order of magnitude is consistent: roughly 100x faster and 100x cheaper.
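The order-of-magnitude claim above can be sketched with a quick calculation. The specific times and prices here are illustrative assumptions drawn from the example in the text, not measured benchmarks:

```python
# Illustrative figures for the 2,000-word analysis example above.
# All numbers are rough assumptions for order-of-magnitude reasoning.
model_minutes = 1             # frontier model: ~1 minute to generate
model_cost_cents = 5          # ~a few cents of API usage
human_minutes = 3 * 60        # skilled consultant: ~3 hours
human_cost_cents = 400 * 100  # ~$400 in billed time

speed_multiple = human_minutes / model_minutes       # 180x faster
cost_multiple = human_cost_cents / model_cost_cents  # 8,000x cheaper

# The exact ratios swing widely by task, which is why the text settles
# on a conservative ~100x for both speed and cost.
```

Under these assumptions the specific example lands at or above 100x on both axes; the 100x figure is the conservative floor, not the ceiling.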
The companies building the foundational model layer are scooping up capital at an ever-increasing rate - investing tens to hundreds of billions of dollars into making these models even more powerful, fast and cheap.
That work is important - but we should remember that intelligence is not the end goal.
We already have a technology with a theoretical 100x multiplier potential in both velocity and efficiency. (Crudely, multiplying the two together, we could even consider that an overall 10,000x multiplier on productivity - but for this essay we will consider it a 'mere' 100x.)
This could be a civilization-scale delta which is already in our hands.
Why are we not seeing a civilization-scale impact?
The reality outside of the tech bubble
Those of us in the tech bubble really "feel the AI".
Most people don't feel it in the same way.
- 80% of executives say they've seen no impact from AI on employment or productivity in the past 3 years
- On average, workers estimate that AI saves less than 2% of their time
For those of us immersed in tech, who've "felt the AI" and are ambitious for the world, this is a problem worthy of obsession. What's stopping this 100x technology from having impact anywhere near the same size?
It's tempting to say that this is simply an adoption gap. People need to use AI more, or use it better.
If we challenge ourselves to think bigger and bolder, I think we can find a bottleneck that's much deeper and more profound than simply 'adoption'.
The bottleneck is bigger than adoption
A thought experiment in productivity hemorrhage
To understand why the bottleneck is not adoption, we'll run a thought experiment.
Let's imagine a fictitious Fortune 500-scale enterprise, 'Dunane & MacMillan' (DMM). DMM is a global provider of knowledge work services (e.g. software development, legal services or management consulting) - exactly the type of work at which today's frontier models already excel.
In this fictitious, optimistic and somewhat magical scenario, DMM's CEO has set a clear strategy to go 'all in' on AI-driven operations and productivity (or 'reinvention', perhaps). We will assume the whole workforce embraces this enthusiastically, having all woken up as talented, conscientious and AI-proficient individuals.
How much more productive would that company be?
As a benchmark, recent history has the Fortune 500 growing revenue at ~5% annually, with strong performers hitting 10-15% (i.e. 1.10x-1.15x).
In this thought experiment, we don't have actual data on what would happen - but we can sketch an illustrative scenario to see where reasonable assumptions lead.
- It almost certainly wouldn't be a 100x revenue increase, or anything close to that.
- It could conceivably be a 10x of some sort... but a 10x on what?
- Perhaps the most conceivable 10x is "the company would grow 10x more quickly than it has" rather than "the company's absolute revenue would increase 10x"
- On current benchmarks, this would optimistically mean 100-150% revenue growth - equivalent to only a 2.0-2.5x multiple on headline revenue
- A 10x increase on headline revenue would require roughly a 100x increase in growth rate, which seems deeply improbable
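The arithmetic behind these bullets can be made concrete with a short sketch. The 10% baseline is the 'strong performer' figure cited earlier; the other numbers follow from it:

```python
# Baseline: a strong Fortune 500 performer growing revenue ~10%/year.
baseline_growth_pct = 10

# Optimistic reading of "10x": the company grows 10x *faster* than before.
ai_growth_pct = baseline_growth_pct * 10    # 100% annual growth
revenue_multiple = 1 + ai_growth_pct / 100  # headline revenue: 2.0x

# Stronger reading of "10x": absolute revenue grows 10x, i.e. +900% in a year.
# Against a ~10% baseline, that implies roughly a 90-100x jump in growth
# rate, which is why it seems deeply improbable.
implied_rate_multiple = 900 / baseline_growth_pct  # 90.0
```

Even the optimistic reading compounds out to only a low single-digit multiple on revenue, which is the gap the next paragraph quantifies.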
As such, AI - which we've taken to have 100x productivity potential - might, even under generous assumptions, yield a mere 2.0-2.5x productivity increase (or less!) at enterprise scale.
The precise numbers matter less than the direction: even in a wildly optimistic scenario, we're losing almost all of our AI multiplier. This thought experiment shows that we have an AI productivity hemorrhage.
This is what we call the Post-Intelligence Bottleneck. Extrapolating (crudely) out to a national or global economy, we can imagine that - even with otherworldly adoption assumptions - our productivity gains are negligible relative to 'civilization-scale impact'.
So, there's something huge that we're missing out on - and it's much more than an intellectual puzzle. This is a real loss of outcomes for the world.
History can teach us about solving this bottleneck
This thought experiment shows that there's a Post-Intelligence Bottleneck that isn't simply 'adoption'.
We might try to explain it all with something like 'organizational drag' - factors like hierarchy, bureaucracy and complacency. We might be tempted to accept as brute fact that these are going to kill most of the potential productivity gains from AI.
I think we can be more optimistic than this.
In particular, I claim that the Post-Intelligence Bottleneck is, to a significant degree, an infrastructure gap.
It turns out that this is entirely consistent with economic history: transformational technologies often require complementary infrastructure before society can realize the promised benefits.
In the next sections, we'll go deeper into this - first, studying the related economic history; and second, unpacking the Post-Intelligence Bottleneck as an infrastructure gap.