I recently came across an interview with Jensen Huang that’s been bouncing around in my head. Not because of the usual hype about GPU performance or the next generation of chips, but because of something he said that sounds almost too mundane to be true.
He said: the biggest bottleneck for AI right now isn’t computing power. It’s plumbers.
Plumbers, electricians, construction crews—the people who build and maintain the physical infrastructure that AI needs to run. That was his point, and it’s one of those observations that feels obvious once you hear it, but almost nobody talks about.
Let me break down why this matters, and what it tells us about the real state of AI.
Note one: hardware is no longer the gating factor.
For the past decade, the narrative has been “we need more compute.” And it’s true—training GPT-4 or Gemini required clusters of tens of thousands of GPUs. But that problem has been largely solved at the chip level. NVIDIA’s H100 and B200 are absurdly powerful. The next generation will be even more so. The constraint has shifted from “can we make a fast enough chip” to “can we deliver enough electricity and cooling to run it.”
Huang estimates that a single modern AI data center can consume as much electricity as a small city. Building that infrastructure takes years of permits, supply chain coordination, and skilled labor. And that’s where the real friction is.
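The "small city" comparison is easy to sanity-check with a back-of-envelope calculation. A minimal sketch, assuming illustrative figures not taken from the interview: roughly 700 W per GPU (the published TDP of an H100 SXM), a 1.5x multiplier for CPUs, networking, and cooling (a typical power usage effectiveness, or PUE), and about 1.2 kW of average draw per US household:

```python
# Back-of-envelope: power draw of a 100,000-GPU cluster.
# All figures below are illustrative assumptions, not from the interview:
#   - ~700 W per GPU (H100 SXM TDP)
#   - PUE of ~1.5 to cover CPUs, networking, and cooling overhead
#   - ~1.2 kW average draw per US household

gpus = 100_000
gpu_watts = 700
pue = 1.5            # power usage effectiveness
household_kw = 1.2

total_mw = gpus * gpu_watts * pue / 1e6
households = total_mw * 1000 / household_kw

print(f"Cluster draw: ~{total_mw:.0f} MW")
print(f"Equivalent households: ~{households:,.0f}")
```

Under these assumptions the cluster draws on the order of 100 MW, comparable to tens of thousands of homes, which is indeed the scale of a small city.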
Note two: the “plumber” is a metaphor for a whole supply chain.
We tend to fetishize the cutting edge—the algorithm, the architecture, the breakthrough. But behind every AI model is a mountain of unsexy labor: concrete, steel, copper wiring, transformers, cooling towers, network fiber, and the people who install and maintain them. Huang’s point is that these are the actual rate-limiting steps. You can double your chip performance, but if the power grid can’t handle it, or the data center can’t be built fast enough, the performance gain is theoretical.
Note three: this reveals a deep cognitive bias in how we think about progress.
We love the magic—the code that writes itself, the model that generates art. But the plumbing is invisible, and therefore ignored. It’s the same bias that makes us think innovation is purely about ideas, not about execution. In reality, every great idea is constrained by how much concrete you can pour. Huang is essentially saying: stop dreaming about the perfect algorithm and start worrying about the water pipes.
Note four: the numbers are sobering.
According to a recent industry report, the average lead time for a large-scale data center is now around four to six years. Four to six years from breaking ground to operation. Meanwhile, new GPU architectures arrive roughly every two years. That means we're already in a situation where the hardware outpaces the facilities built to run it. And AI model sizes are growing faster than both. The result? A growing fraction of the world's compute capacity sits idle because there isn't enough power or cooling to run it 24/7.
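The mismatch is stark when you line the two timelines up. A minimal sketch using the figures above (four-to-six-year facility lead times, a new architecture roughly every two years):

```python
# How many GPU architecture generations ship while one data center is built?
# Figures taken from the text above: 4-6 year facility lead time,
# a new GPU architecture roughly every 2 years.

facility_years = (4, 6)
gpu_cadence_years = 2

for years in facility_years:
    generations = years // gpu_cadence_years
    print(f"{years}-year build: {generations} architecture generations "
          f"ship before the doors open")
```

In other words, the chips you planned around at groundbreaking are two or three generations obsolete by the time the facility can power them on.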
Note five: talent is also a plumber problem.
When Huang talks about plumbers, he’s not just being literal. He’s referring to the whole ecosystem of skilled workers who make AI operational: data center technicians, electrical engineers, network architects, cooling system specialists. These are not the roles that get the media attention, but they are the ones that determine whether a 100,000-GPU cluster actually works. And they are in critically short supply. It’s not just about building the data center; it’s about running it.
Note six: this changes how we should evaluate AI companies.
Right now, investors and analysts focus on model performance benchmarks: can it write code? Can it reason? But if the bottleneck moves to infrastructure, then the companies that own the grid, the real estate, and the construction supply chain have an overlooked advantage. A startup with a breakthrough model still needs to find a data center to host it. And the big cloud providers (Amazon, Microsoft, Google) have already been locking up prime sites for years. This is a structural moat that has nothing to do with AI algorithms.
Note seven: the solution is not just building more—it’s building smarter.
Brick-by-brick expansion isn’t the only answer. There are efforts to increase energy efficiency, to move models to more efficient inference hardware, to optimize training with sparser architectures. But Huang’s point is that these optimizations, while valuable, are incremental compared to the infrastructure gap. We may need to rethink the design of data centers entirely—perhaps moving computing to where the energy is, rather than the other way around. Imagine clusters in the Arctic near hydroelectric dams, or in the desert where solar is plentiful but water cooling is a problem. That’s not science fiction; it’s the kind of engineering challenge that plumbers will solve.
Note eight: the lesson for the rest of us.
There’s a tendency in any fast-moving field to focus on the glamorous, the visible, the breakthrough. But the real bottlenecks are almost always the boring, the mundane, the overlooked. Huang’s interview is a reminder that the most important work often happens in the background. For anyone building in AI, or investing in it, it’s worth asking: where is the plumbing in your own pipeline? The answer might tell you more about your real constraints than any benchmark ever could.
So the next time someone talks about AI scaling laws or new architectures, think of the plumber. Because without them, the future is just hot air and idle chips.