In AI, The Loudest Bottleneck Isn’t Always The Real One

AI founders are racing to secure GPUs — but the real constraint may be electricity.

By Neel Somani | edited by Micah Zimmerman | Mar 27, 2026

Opinions expressed by Entrepreneur contributors are their own.


Key Takeaways

  • The loudest constraint often distracts founders from the real limiting factor.
  • AI success depends more on infrastructure logistics than headline technologies like GPUs.
  • Winning founders identify binding constraints and optimize around what actually blocks progress.

If you want a quick lesson in “the bottleneck isn’t always what’s loudest,” look at the California power market.

For years, politicians called for green energy production. That meant installing renewables like solar and wind and incentivizing them with subsidies. But after all of that effort, evening power prices in California actually went up.

The reason is subtle, but the summary is that the real bottleneck turned out to be batteries: the ability to save that cheap daytime power and use it at night.

Now take that same mental model and apply it to AI.

Everyone is talking about GPUs. What's advertised most broadly right now is companies building out their own data centers. But if you're actually trying to build, you'll keep running into the same question: how should AI companies get the power or the compute they need?

This is where I think founders get misled. They copy whatever the market is obsessing over, and they mistake “what’s loud” for “what’s tight.” In optimization, there’s a more useful concept: binding constraints. What’s the thing that’s stopping you?

Here are three steps I use to find it.

1. Start with the objective function, then ask the only question that matters

I’m going to get a little bit technical here, because this is how I think about the real bottlenecks.

For any pricing problem, you start with an objective function — what’s the price that maximizes total welfare? Total welfare is the area between the supply curve and the demand curve. In other words, we want to give power to the people who want it the most, and we want to produce it for the lowest cost possible.

Then you ask the only question that matters: what are the constraints?

In a simplified production cost model for power, there are three main constraints:

  1. Supply has to equal demand.
  2. No transmission line can exceed its capacity.
  3. No generator produces more power than its capacity.

This model leaves out startup costs, shutdown costs, reliability requirements and regional detail, but it's a useful lens because it spits out a set of shadow prices.

Those shadow prices are the point. They tell you what’s actually tight.
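To make this concrete, here is a toy version of that dispatch model, with invented numbers purely for illustration: two generators, a fixed demand, and the three constraints listed above. Solving it as a linear program with scipy (assumes scipy 1.7+ is installed) returns the shadow prices directly.

```python
# Toy production cost model: two generators must serve 80 MWh of demand.
# Generator 1: $20/MWh, 50 MWh capacity. Generator 2: $50/MWh, 100 MWh capacity.
# (Illustrative numbers only, not real market data.)
from scipy.optimize import linprog

cost = [20, 50]                  # objective: minimize total generation cost
A_eq, b_eq = [[1, 1]], [80]      # constraint 1: supply equals demand
A_ub = [[1, 0], [0, 1]]          # constraints 2-3: generator capacity limits
b_ub = [50, 100]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")

print(res.x)                  # dispatch: cheap unit maxed out, expensive unit covers the rest
print(res.fun)                # total cost
print(res.eqlin.marginals)    # shadow price of the demand constraint (magnitude ~$50/MWh)
print(res.ineqlin.marginals)  # shadow prices of the capacity constraints
```

The duals tell the story: the demand constraint's shadow price is set by the expensive marginal generator, the cheap generator's capacity limit carries a nonzero shadow price because it is binding, and the slack generator's limit prices at zero. Tight shows up in the numbers; loose shows up as zero.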

Founders can do the same thing without building an optimization model.

Write down your objective function in one sentence, then list the constraints that can stop it. List the problems that can actually prevent the outcome.

If your objective is to "ship an AI product that people use," your constraints might be power you can't get, compute you can't secure, a data center you can't build out fast enough to matter, an agreement structure that won't work or unit economics that don't close.

The reason this step matters is simple: if you don’t know what’s tight, you’ll build the wrong thing.

2. Use logistics as your reality check

Logistics has a nice way of forcing you to deal with reality. It doesn't care about narratives. You either procure the supply that you need, or you don't.

Operations researchers at big tech companies are used to this frame of thinking. They'll literally write a linear program or a mixed-integer program to decide how to organize a data center. That degree of modeling might surprise smaller companies.

And when there’s no way to satisfy all of your constraints, the optimization tells you that. Of course, there’s a cost to fixing it. You might have to relax some of your constraints. You can’t always get everything that you want.
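As a sketch of what "the optimization tells you that" looks like in practice, here is a minimal example with invented numbers (assumes scipy is installed): two generators with 150 MWh of combined capacity asked to serve 200 MWh of demand. The solver reports infeasibility instead of a plan.

```python
# Infeasibility check: 200 MWh of demand, but only 150 MWh of total capacity.
# (Illustrative numbers; assumes scipy 1.7+.)
from scipy.optimize import linprog

res = linprog(
    c=[20, 50],             # generation costs per MWh
    A_ub=[[1, 0], [0, 1]],  # generator capacity limits
    b_ub=[50, 100],
    A_eq=[[1, 1]],          # supply must equal demand
    b_eq=[200],
    bounds=(0, None),
    method="highs",
)

print(res.success)  # False: no dispatch satisfies every constraint
print(res.status)   # 2 means the problem is infeasible
```

At that point you relax something: buy more capacity or serve less demand. That trade-off is exactly the business decision the model surfaces.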

That's the whole point: constraints show up in the numbers. If you're building in AI, stop asking, "What is everyone doing?" and start asking, "What would make my objective function move?" In your business, the objective might be deployment timelines, inference cost, latency or the ability to actually get compute online.

This is why I like the phrase “binding constraint.” It forces honesty. It forces you to say, “What is the thing that’s stopping me?”

3. Treat compute like a menu

I’ve seen people reach for a default answer because it’s what’s advertised most broadly: “build a data center.” But there might be other ways to get the compute you need.

When I think about AI companies, I translate the same question into founder language: How should AI companies try to get the power or the compute that they need?

One way is that you get your own GPUs. You find the power. Another option is a data center build-out, which also needs GPUs. You could also use inference providers that already exist, or rent data centers, or get them from somewhere like Crusoe.

It’s a fairly common build, buy or joint-venture consulting problem. And I like framing it this way because it forces you to stop pretending there’s one default path, especially when “what’s advertised most broadly” becomes the roadmap.

So take the menu of options and force a decision by asking two questions of each option: is it the cheapest way to solve our binding constraint, and what new constraint does it introduce? Owning GPUs solves one problem, and then you're back to "find the power."

A data center sounds direct, but if the timeline doesn’t work, then you have a new binding constraint. Renting or using inference providers can avoid one binding constraint, but you might trade it for another.

Stop copying the loud bottleneck

If you remember one thing, make it this: the limiter isn’t always what everyone is talking about.

The first step is always the same: define the objective function, then ask what the constraints are. The commodities markets are a good teacher because the name of the game has always been procurement. Supply has to equal demand, you can only ship so much, and you can only produce so much yourself. When something is tight, the price tells you. When something is loose, the price tells you that too.

AI infrastructure is heading into the same kind of reality. You can obsess over chips, but you still have to find the power. And once you see the problem through that lens, the roadmap gets a lot clearer: pick your spot on the build-buy-joint-venture spectrum, be honest about what's actually tight, and plan around the constraint that's real, not the loud one.
