The Next Data Center Crisis Will Be About Power, Not AI

Why grid infrastructure is becoming the real constraint on data center growth
Published: February 11, 2026
Author: Pegah Zaree
For a long time, data centers have been discussed as digital assets. The dominant narrative has focused on efficiency: better software, faster chips, improved cooling, lower power usage effectiveness (PUE). That framing made sense when gains from optimization were large and compounding.
Today, that framing is starting to break down.
As compute intensity increases across AI, high-performance workloads, and always-on digital services, data centers behave less like abstract digital platforms and more like physical infrastructure. Their growth is increasingly governed by thermodynamics, energy delivery, materials, and geography. Optimization still matters, but it no longer defines the boundary.
Modern data centers are tightly engineered environments where power delivery, cooling, redundancy, and control systems operate as an integrated whole. At higher rack densities, heat is generated faster and more locally than traditional air handling systems can remove it. Once rack densities rise into the 30-40 kilowatt range, the required airflow volumes and fan power scale aggressively, making air cooling difficult to justify both physically and economically. At that point, liquid-based cooling becomes the default rather than an optimization.
Water is used because of physics, not preference. Its thermal properties make it far more effective than air for removing concentrated heat loads. As a result, chilled water loops, rear-door heat exchangers, and direct-to-chip cooling are becoming standard in modern facilities. In the most extreme configurations, immersion cooling is emerging.
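A rough calculation makes the gap concrete. The sketch below compares the coolant flow needed to carry 40 kW out of a single rack with air versus water; the fluid properties are standard textbook values, and the 10 K coolant temperature rise is an assumption chosen for illustration, not a figure from any specific facility.

```python
# Back-of-the-envelope: coolant flow needed to remove 40 kW from one rack,
# using Q = m_dot * c_p * delta_T. Delta_T is an assumed 10 K rise.

RACK_HEAT_W = 40_000   # 40 kW rack, upper end of the range discussed
DELTA_T_K = 10         # assumed coolant temperature rise

# Approximate fluid properties near room temperature
AIR_CP = 1005          # J/(kg*K)
AIR_DENSITY = 1.2      # kg/m^3
WATER_CP = 4186        # J/(kg*K)
WATER_DENSITY = 997    # kg/m^3

# Mass flow required: m_dot = Q / (c_p * delta_T)
air_kg_s = RACK_HEAT_W / (AIR_CP * DELTA_T_K)
water_kg_s = RACK_HEAT_W / (WATER_CP * DELTA_T_K)

# Convert to volumetric flow
air_m3_s = air_kg_s / AIR_DENSITY      # ~3.3 m^3/s
water_l_s = water_kg_s / WATER_DENSITY * 1000  # ~1 L/s

print(f"Air:   {air_m3_s:.1f} m^3/s (~{air_m3_s * 2119:.0f} CFM)")
print(f"Water: {water_l_s:.2f} L/s")
```

Moving several cubic meters of air per second through every rack is what makes fan power scale so aggressively; a water loop of roughly one liter per second does the same thermal work.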
This shift improves thermal efficiency, but it introduces a parallel constraint. Liquid cooling systems require pumps, heat exchangers, filtration, and control layers that must operate continuously and redundantly. Water chemistry, flow stability, and system reliability become operational factors rather than sustainability metrics. What initially appears as a cooling decision is, in reality, a site-level systems decision.
For years, improvements in software efficiency and hardware performance per watt absorbed much of the growth in compute demand. Large gains came early. Over time, those gains have become incremental. Software cannot be endlessly simplified. Hardware improvements slow as physical limits assert themselves.
"When efficiency gains plateau, growth stops being absorbed by optimization and starts showing up directly as energy demand."
Meanwhile, workloads continue to scale.
When efficiency gains plateau, growth stops being absorbed by optimization and starts showing up directly as energy demand. Every additional unit of compute requires additional power. At scale, this changes the nature of the problem. At the upper end, a single large data center campus can approach one gigawatt of power demand, comparable to the output of a nuclear power plant. At that level, marginal efficiency improvements no longer change the equation. Infrastructure does.
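A simple ratio shows why the plateau matters. In the sketch below, annual power growth is compute-demand growth divided by performance-per-watt improvement; the 30% demand figure and the efficiency scenarios are assumptions chosen for illustration, not measured industry numbers.

```python
# Illustrative model: plateauing efficiency turns compute growth into
# power growth. All growth rates here are assumptions for the sketch.

def power_growth(compute_growth: float, efficiency_gain: float) -> float:
    """Annual power growth when compute demand grows by `compute_growth`
    and performance-per-watt improves by `efficiency_gain`."""
    return (1 + compute_growth) / (1 + efficiency_gain) - 1

COMPUTE_GROWTH = 0.30  # assumed 30% annual growth in compute demand

for eff in (0.25, 0.10, 0.02):
    g = power_growth(COMPUTE_GROWTH, eff)
    print(f"efficiency gain {eff:>4.0%} -> power demand grows {g:>5.1%}/yr")
```

When efficiency improves nearly as fast as demand grows, power demand barely moves; when the gains shrink to a few percent, almost all of the growth lands directly on the grid.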
Power grids were not designed for individual customers drawing hundreds of megawatts behind a single connection point. As data centers scale, grid connection and distribution become binding constraints. Even when capital and demand are present, timelines stretch. Substations, transformers, and transmission upgrades take years to permit and build. In many regions, interconnection queues and equipment lead times have overtaken land or construction capacity as the primary bottleneck.
Distribution matters as much as generation. Transformers are required not only to step voltage between transmission and distribution levels, but to manage reliability and fault tolerance at scale. Their availability, manufacturing capacity, and installation constraints increasingly shape what can be built and where. Energy demand does not exist in isolation. It must move through infrastructure that is finite, slow to expand, and regionally specific.

At this point, data center growth begins to be shaped by a small number of structural realities:
- Power must be delivered continuously, not just generated
- Grid capacity and interconnection timelines matter as much as capital
- Storage adds flexibility, but does not create energy
- Materials like copper increasingly constrain speed and scale
- Geography and time-of-day pricing shape feasibility, not just cost
Battery energy storage systems are becoming an important part of the data center energy stack. They provide short-term resilience, smooth peaks, and help balance variable generation. They reduce stress on the grid during transient events. But storage does not create energy.
At best, it shifts demand across time. At worst, it adds another layer of capital, materials, and operational complexity. Storage is a flexibility tool, not a solution to absolute demand.
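A minimal peak-shaving sketch illustrates what "shifts demand across time" means in practice. The load profile, battery capacity, and charge rate below are invented for illustration, and round-trip losses are ignored for simplicity.

```python
# Toy peak-shaving loop: charge the battery when the facility draws less
# than a threshold, discharge when it draws more. All numbers assumed.

hourly_load_mw = [60, 55, 50, 50, 55, 70, 85, 95, 100, 100, 95, 85]
BATTERY_MWH = 80       # assumed usable capacity
MAX_RATE_MW = 20       # assumed charge/discharge rate
THRESHOLD_MW = 80      # shave grid draw above this level

stored = 0.0
grid_draw = []
for load in hourly_load_mw:
    if load > THRESHOLD_MW and stored > 0:
        # Discharge to hold grid draw at the threshold
        discharge = min(load - THRESHOLD_MW, MAX_RATE_MW, stored)
        stored -= discharge
        grid_draw.append(load - discharge)
    elif load < THRESHOLD_MW and stored < BATTERY_MWH:
        # Recharge during off-peak hours, staying under the threshold
        charge = min(THRESHOLD_MW - load, MAX_RATE_MW, BATTERY_MWH - stored)
        stored += charge
        grid_draw.append(load + charge)
    else:
        grid_draw.append(load)

print(f"peak without battery: {max(hourly_load_mw)} MW")
print(f"peak with battery:    {max(grid_draw):.0f} MW")
print(f"energy drawn, both cases: {sum(hourly_load_mw)} vs {sum(grid_draw):.0f} MWh")
```

The battery lowers the peak grid draw, but the total energy drawn over the day is unchanged; with real round-trip losses it would be slightly higher.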
As electricity flows increase, material constraints surface. Copper, in particular, becomes a limiting input. It is required for transmission lines, transformers, substations, and internal power distribution. Electrification at scale is not only an energy problem; it is a materials problem.
When data centers, renewables, storage systems, and broader electrification efforts compete for the same inputs, supply chains tighten. Costs rise. Timelines extend. Constraints compound. These dynamics are often overlooked in high-level discussions, but they matter deeply at the infrastructure level.
"As data centers scale, geography becomes a constraint long before technology does."
Not all regions face the same constraints. Energy mixes differ. Grid resilience differs. Permitting frameworks differ. So does pricing. Equally important is when energy is consumed. Data centers are not static loads.

Activity varies across the day, often peaking during business hours and early evening. In some regions, the availability and cost of power at specific hours can be as decisive as total installed capacity. Scaling compute is therefore not just about how much power exists, but when and where it can be delivered reliably.
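A toy tariff calculation shows why timing can rival total capacity. The hourly prices and load shapes below are invented for illustration; real time-of-day tariffs vary widely by region.

```python
# Two load profiles with identical daily energy, costed against an
# assumed time-of-day tariff. All figures are illustrative.

prices = [50]*8 + [120]*10 + [200]*4 + [50]*2      # assumed $/MWh by hour

flat_load = [100] * 24                              # MW, constant draw
peaky_load = [60]*8 + [120]*10 + [150]*4 + [60]*2   # MW, business-hours heavy

def daily_cost(load_mw, price_per_mwh):
    # Each hour contributes load (MW) * 1 h * price ($/MWh)
    return sum(l * p for l, p in zip(load_mw, price_per_mwh))

assert sum(flat_load) == sum(peaky_load)  # identical energy consumed

print(f"flat profile:  ${daily_cost(flat_load, prices):,.0f}/day")
print(f"peaky profile: ${daily_cost(peaky_load, prices):,.0f}/day")
```

Both profiles consume exactly the same energy over the day, yet the peak-heavy one costs roughly 18% more, purely because of when the power is drawn.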
Data center growth has entered a phase where optimization alone cannot carry it forward. From this point on, progress is constrained by physical infrastructure: cooling systems, water availability, grids, transformers, storage, materials, regional energy portfolios, and economics.
This does not make the challenge unsolvable. But it does change its nature.
What was once primarily a software and hardware problem has become an infrastructure coordination problem. Energy, water, stability, and cost set the boundaries long before roadmaps or visions do. Understanding those boundaries is no longer optional. It is foundational to scaling compute in the real world.





