What it is

Investment research piece arguing that energy is the primary constraint on AI scaling. Documents global data center consumption at 415 TWh in 2024, projected to roughly triple by 2030, with AI workloads rising from 15% to 35–55% of load. Discusses small modular reactors (SMRs) as a 50–300 MW supply solution for hyperscale campuses facing grid interconnection bottlenecks.

Why it matters

Owner-operators planning AI facilities face 200+ MW loads (DGX H100 clusters at a 1.25 PUE) that exceed typical grid interconnection capacity. The “stagnation of grid interconnection capacity,” combined with compute-saturated GPU workloads (700 W TDP H100s running at sustained maximum utilization), forces a rethink of site selection and phased build strategies and pushes operators toward alternative generation such as SMRs to work around supply timeline constraints.
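
A minimal sketch of the facility-power arithmetic behind those figures, assuming the source's ~200 MW IT load for a 20,000-GPU cluster, the 1.25 PUE target, and continuous operation over an 8,760-hour year (variable names are illustrative, not from the source):

    # Facility power and annual energy implied by the source's figures.
    IT_LOAD_MW = 200.0      # IT load (compute, network, storage), per the source
    PUE = 1.25              # power usage effectiveness cited in the source
    HOURS_PER_YEAR = 8_760  # 24 h x 365 d, assuming sustained max utilization

    facility_draw_mw = IT_LOAD_MW * PUE          # 250 MW total site draw
    overhead_mw = facility_draw_mw - IT_LOAD_MW  # 50 MW of cooling and losses
    annual_energy_twh = facility_draw_mw * HOURS_PER_YEAR / 1_000_000  # MWh -> TWh

    print(f"Facility draw: {facility_draw_mw:.0f} MW")       # 250 MW
    print(f"Non-IT overhead: {overhead_mw:.0f} MW")           # 50 MW
    print(f"Annual energy: {annual_energy_twh:.2f} TWh/yr")   # ~2.19 TWh

At that rate a single campus of this size would consume roughly 2.2 TWh per year, about half a percent of the 2024 global data center total cited below.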

Evidence from source

  • Clusters of 20,000 GPUs draw ~200 MW of IT load alone; at 1.25 PUE, total facility draw reaches 250 MW
  • Global data center consumption was 415 TWh in 2024; the IEA projects 1,250–1,500 TWh by 2030, with AI rising from 15% to 35–55% of load (worked through in the sketch after this list)
  • Grid interconnection capacity stagnation cited as structural constraint; SMRs proposed at 50–300 MW scale matched to hyperscale campuses
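
A small worked calculation of the AI load implied by those figures, assuming the 15% and 35–55% shares apply directly to the global totals (names and rounding are illustrative):

    # Implied AI electricity consumption from the source's global figures.
    TOTAL_2024_TWH = 415.0               # global data center consumption, 2024
    TOTAL_2030_TWH = (1_250.0, 1_500.0)  # IEA projection range for 2030
    AI_SHARE_2024 = 0.15                 # AI workloads' current share of load
    AI_SHARE_2030 = (0.35, 0.55)         # projected AI share range by 2030

    ai_2024 = TOTAL_2024_TWH * AI_SHARE_2024             # ~62 TWh
    ai_2030_low = TOTAL_2030_TWH[0] * AI_SHARE_2030[0]   # ~438 TWh
    ai_2030_high = TOTAL_2030_TWH[1] * AI_SHARE_2030[1]  # 825 TWh

    print(f"AI load 2024: {ai_2024:.0f} TWh")
    print(f"AI load 2030: {ai_2030_low:.0f}–{ai_2030_high:.0f} TWh "
          f"({ai_2030_low / ai_2024:.0f}x–{ai_2030_high / ai_2024:.0f}x growth)")

Even the low end of that range implies roughly a sevenfold increase in AI-attributable load over six years, which is the growth that interconnection queues would have to absorb absent alternatives such as SMRs.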

Open questions

  • What distribution architecture changes are required inside facilities designed for 200+ MW IT loads with 1.25 PUE targets?
  • How do SMR deployment timelines compare to utility interconnection queues for 50–300 MW hyperscale sites?