For data center developers, securing reliable power is the most difficult part of finding a good site for a new facility. Even when a developer finds a parcel that checks all the other boxes – land cost, permits, fiber access, water availability – delays in getting a sufficient grid connection often kill the project.
That’s excruciatingly frustrating for developers who are trying to build the facilities and infrastructure to win the global AI race. And for many developers, the utility’s process for reviewing large load requests feels like a black box, leaving them with minimal insight into whether the utility will be able to serve a new data center. Across the country, sites with ready-to-go utility access are rapidly dwindling.
We’ve worked closely with utilities to analyze their local grid capacity and integrate flexible resources – on both the generation and demand sides. For the past 12 months, data center interconnection has been the hot topic. Here’s a quick recap of what we’ve learned about why it takes so long to connect data centers to the grid – and what we believe developers should do about it.
For decades, load growth in the U.S. was slow or non-existent. Now utilities are getting hit with more growth in a single year than they used to see in ten or twelve. Much of that new demand is coming from large, fast-moving data center customers.
Across the country, utilities are seeing a surge in requests from developers hoping to secure grid power for proposed facilities. McKinsey estimates that AI data center demand will grow 3.5x from 2025 to 2030, reaching 156 GW worldwide – with the United States as the fastest-growing market.
The volume of requests has already far outpaced what most utilities have ever managed before, and queues are ballooning as a result. In Texas, CenterPoint Energy reported a 700% increase in large load interconnection requests, growing from 1 GW to 8 GW between late 2023 and late 2024. Utilities like ComEd, PPL, and Oncor are reporting more gigawatts of data center applications than their historical peak demand.
Similar dynamics are playing out across the country, where grid planners now regularly see tens of gigawatts of new large load requests in their active queues.
The biggest driver of delay is simple: our power system doesn’t have enough extra transmission capacity and generation to serve dozens of gigawatts of new, high-utilization demand 100% of the time. Data centers require round-the-clock power at levels that rival or exceed the needs of small cities, and building new transmission infrastructure and generation requires years of permitting, land acquisition, supply chain management, and construction.
Unfortunately the pace of building new transmission lines has fallen sharply in the past decade, even as the grid becomes more congested. Per a 2024 Grid Strategies report, an average of 1,700 miles of new high-voltage transmission was built annually in the U.S. from 2010 to 2014. That dropped to 925 miles from 2015 to 2019 and 350 miles per year from 2020 to 2023. Over the past two years, the U.S. only constructed 180 miles of high-voltage transmission.
It’s not just about a lack of power lines. New generation isn’t being built fast enough to serve load growth. In many regions, the only viable path to serving a large data center is to construct both new generation and new transmission. That takes time. From planning and permitting to procurement and construction, upgrades take 5 to 10 years (see the chart below).
Unless we’re able to significantly increase the pace of transmission and generation buildout, developers looking for locations with readily available transmission and generation capacity will be searching for a needle in a haystack.
Even when there is sufficient capacity to serve new data centers with existing transmission infrastructure and available generation, the planning tools available to utilities and the interconnection processes themselves can introduce delays.
Inside utilities, planners and engineers are working diligently to connect new loads. But the tools available to planners were built for extending power lines to new neighborhoods or upgrading equipment as communities grow. They weren’t designed to analyze 50 new service requests of 100 MW each, all while new generation applications pile up. As a result, planners and engineers are overwhelmed; they’re stuck working to review new applications while simultaneously configuring new tools that are better equipped for the scale of this challenge.
And unlike generation interconnection, which has well-defined steps across most ISOs and utilities, the process for evaluating large loads is often much more ad hoc. This makes adopting the right tools much more difficult too. In fact, the majority of utilities and ISO/RTOs are still developing formal study procedures.
One recent success is the updated Large Load Interconnection Study (LLIS) in Texas, major changes to which were approved by the Public Utility Commission of Texas (PUCT) in May 2025. This process clearly lays out the specific requirements for new applications and the steps that transmission operators and ERCOT must take when processing large load interconnection requests. While developers may find other gripes with Texas’ approach to data center grid connections, the establishment of clear rules and processes is a boon to developers and utilities alike.
In many other states, the lack of a consistent process makes it extremely difficult to evaluate projects quickly or fairly.
Without a consistent process and application requirements, speculative developers flood the queues with requests. It’s the same problem that utilities and ISOs ran into with renewable development a decade ago. When the only way for a renewable developer to secure a site was to submit ten interconnection applications and see which one got through first, they did exactly that. The result was a huge burden on planners evaluating requests and enormous backlogs. That’s what we’re seeing start to happen on the load side today.
As Astrid Atkinson, our CEO, noted in Utility Dive: "The speculative aspect is contributing to the queue getting backed up. Everyone is just guessing, because they can’t get good information from utilities about where the capacity is."
To reduce speculative requests and rein in the queues, more utilities and their commissions need to clearly define large load interconnection processes and support modernization of their planning tools to enable faster review of applications.
Many large load connections are delayed or denied because of how utilities evaluate risk. Grid planners are guided by “green books” that outline exactly how to evaluate impacts of changes on the grid. When it comes to looking at a data center interconnection request, these procedures focus on worst-case scenarios: peak system demand, an N-1 or N-2 contingency (such as the loss of a transmission line or major generator), and the data center operating at full load.
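To make that planning logic concrete, here is a minimal sketch of the worst-case screen (all capacities and loads below are invented for illustration): the new load is assumed to run at 100% during system peak, and the connection must survive every single-line (N-1) outage.

```python
# Hypothetical illustration of a worst-case large-load screen; the numbers
# are made up. Planners check whether, at system peak and with any single
# element out of service (N-1), the remaining network can still carry the
# full data center load.

def n_minus_1_screen(line_capacities_mw, peak_load_mw, new_load_mw):
    """Return True if the new load passes under every single-line outage."""
    total_load = peak_load_mw + new_load_mw  # data center assumed at 100% load
    for outage_idx in range(len(line_capacities_mw)):
        remaining = sum(c for i, c in enumerate(line_capacities_mw) if i != outage_idx)
        if total_load > remaining:
            return False  # this contingency overloads the surviving lines
    return True

# Two 300 MW lines serving a 250 MW peak: a 100 MW data center fails the
# screen, even though the intact network (600 MW) could serve it easily.
print(n_minus_1_screen([300, 300], peak_load_mw=250, new_load_mw=100))  # False
print(n_minus_1_screen([300, 300], peak_load_mw=250, new_load_mw=40))   # True
```

The example shows why feasible-looking projects get flagged: the intact system has ample headroom, but the single-contingency case does not.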
Under those assumptions, many otherwise feasible projects are flagged as too risky and forced to wait until enough infrastructure or generation is constructed to ensure reliability even in that worst-case scenario. In PJM, for example, discussions are heating up regarding requiring large data centers to bring their own generation. That could effectively ensure that data centers can’t connect unless they are able to procure generation that has already navigated the ISO’s multi-year interconnection queue.
This planning approach focuses entirely on the supply side of the equation: generation to produce electricity and network infrastructure to transmit it from generator to consumer.
But as peer-reviewed research will attest, utilizing onsite generation or storage to respond to flexible grid capacity can speed up the path to reliable power for data centers and other large loads. Importantly, this approach to flexibility can come in many forms and doesn’t have any impact on core data center operations or server availability.
The fastest approach to powering new data centers, which is gaining momentum among developers and utilities, is to utilize hybrid power – a mix of grid power and on-site generation or battery storage. This allows the data center to power all of its server loads with 99.995% reliability without getting stuck in multi-year waits for new infrastructure or generation to be constructed.
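As a rough illustration of why hybrid supply can reach that reliability figure, consider the availability arithmetic (the 95% and 99.9% values below are assumptions for the sketch, not figures from our sources): if grid curtailment and on-site failures are independent, their unavailabilities multiply.

```python
# Back-of-envelope availability math with illustrative, assumed figures:
# flexible grid power available 95% of hours, on-site generation/storage
# independently covering 99.9% of the remaining hours.

grid_availability = 0.95     # flexible grid capacity, fraction of hours
onsite_availability = 0.999  # on-site backup, when called upon

# Load is unserved only when the grid is curtailed AND on-site supply fails.
unavailability = (1 - grid_availability) * (1 - onsite_availability)
combined = 1 - unavailability
print(f"{combined:.5%}")  # 99.99500%
```

Two modest resources, each well short of five-nines on its own, combine to roughly 99.995% because their failure modes rarely coincide.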
This approach requires the data center to use on-site generation or storage to reduce its demand on the grid during times when the grid is stressed – without any impact to server loads.
Instead of a voluntary “demand response” model, in which the customer reduces its net load in exchange for financial incentives from the utility, the use of hybrid grid + on-site power becomes a regular part of the data center’s operating procedure – allowing the data center to connect years sooner in exchange for adapting its power strategy to accommodate changes in available grid power.
Importantly, this hybrid approach incorporates failsafes, monitoring, and all the modern technology needed to engender trust from the utility. The data center commits to a mix of “firm” grid power (available 24/7/365) and “flexible” grid power (available 90-95% of the time, with on-site generation filling the gaps) through a process known as a flexible interconnection.
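One way to picture the firm/flexible split is a simple per-interval dispatch rule (the facility sizes and signal names here are hypothetical): grid power is drawn up to the firm allocation plus whatever flexible capacity the utility signals for that interval, and on-site resources cover the remainder so server loads never see the curtailment.

```python
# Hypothetical sketch of a flexible-interconnection dispatch rule; numbers
# and signal names are invented. The data center draws grid power whenever
# capacity is available and bridges curtailment windows with on-site
# generation or batteries, so servers are never interrupted.

def dispatch(server_load_mw, firm_mw, grid_capacity_signal_mw, onsite_max_mw):
    """Split one interval's load between grid and on-site supply.

    grid_capacity_signal_mw: flexible grid capacity the utility makes
    available this interval, on top of the firm allocation.
    """
    grid_draw = min(server_load_mw, firm_mw + grid_capacity_signal_mw)
    onsite = server_load_mw - grid_draw
    if onsite > onsite_max_mw:
        raise RuntimeError("on-site resources undersized for this interval")
    return grid_draw, onsite

# 100 MW facility with 60 MW firm. Normal hour: 40 MW of flexible capacity.
print(dispatch(100, firm_mw=60, grid_capacity_signal_mw=40, onsite_max_mw=50))  # (100, 0)
# Stressed hour: flexible capacity cut to 10 MW; on-site fills the 30 MW gap.
print(dispatch(100, firm_mw=60, grid_capacity_signal_mw=10, onsite_max_mw=50))  # (70, 30)
```

In practice the signal, telemetry, and failsafes would be specified in the flexible interconnection agreement; the sketch only shows the accounting.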
The benefit for the data center is a much faster path to reliable power. And for utilities, it’s an approach that gets new loads connected and purchasing grid power sooner, all while providing an approach that utility planners and operators can trust.
If you’re a data center developer, it absolutely makes sense to continue looking for sites with firm, unconstrained grid capacity. But as those locations get harder and harder to come by, a hybrid approach to powering your data center may offer a faster path.
Utilities are already under pressure to move faster. They know that timelines are too long, but they need help to try something new. If you can come to the table with a compelling rationale for supporting a flexible interconnection, you’ll be more likely to find a rapid path to power.
That’s where we can help. At Camus, we work with developers and utilities to identify where hybrid approaches and flexible interconnections can accelerate the path to power, giving both sides the tools and data to move forward with confidence.
Are you a data center developer or utility looking to accelerate grid connections? Get in touch with our team.