Something unusual is happening in American electricity markets. After a decade of near-flat demand, consumption is surging. The U.S. Energy Information Administration forecasts the strongest four-year growth in electricity demand since 2000, driven overwhelmingly by data centres and industrial electrification. From 2020 through 2026, consumption is expected to grow at an average of 1.7 percent per year, with the commercial and industrial sectors growing at 2.6 and 2.1 percent, respectively. These are numbers the grid has not had to accommodate in a generation.
At the same time, the infrastructure required to deliver that power is not keeping pace. Grid Strategies reports that annual high-voltage transmission construction in the United States has collapsed from an average of 1,700 miles per year between 2010 and 2014 to roughly 350 miles per year between 2020 and 2023. The Department of Energy estimates the country needs approximately 5,000 miles of new high-capacity transmission annually. In 2024, fewer than 900 miles were built. Meanwhile, Lawrence Berkeley National Laboratory's interconnection data shows that projects built in 2023 took nearly five years from interconnection request to commercial operation, compared to under two years for projects completed in the late 2000s. Nearly 2,300 gigawatts of generation and storage capacity sit in interconnection queues, waiting.
This is the macro picture that has led a growing number of energy analysts to conclude that distributed energy resources — rooftop solar, behind-the-meter batteries, flexible loads, vehicle-to-grid systems — are no longer a nice-to-have but a structural necessity. The logic is straightforward: if you cannot build transmission fast enough to move centralised power to where demand is growing, you must generate and store energy closer to the point of consumption. And the economics for doing so have never been stronger.
The Hardware Revolution That Already Happened
The cost declines in distributed energy hardware over the past several years have been extraordinary, even by the standards of an industry accustomed to rapid cost declines. BloombergNEF’s 2025 battery price survey puts global lithium-ion pack prices at $108 per kilowatt-hour, down 8 percent from 2024, with stationary storage packs dropping to $70 per kilowatt-hour — 45 percent lower than the prior year and the lowest-priced segment for the first time. These numbers would have seemed fanciful five years ago. They are the product of massive manufacturing overcapacity, the shift to lower-cost lithium iron phosphate chemistries, and the relentless logic of industrial learning curves that has characterised battery manufacturing since its inception.
Solar module costs have followed a similar trajectory. The hardware itself — the panels, the inverters, the racking — now represents roughly 35 percent of a residential solar installation’s total cost, according to NREL. The remaining 65 percent is soft costs: permitting, design, customer acquisition, installation labour, interconnection, and financing overhead. This ratio is important, and we will return to it, because it tells you something fundamental about where the remaining barriers to scale actually sit.
At the same time, electricity prices are climbing. The EIA projects prices will continue rising through 2026, with commercial rates up nearly 8 percent year-over-year in late 2025. Utilities requested a record $31 billion in rate increases in 2025, more than double the $15 billion sought in 2024. Rising retail rates widen the spread between what customers pay for grid electricity and what they could pay by generating and storing their own, making the simple economics of behind-the-meter assets increasingly compelling for commercial and industrial facilities.
The result is a convergence that distributed energy advocates have been predicting for years: hardware costs falling to the point where distributed assets earn more per megawatt-hour than their utility-scale equivalents, at a moment when the grid desperately needs the capacity. Virtual power plants (VPPs) are beginning to prove this at meaningful scale. Sunrun’s CalReady programme now aggregates 75,000 home batteries capable of delivering 250 megawatts on average during peak events, quadrupling in size from 2024. Wood Mackenzie reports total U.S. VPP capacity reached 37.5 gigawatts in 2025, with residential battery enrolments alone growing 153 percent year-over-year. These are no longer pilot programmes. They are grid-scale resources.
So Why Is Deployment Still Too Slow?
Given all of this — falling costs, rising rates, grid necessity, proven technology — you might expect distributed energy to be scaling at a pace commensurate with the opportunity. It is not. The SEIA/Wood Mackenzie Solar Market Insight reports show that the entire U.S. commercial solar segment installed just over 1 gigawatt in the first half of 2025. That is progress, but it is nowhere near the trajectory required to materially address the demand growth and transmission constraints described above. The distributed energy industry is growing, but it is growing as an industry that adds capacity in small increments — hundreds of sites per developer per year, not thousands.
The standard explanation focuses on policy and permitting: interconnection backlogs, inconsistent net metering rules, slow municipal permitting processes. These are real constraints. But they are not the primary reason that institutional capital has largely stayed away from distributed energy portfolios. The deeper problem is structural, and it has to do with the economics of information.
Consider what it takes to deploy a 100-megawatt utility-scale solar farm. A single development team conducts a single site assessment. They perform one environmental review, one interconnection study, one set of geotechnical surveys. They model one production profile against one offtake agreement. They negotiate one set of contracts, arrange one financing package, and build one project. The transaction costs — all the analytical, legal, and financial work required to turn a development opportunity into an operating asset — are spread across 100 megawatts. On a per-megawatt basis, they are manageable.
Now consider the same 100 megawatts deployed across 500 commercial rooftops. Each site has a different load profile. Each sits under a different utility tariff, often with different rate schedules, demand charges, time-of-use windows, and export compensation rules. Each building has a different credit profile and a different owner with different contractual preferences. Each requires its own site assessment, its own engineering design, its own permit application, its own interconnection agreement. The analytical work required to determine whether a given site is financially viable — and to structure a contract that makes it investible — must be performed 500 times.
This is not a technology problem. The solar panels work. The batteries work. The inverters work. The problem is that the cost of processing the information required to deploy and operate these assets scales linearly with the number of sites, while the revenue per site is modest. The transaction cost per megawatt for distributed portfolios is, by some estimates, five to ten times higher than for utility-scale equivalents. For institutional investors accustomed to deploying capital in $50 million or $100 million tranches, the friction is prohibitive.
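The arithmetic behind that five-to-tenfold gap is easy to make concrete. The fixed-cost figures in the sketch below are illustrative assumptions, not industry data; the point is that repetition of the analytical work, not project size, drives the per-megawatt cost:

```python
# Illustrative comparison of per-megawatt transaction costs.
# All dollar figures are assumed for the sketch, not sourced data.

def per_mw_transaction_cost(fixed_cost_per_project: float,
                            n_projects: int,
                            total_mw: float) -> float:
    """Total transaction cost spread over the portfolio's capacity."""
    return fixed_cost_per_project * n_projects / total_mw

# One 100 MW utility-scale project: one assessment, one set of contracts.
utility = per_mw_transaction_cost(2_000_000, n_projects=1, total_mw=100)

# The same 100 MW as 500 rooftop sites: the analytical work repeats per
# site, even though each individual site is far cheaper to underwrite.
distributed = per_mw_transaction_cost(30_000, n_projects=500, total_mw=100)

print(f"utility-scale: ${utility:,.0f}/MW")      # $20,000/MW
print(f"distributed:   ${distributed:,.0f}/MW")  # $150,000/MW
print(f"ratio: {distributed / utility:.1f}x")    # 7.5x
```

With these assumed inputs the distributed portfolio carries 7.5 times the transaction cost per megawatt, squarely in the five-to-ten-times range cited above.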
The Underwriting Problem
To understand why this friction is so persistent, it helps to walk through what underwriting a distributed energy asset actually involves. This is the process by which a developer or investor determines whether a particular site will generate adequate returns, and it is far more complex than most people outside the industry appreciate.
The first step is load profiling. For a commercial or industrial customer, the value of a behind-the-meter solar or storage system depends almost entirely on how it interacts with the building’s existing electricity consumption pattern. A warehouse with flat load around the clock will benefit differently from a battery than a restaurant with sharp evening peaks. Understanding this requires obtaining and cleaning interval meter data — typically 15-minute or hourly consumption readings over at least 12 months. This data is often incomplete, inconsistent, or provided in formats that require significant manual processing.
The second step is tariff analysis. Commercial electricity tariffs are extraordinarily complex. A single utility may offer dozens of rate schedules, each with different combinations of energy charges, demand charges, time-of-use periods, ratchet clauses, power factor adjustments, and standby fees. The financial value of a solar or storage system is the difference between what the customer pays under their current tariff and what they would pay with the system installed — and this calculation must account for every component of the tariff, including edge cases like demand charge ratchets that can persist for months. Many tariffs change annually. Some are being restructured entirely.
The third step is credit assessment. Unlike a utility-scale project backed by a creditworthy offtaker under a long-term power purchase agreement, distributed energy projects are typically backed by commercial or industrial customers whose creditworthiness varies enormously. A portfolio of 500 sites might include investment-grade corporations, mid-market manufacturers, small retailers, and municipal facilities. Each requires its own credit evaluation, and the portfolio-level risk modelling that investors require adds another layer of complexity.
The fourth step is financial modelling. Each site needs a bespoke financial model that incorporates the load profile, the tariff structure, the proposed system design, the applicable incentives (which vary by state, utility, and programme), the customer’s credit profile, and the expected degradation of equipment over time. The model must produce a return on investment that clears the hurdle rate for the capital being deployed, after accounting for operating costs, insurance, and reserves.
Finally, the fifth step is contract structuring and execution. Each site requires a contract — a power purchase agreement, a lease, or a service agreement — that allocates risk between the developer, the investor, and the customer in a way that all parties can accept. These contracts must be reviewed, negotiated, and executed individually.
Multiply each of these steps by hundreds or thousands of sites, and the cumulative cost becomes the dominant barrier to scale. It is not that any single step is impossibly difficult. It is that performing all of them, at volume, with the accuracy required for institutional-grade investment decisions, is extraordinarily labour-intensive. A typical energy developer might need a team of analysts working for weeks to fully underwrite a single commercial site. A portfolio of 500 sites could take a year or more to assess, by which point market conditions and tariff structures may have changed.
A Market Failure, Not a Technology Failure
Economists have a precise term for what is happening here: transaction costs. The concept, formalised by Ronald Coase and later elaborated by Oliver Williamson, describes the costs incurred in making an economic exchange beyond the price of the good itself — the costs of searching, negotiating, contracting, monitoring, and enforcing agreements. When transaction costs are high relative to the value of the transaction, markets fail to clear. Beneficial trades that should happen do not happen, because the friction of executing them exceeds the surplus they would generate.
This is exactly what is occurring in distributed energy. The returns on a well-sited commercial rooftop solar-plus-storage system are attractive — often superior, on a per-megawatt-hour basis, to utility-scale projects, because they offset retail electricity costs rather than selling into wholesale markets. The grid needs the capacity. The customer wants lower bills. The investor wants yield. But the cost of processing the information required to connect these parties exceeds what any of them can individually justify, particularly when spread across hundreds of small transactions rather than concentrated in a few large ones.
The consequence is that capital cannot respond to price signals at the grid edge. There are commercial buildings in every major metropolitan area where a solar-plus-storage system would generate double-digit returns, reduce the customer’s electricity costs, and provide valuable capacity to a constrained grid. But no one can identify, assess, and execute on these opportunities at a cost that makes the economics work at portfolio scale. The information is too fragmented, the analytical processes too manual, the operational requirements too site-specific.
This is a market infrastructure problem. In mature financial markets, the function of market infrastructure is precisely to reduce the cost of transacting so that capital flows to its highest-returning use. Stock exchanges, clearing houses, credit rating agencies, standardised contracts — all exist because, without them, the transaction costs of buying and selling securities would be prohibitively high. The distributed energy sector has nothing equivalent. Every transaction is bespoke. Every deal is hand-crafted. And so the market remains thin, illiquid, and inefficient, even as the underlying assets become increasingly attractive.
What Software Must Do
If the core problem is transaction costs, then the solution must directly attack those costs. This is where software enters — not as a general-purpose efficiency tool, but as market infrastructure that performs the specific analytical and operational functions that currently require armies of human analysts.
The underwriting pipeline described above — load profiling, tariff analysis, credit assessment, financial modelling, contract structuring — is, at its core, a series of data processing and analytical tasks. The inputs are structured and semi-structured data: meter readings, tariff documents, credit reports, equipment specifications, regulatory filings. The outputs are financial models and risk assessments. The work between input and output is pattern recognition, calculation, and validation — precisely the kind of work that modern AI systems, particularly agentic architectures that can chain multiple analytical steps together with self-validation, are suited to perform.
This is what we have built at Neura Energy. EASOS — the Energy Asset Software Operating System — is designed to automate the entire pipeline from raw site data to investment-ready underwriting, and from there through operational dispatch, monitoring, and portfolio management. The system ingests interval meter data, identifies the applicable tariff structures, models the optimal system configuration, generates financial projections, assesses credit risk, and produces the documentation required for institutional investment decisions. Each step is performed by specialised AI agents that validate their own outputs against known constraints and flag anomalies for human review.
The key insight is not that any individual step is particularly novel. Load profiling software exists. Tariff databases exist. Financial modelling tools exist. The breakthrough is in chaining these steps together into a single automated pipeline that can process hundreds or thousands of sites simultaneously, with the accuracy and consistency required for institutional-grade capital deployment. When you can underwrite a thousand sites as easily as ten, the transaction cost per megawatt drops by an order of magnitude. The economics that were always latent in distributed energy suddenly become accessible to the capital markets.
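The chaining-with-validation pattern can be illustrated in miniature. This is not the EASOS implementation, only a sketch of the pattern: each stage transforms a site record, and a paired check either passes the record forward or flags it for human review. The stage functions and the $0.12/kWh savings rate are hypothetical:

```python
# Sketch of a chained pipeline with per-stage validation. Each stage is
# paired with a check; a failed check routes the record to human review
# instead of propagating a bad number downstream.

def profile_load(site):
    site["annual_kwh"] = sum(site["interval_kwh"])
    return site

def model_finance(site):
    # Hypothetical flat savings rate per kWh offset, assumed for the sketch.
    site["annual_savings"] = site["annual_kwh"] * 0.12
    return site

PIPELINE = [
    (profile_load, lambda s: s["annual_kwh"] > 0),
    (model_finance, lambda s: s["annual_savings"] > 0),
]

def underwrite(site):
    for stage, check in PIPELINE:
        site = stage(site)
        if not check(site):
            site["status"] = f"review: failed after {stage.__name__}"
            return site
    site["status"] = "ok"
    return site

print(underwrite({"interval_kwh": [10.0, 12.0, 9.0]})["status"])  # ok
print(underwrite({"interval_kwh": []})["status"])  # flagged for review
```

Because each stage validates its own output, a thousand sites can flow through unattended while only the anomalies consume analyst time, which is the economic point of the architecture.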
From Underwriting to Operations
Reducing transaction costs at the underwriting stage is necessary but not sufficient. Distributed energy portfolios also face ongoing operational costs that scale with the number of sites: dispatch optimisation, performance monitoring, billing reconciliation, warranty management, and regulatory compliance. A utility-scale solar farm has one SCADA system, one operations and maintenance contract, one revenue meter. A portfolio of 500 commercial systems has 500 of each.
Here again, the challenge is not technological but informational. The batteries work. The inverters communicate. The data flows. What is missing is the ability to process that data, at scale, into operational decisions: when to charge and discharge each battery to maximise value under a particular tariff, how to identify underperforming sites before small issues become expensive failures, how to reconcile hundreds of utility bills against expected production, how to ensure compliance with the specific interconnection and programme requirements that apply to each site.
EASOS extends the same agentic architecture into operations. Dispatch algorithms optimise each battery against its specific tariff structure and load profile in real time. Monitoring systems aggregate performance data across the portfolio and surface anomalies automatically. Billing reconciliation compares expected and actual savings for every site, every month. The result is that operational costs per megawatt for a distributed portfolio approach those of a utility-scale asset — not because the individual sites have become simpler, but because the software handles the complexity that used to require human attention for each one.
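A toy version of time-of-use dispatch shows the shape of the optimisation: charge in the cheapest hours, discharge in the dearest, within the battery's limits. Real dispatch must also handle demand charges, export compensation, and round-trip losses; the prices and battery parameters here are invented:

```python
# Toy time-of-use arbitrage dispatch: charge the battery during the
# cheapest hours and discharge during the most expensive ones, limited
# by how many full-power hours the capacity allows. Illustrative only.

def tou_dispatch(prices, capacity_kwh, power_kw):
    """Return per-hour battery action: +kW charging, -kW discharging."""
    hours = int(capacity_kwh // power_kw)     # full-power hours per cycle
    by_price = sorted(range(len(prices)), key=lambda h: prices[h])
    cheap = set(by_price[:hours])             # charge in these hours
    dear = set(by_price[-hours:])             # discharge in these hours
    return [power_kw if h in cheap else -power_kw if h in dear else 0
            for h in range(len(prices))]

prices = [0.10, 0.09, 0.11, 0.30, 0.32, 0.12]   # $/kWh, assumed
print(tou_dispatch(prices, capacity_kwh=20, power_kw=10))
# [10, 10, 0, -10, -10, 0]
```

Under this tariff the battery charges in hours 0 and 1 and discharges into the two expensive hours; change the price vector and the schedule changes with it, which is why dispatch must be recomputed per site, per tariff.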
Making Distributed Energy an Asset Class
When both underwriting and operational costs per megawatt drop to levels comparable to utility-scale projects, something fundamental changes in how institutional capital views distributed energy. It is no longer a collection of small, idiosyncratic deals that require specialised teams and high management overhead. It becomes an asset class — a category of investment with standardised risk assessment, predictable returns, and scalable deployment.
This matters enormously for the pace of the energy transition. The constraint on distributed energy deployment today is not customer demand, which is robust. It is not hardware availability, which is abundant. It is not grid need, which is acute. The constraint is the volume of capital that can flow through the narrow aperture of current transaction processes. Widen that aperture, and deployment accelerates not incrementally but categorically.
The analogy to earlier waves of financial infrastructure is instructive. Before the development of mortgage-backed securities, residential mortgages were a fragmented, illiquid market. Each loan was originated, held, and serviced by a single institution. The transaction costs of buying and selling individual mortgages were too high for a secondary market to function. Securitisation — whatever its later excesses — solved this by creating standardised pools of loans that could be assessed, priced, and traded efficiently. The result was a massive increase in the availability of mortgage credit and, for decades, a functioning market that connected borrowers to capital at scale.
Distributed energy needs something analogous, not in the specific mechanism of securitisation, but in the underlying function: a reduction in information and transaction costs sufficient to allow capital markets to operate efficiently at the level of individual sites. The returns are there. The demand is there. The grid need is urgent. What has been missing is the ability to process the information required to convert opportunity into investment at the scale the moment demands.
The Window Is Now
There is a temporal dimension to this argument that bears emphasis. The forces converging on distributed energy — surging demand, transmission bottlenecks, falling hardware costs, rising retail rates — are not permanent in their current configuration. Policy environments shift. Supply chains adjust. Grid planners will eventually build more transmission, though the pace will remain slow. The window in which distributed energy can establish itself as a credible, scalable asset class is open now, but it will not remain open indefinitely on these terms.
The distributed energy advocates of the 2010s were, by and large, correct in their analysis but premature in their timing. The grid did not yet need distributed resources urgently. Battery costs were still too high. The economics only worked in a handful of favourable markets. Today, every element of the thesis has matured. Battery packs at $108 per kilowatt-hour. Stationary storage systems at $70 per kilowatt-hour. Retail electricity prices rising 5 to 8 percent annually in most markets. Transmission construction at less than a tenth of what is needed. Interconnection timelines stretching to five years for centralised projects. The macro case has never been stronger.
But the industry risks squandering this moment if it cannot solve the transaction cost problem. Capital is patient, but it is not infinitely so. Institutional investors evaluating distributed energy today are not asking whether the returns are attractive — they can see that they are. They are asking whether the operational and analytical complexity of managing a portfolio of hundreds of disparate sites can be reduced to a level consistent with their cost structures and risk frameworks. They are asking, in effect, whether this is an investible asset class or merely a collection of interesting individual projects.
The answer to that question depends on whether the market infrastructure can be built to make it so. Distributed energy’s hardware costs have followed the same learning curves that made wind and solar the cheapest sources of new electricity generation in most of the world. What remains is for the soft infrastructure — the underwriting, the operations, the portfolio management — to undergo its own cost revolution. That revolution will not come from incremental improvements in spreadsheet-based analysis or marginal reductions in permitting timelines. It requires a fundamental rethinking of how distributed energy assets are assessed, deployed, and managed, using the tools that modern AI makes available.
This is the work we are doing at Neura Energy. Not because we believe distributed energy is a future possibility, but because we believe the future has arrived and the only remaining question is whether the market infrastructure will be built fast enough to meet it. The grid cannot wait for transmission that takes a decade to permit and build. The customers facing rising rates cannot wait for utility procurement cycles that move in multi-year increments. The capital looking for yield in a world of compressed returns cannot wait for deal-by-deal processes that take months per site.
The hardware is ready. The grid needs it. The economics work. What has been missing — and what must now be built — is the infrastructure that allows capital to flow to the grid edge as efficiently as it flows to any other asset class. That is what turns distributed energy from a promising technology into a functioning market.