OpenAI and Nvidia’s circular deals are emblematic of a speculative bubble: a repeat of the dot-com era in the making?

Circular revenue loops that bind AI platforms to their suppliers have become increasingly common in the AI ecosystem, blurring the line between investment, infrastructure and demand forecasting. The New York Times counts seven such transactions at OpenAI in a recent article (shorturl.at/ucOqY).

The structure is simple but potent. OpenAI commits billions to purchasing compute capacity, chips, or data centre access. The suppliers build or lease out the infrastructure. Often, these same companies also benefit from equity arrangements, profit shares, or long-term purchase guarantees that ensure they profit not just from supplying OpenAI but from its continued expansion. In some cases, they even invest in OpenAI, closing the loop entirely.

In the chip/compute arena, the Nvidia relationship is emblematic. It supplies the AI chips OpenAI depends on. Simultaneously, Nvidia is investing capital into OpenAI and its affiliates. Thus, OpenAI spends vast sums on Nvidia’s chips while Nvidia positions itself to benefit from OpenAI’s valuation and growth. Money flows in a tight circle, and the supplier becomes a stakeholder.

Data centre development follows the same pattern. OpenAI has committed hundreds of billions of dollars towards new data centre projects in partnership with Oracle, SoftBank and specialized infrastructure builders. These sites are purpose-built for AI workloads and will reportedly require over 10 gigawatts of capacity—equivalent to the energy needs of some small countries.

OpenAI foots the bill, but in many cases these vendors remain owners or long-term lessors of the capacity, effectively renting the future back to OpenAI.

Even OpenAI’s global expansion reflects this logic. The company has announced sovereign partnerships to build national AI infrastructure. OpenAI advises governments on AI strategy, helps fund or design data centre capacity and ensures that its own models are run on those very systems. The result: OpenAI subsidizes the infrastructure, governments endorse the partnership, and OpenAI becomes the default AI provider within each jurisdiction.

These arrangements are not limited to a few vendors. AMD has secured a multi-billion-dollar contract to supply OpenAI with AI chips. CoreWeave, a startup that builds AI data centres, has a $22 billion purchase commitment from OpenAI. The field has become a dense web of mutual dependencies, where nearly every major player is a supplier, partner, or customer of the others.

The numbers are staggering. Between chip supply agreements, cloud compute contracts and data centre build-outs, OpenAI is estimated to have committed over $1.15 trillion in infrastructure spending between 2025 and 2035, according to Tom Tunguz.

These are not notional bets. They are legally binding obligations, many tied to hardware delivery, minimum usage levels or exclusive access terms. To sustain these numbers, OpenAI would need to reach $577 billion in revenue by 2029, a large jump from $10 billion in 2024 (shorturl.at/JE4Lm).
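To gauge the scale of that jump, a minimal back-of-the-envelope sketch of the implied annual growth rate, assuming simple compounding from the $10 billion (2024) and $577 billion (2029) figures cited above:

```python
# Rough compound-annual-growth-rate (CAGR) check on the revenue
# figures cited in the article: ~$10B in 2024, ~$577B needed by 2029.
start_revenue = 10e9     # 2024 revenue, in dollars
target_revenue = 577e9   # 2029 revenue target, in dollars
years = 2029 - 2024      # compounding period

cagr = (target_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.0%} per year")  # roughly 125% per year
```

In other words, revenue would have to more than double every year for five consecutive years, a pace with few precedents at this scale.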

AI now finds itself in a parallel moment. Revenue from AI applications—whether via ChatGPT subscriptions, enterprise APIs, or licensing deals—is growing, but not yet at a pace that justifies the scale of infrastructure spend. Training costs for frontier models are escalating. Energy requirements are surging. Regulatory pressure is mounting. There is a profound mismatch between what AI companies expect the market to deliver and what the market has thus far validated.

OpenAI and its peers are pricing in exponential expansion as a certainty. Contracts are being signed with 10-year horizons, and capital is being deployed today to serve demand that may not materialize for years.

The result is a system prone to reflexivity.

Demand projections justify infrastructure investment. Infrastructure deployment justifies more capital raising. Capital raising inflates valuations. Valuations justify further infrastructure spend. It is a self-reinforcing loop, sustained so far only by belief.

This structure bears hallmarks of past speculative cycles. The dot-com era saw massive bandwidth and server deployments based on the presumption that internet demand would keep growing exponentially.

US and global telecom companies grew capacity at a breakneck pace, a build-up in which I participated, funded by future revenue assumptions that never fully materialized. Much of that infrastructure became stranded. The railroad boom of the 19th century and the electrification craze of the early 20th century bore witness to the same pattern.

The AI sector is witnessing a textbook case of speculative investment: massive capital allocations made not on proven economics but on projected dominance. The circular revenue relationships between OpenAI and its vendors amplify this risk.

The AI revolution may be real, but its business model is, at present, speculative by design. And as history has shown, speculative cycles rarely end on schedule. They end at the precipice.

The author is co-founder of Siana Capital, a venture fund manager.