December 10th, 2025
NVIDIA’s proposed $100 billion infrastructure partnership with OpenAI, often described as the largest AI-compute project ever announced, remains incomplete. At the UBS Global Technology & AI Conference, NVIDIA CFO Colette Kress clarified that the agreement is still at the letter-of-intent stage, with no definitive contract in place. Her comments offered a more restrained view of a deal that previously generated intense enthusiasm and outsized expectations across both the AI and financial sectors.
The original September announcement suggested a sweeping expansion of AI infrastructure: up to 10 gigawatts of NVIDIA systems deployed for OpenAI over the coming years. That scale would represent millions of GPUs, purpose-built data-center installations, and a multiyear pipeline of hardware demand. For many, the size alone implied inevitability: that the partnership would move straight from announcement to execution. Kress’s latest remarks show that the story is far more nuanced.
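As a rough, illustrative sanity check on the "millions of GPUs" framing, one can translate a 10-gigawatt power envelope into an accelerator count. The per-GPU power draw and facility overhead below are assumptions for the sketch, not figures disclosed by NVIDIA or OpenAI:

```python
# Back-of-envelope estimate: how many accelerators a 10 GW build-out might imply.
# GPU_POWER_KW and PUE are illustrative assumptions, not disclosed figures.

TOTAL_POWER_GW = 10     # announced power envelope
GPU_POWER_KW = 1.2      # assumed draw per GPU, including its share of the host server
PUE = 1.3               # assumed power usage effectiveness (cooling, distribution losses)

total_power_kw = TOTAL_POWER_GW * 1_000_000     # 10 GW expressed in kW
power_per_gpu_kw = GPU_POWER_KW * PUE           # facility power attributable to one GPU
gpu_count = total_power_kw / power_per_gpu_kw

print(f"Approximate GPU count: {gpu_count:,.0f}")   # roughly 6-7 million under these assumptions
```

Under these assumptions the envelope works out to several million accelerators, which is consistent with the scale implied in the announcement; different assumptions about per-GPU power or facility overhead shift the figure, but not the order of magnitude.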
According to NVIDIA, none of the potential hardware orders associated with the OpenAI agreement are included in the company’s more than $500 billion in disclosed data center bookings through 2026. That omission signals that the deal, despite its headline figure, has not yet translated into committed revenue. Public filings go a step further, noting there is “no assurance” a definitive agreement will be completed on expected terms, or at all. These disclosures add uncertainty at a time when AI infrastructure demand is widely assumed to be straightforward and guaranteed.
Above: OpenAI logo with the NVIDIA headquarters in Santa Clara, California in the background. Photo by David Aughinbaugh II for CircuitRoute.
When the deal was first announced, the market reacted quickly. Investors interpreted the announcement as confirmation that OpenAI planned to secure long-term access to NVIDIA hardware, potentially reshaping supply flows across the industry. The 10-gigawatt (GW) figure, in particular, stood out not only for its magnitude but for what it implied about the next era of model training and deployment. An expansion of that size would require large-scale commitments in land, power, cooling, and network capacity, and it raised expectations that OpenAI would accelerate its infrastructure plans dramatically.
The clarification that no contract has been signed alters that trajectory. Without a binding agreement, the timing, scale, and architecture of any future deployment remain unsettled. Even if a deal eventually materializes, execution would depend on the availability of suitable energy infrastructure, data-center construction, and alignment between NVIDIA’s hardware cycles and OpenAI’s model development plans. These dependencies make the project far more uncertain than early reactions suggested.
Another factor influencing the discussion is concern around what analysts describe as vendor-financed deals. NVIDIA has backed several AI startups that later became major customers, raising questions about how much of the demand is market-driven and how much is reinforced by the vendor’s own investments. If structured similarly, the OpenAI arrangement would fit into that pattern, drawing criticism over whether projected hardware demand fully reflects the real state of the market.
Beyond financial structure, the strategic implications remain significant. If completed, a 10 GW deployment would cement NVIDIA as OpenAI’s primary compute supplier for years. That alignment could influence model-training strategies, cluster design, and the competitive positioning of alternative hardware vendors. Conversely, the absence of a finalized agreement preserves flexibility for OpenAI to diversify across suppliers or adjust its infrastructure footprint as new architectures mature.
The hesitancy visible in NVIDIA’s filings also reflects the realities of long-term hardware planning. GPU product cycles are getting shorter, with new architectures arriving faster than in past generations. Committing to multiyear, multibillion-dollar purchases requires confidence that the supplied hardware will remain competitive throughout the deployment window. Both companies face a rapidly shifting environment, and that volatility may be contributing to the slower pace of finalization.
Beyond the specifics of this deal, the delay speaks to the complexity of scaling AI infrastructure at this level. It shows how quickly ambitious announcements can be read as certainty, even when based on early-stage commitments. It also highlights the hurdles companies face when attempting to scale compute capacity at unprecedented levels. Building and powering 10 GW of high-density AI systems requires coordination across utilities, real estate partners, regulators, and supply chains, each shaped by its own constraints and timelines.
Whether the NVIDIA and OpenAI deal ultimately closes remains an open question. The possibility is still on the table, and its completion would significantly reshape forecasts for AI hardware demand. But until both companies sign a definitive agreement and outline clear delivery milestones, the project is best viewed as a large opportunity still under negotiation rather than a locked-in infrastructure plan.
For now, the clarification serves as a reminder that even in a period of rapid AI growth, ambitious announcements often move on slower, more cautious timelines once they encounter practical constraints.
