OpenAI and Oracle to Power Massive "Stargate" AI Data Center with Nvidia Chips


OpenAI and Oracle are set to equip the first phase of their ambitious "Stargate" AI data center project with a staggering 64,000 of Nvidia's latest GB200 Blackwell chips, according to a report by Bloomberg. The initial rollout, slated for completion by summer 2026, will see 16,000 of the powerful AI accelerators installed at a facility in Abilene, Texas, with the remainder to be added in phases by the end of 2026.

The scale of this order underscores both the immense computational demands of cutting-edge AI and Nvidia's continued dominance in the AI hardware market. The GB200, Nvidia's newest generation of data center chips, is experiencing unprecedented demand. Nvidia CFO Colette Kress recently described the Blackwell ramp as "the fastest product ramp in our company's history," with Blackwell generating $11 billion in revenue in a single quarter.

The Stargate project, a $100 billion joint venture between OpenAI, SoftBank, and Oracle, was unveiled at a White House event in January, highlighting its strategic importance in the race to build advanced AI infrastructure. This comes as the US government is taking increased interest in domestic semiconductor manufacturing, with recent talks between TSMC and the Trump administration regarding a potential stake in Intel's foundry unit.

While Nvidia hasn't publicly disclosed the price of the GB200, CEO Jensen Huang previously stated that the less powerful B200 chip costs between $30,000 and $40,000. This suggests a multi-billion dollar investment for the Abilene facility alone. The full Stargate project envisions up to ten such sites, signaling a massive long-term commitment to AI infrastructure. OpenAI and SoftBank have reportedly scouted locations in Pennsylvania, Wisconsin, and Oregon for further Stargate expansion, and Salt Lake City, where Oracle already has cloud capacity, is also on the table.
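For the curious, the back-of-envelope math behind that "multi-billion dollar" figure is straightforward. This sketch assumes the B200's quoted $30,000-$40,000 range as a rough floor for GB200 pricing, since Nvidia hasn't disclosed actual GB200 prices:

```python
# Back-of-envelope estimate for the Abilene chip order.
# Per-unit prices are an assumption: Nvidia has not disclosed GB200
# pricing, so we use Jensen Huang's quoted $30k-$40k range for the
# less powerful B200 as a rough lower bound.
TOTAL_CHIPS = 64_000        # full phase-one order reported by Bloomberg
UNIT_PRICE_LOW = 30_000     # USD, B200 low end
UNIT_PRICE_HIGH = 40_000    # USD, B200 high end

low = TOTAL_CHIPS * UNIT_PRICE_LOW
high = TOTAL_CHIPS * UNIT_PRICE_HIGH
print(f"Estimated chip cost: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
# Estimated chip cost: $1.9B to $2.6B
```

And that range covers only the accelerators themselves, before networking, power, cooling, and construction costs.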

The move comes amid a broader surge in data center spending by hyperscalers, driven by the explosive growth of generative AI. Companies like Meta Platforms are rapidly expanding their AI capabilities, with Meta planning to have computing power equivalent to 600,000 Nvidia H100s (a previous-generation chip) by the end of 2024.

While Nvidia currently leads the AI chip market, competition is intensifying. Companies like AMD, Broadcom, and Intel are vying for market share, and the rise of custom ASIC solutions presents another challenge. Furthermore, the growing importance of AI inference workloads, as opposed to training, could shift the competitive landscape. Nvidia's Blackwell architecture is specifically designed for efficient inference, potentially giving it a long-term advantage.

An OpenAI spokesperson stated that the company is collaborating closely with Oracle on the design and delivery of the Abilene facility, with Oracle responsible for operating the supercomputer. Oracle did not respond to Bloomberg's request for comment. Nvidia declined to comment.