Why Google Cloud Platform Is Fueling OpenAI
The battle lines in AI seemed clearly drawn: OpenAI, backed by Microsoft's cloud, was on one side, and Google, with its own DeepMind AI lab and massive infrastructure, was on the other. This week, those lines were blurred in a startling way. In a deal that reshapes the industry's competitive landscape, OpenAI is tapping its archrival, Google, for its cloud computing services.
The move underscores a fundamental truth about the current AI boom: the thirst for raw computing power is so insatiable that it creates the strangest of bedfellows. But for Google, the deal represents more than just a major new cloud customer; it's a massive strategic victory for its custom-built AI chips and a direct challenge to Nvidia's dominance.
Why is OpenAI looking beyond Microsoft?
The simple answer is scale. Training and running frontier AI models like those behind ChatGPT requires a staggering amount of computational power, or "compute." While Microsoft has invested billions and remains OpenAI's primary partner, demand for compute is growing so fast that no single provider can meet all of OpenAI's needs. That reality has forced OpenAI to aggressively diversify its infrastructure partners to avoid bottlenecks.
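For a sense of the scale involved, a common industry rule of thumb puts training compute at roughly 6 × parameters × training tokens. The sketch below applies it to an assumed trillion-parameter model; the model size, token count, and per-chip throughput are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope training-compute estimate using the common ~6 * N * D
# FLOPs rule of thumb. All inputs are illustrative assumptions.

params = 1e12             # assumed 1-trillion-parameter frontier model (N)
tokens = 15e12            # assumed 15 trillion training tokens (D)

train_flops = 6 * params * tokens          # ~9e25 floating-point operations

chip_flops_per_sec = 1e15                  # assumed ~1 PFLOP/s sustained per accelerator
chip_seconds = train_flops / chip_flops_per_sec
chip_years = chip_seconds / (365 * 24 * 3600)

print(f"Total training compute: {train_flops:.1e} FLOPs")
print(f"Accelerator-years at 1 PFLOP/s sustained: {chip_years:,.0f}")  # ~2,854
```

Even under these rough assumptions, a single training run ties up thousands of accelerators for a year, which is why OpenAI is spreading that load across every provider it can.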
This strategy isn't new. OpenAI has already announced the ambitious $500 billion "Stargate" data center project with Oracle and SoftBank and has signed multibillion-dollar deals with specialized cloud providers like CoreWeave. Adding Google to the mix is the logical next step in a relentless quest to secure every available petaflop of processing power. It reduces OpenAI's critical dependency on a single partner (Microsoft) and gives it access to a different kind of powerful infrastructure.
What's in it for Google?
On the surface, it's a huge win for Google Cloud Platform (GCP), which has long trailed Amazon Web Services and Microsoft Azure. Landing the world's most famous AI company as a customer is a major coup. But the deal also creates a fascinating competitive paradox for Google. Here's how Scotiabank analysts see the dynamic, as reported by Reuters:
The deal ... underscores the fact that the two are willing to overlook heavy competition between them to meet the massive computing demands. Ultimately, we view this as a big win for Google’s cloud unit, but ... there are continued worries that ChatGPT is becoming an incrementally larger threat to Google’s search dominance.
Beyond that strategic tension, the real story is the hardware underneath. The infrastructure OpenAI will use at Google isn't primarily built on the Nvidia GPUs that power Microsoft's Azure offerings. Instead, it relies on Google's own custom-designed chips: Tensor Processing Units (TPUs). For over a decade, Google has been investing in its own silicon, creating a powerful and efficient alternative to Nvidia's hardware; until recently, those chips were used mainly for internal projects like Search and Gemini.
By winning OpenAI's business, Google has secured the ultimate third-party validation: its TPU-powered cloud is now proven to be robust enough for the world's most demanding AI workload.
Is this a threat to Nvidia's dominance?
In a word, yes. The AI industry currently runs on an "Nvidia tax." Nvidia, a fabless chip designer, enjoys gross margins estimated at roughly 80-90% on its high-end data center GPUs, which means cloud providers like Microsoft pay a massive premium for the hardware they supply to customers like OpenAI.
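To see what that margin implies, here is a quick back-of-envelope calculation; the GPU price is an illustrative assumption, and only the 80-90% margin range comes from the estimate above.

```python
# Back-of-envelope sketch of the "Nvidia tax." The GPU price is an
# illustrative assumption; only the ~80-90% margin range is from the text.

gpu_price = 30_000                   # assumed price of a high-end data center GPU (USD)
gross_margin = 0.85                  # midpoint of the estimated 80-90% range

# Implied cost to build the chip, i.e. roughly what a vertically
# integrated designer like Google pays for comparable silicon:
build_cost = gpu_price * (1 - gross_margin)

print(f"Implied build cost: ${build_cost:,.0f}")                        # $4,500
print(f"Markup paid by GPU buyers: {gpu_price / build_cost:.1f}x")      # 6.7x
print(f"At-cost silicon vs. list price: {build_cost / gpu_price:.0%}")  # 15%
```

Under those assumptions, at-cost silicon runs about 15% of what a GPU buyer pays, which is the same ballpark as the cost figure cited next.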
Google, by designing and deploying its own TPUs, effectively bypasses this tax. Industry analysis suggests that Google may be able to run its AI compute at a fraction of the cost paid by companies buying Nvidia GPUs, perhaps as little as 20%. OpenAI's decision to use Google's infrastructure signals that this cost-performance trade-off is now compelling enough to attract the biggest players, and it demonstrates that a viable, non-Nvidia ecosystem for training and running large-scale AI now exists. While Nvidia's CUDA software platform remains a powerful moat, this deal shows that for the right price and performance, even the biggest AI labs are willing to work on alternative hardware stacks.
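As for what working on an alternative hardware stack looks like in practice: frameworks such as Google's JAX compile through the XLA compiler, so the same model code can target GPUs or TPUs without CUDA-specific rewrites. The snippet below is a minimal, hypothetical sketch of that portability, not code from OpenAI or Google.

```python
# Minimal portability sketch: the same JAX program compiles, via XLA,
# for whatever backend is present (CPU, Nvidia GPU, or Google TPU).
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for the available accelerator
def attention_scores(q, k):
    # Toy attention-score computation standing in for real model code.
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))

q = jnp.ones((4, 64))
k = jnp.ones((4, 64))
print(attention_scores(q, k).shape)  # (4, 4)
print(jax.devices())                 # e.g. [TpuDevice(...)] on a TPU VM
```

Nothing in the function mentions the hardware; the lock-in lives in lower-level CUDA kernels, which is exactly the layer compilers like XLA abstract away.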
What are the risks for Google?
The arrangement creates a fascinating and messy internal dilemma for Google. By selling its powerful and cost-efficient TPU capacity to OpenAI, the Google Cloud division is directly arming the very company that poses the most significant threat to Google's core search advertising business. Every query answered by ChatGPT is a query not asked on Google Search. This move forces Google to balance the lucrative, immediate revenue of its cloud business against the long-term strategic defense of its consumer-facing monopoly.
Furthermore, Google is reportedly facing its own internal compute capacity constraints. On an April earnings call, Alphabet CFO Anat Ashkenazi noted that the company already lacked sufficient capacity to meet all of its cloud customers' demands. Allocating precious TPUs to a direct rival will only intensify the internal competition for resources among Google's cloud customers, its own consumer products like Gemini, and its DeepMind research division, all of which are clamoring for more compute.
Reference Shelf:
OpenAI taps Google in unprecedented cloud deal despite AI rivalry, sources say (Reuters)
The Cost of AI Compute: Google’s TPU Advantage vs. OpenAI’s Nvidia Tax (ARPU)
OpenAI and Oracle to Power Massive "Stargate" AI Data Center with Nvidia Chips (ARPU)