Oracle's Over-Engineered Cloud
The $455 Billion IOU
Ordinarily, if you are a giant, publicly traded company and you miss your quarterly earnings estimates, your stock goes down. Oracle's stock, apparently, did not get the memo. The company reported revenue and profit that fell short of expectations, and in response, its stock soared over 36% in its best single day since 1992, adding roughly $250 billion in market value.
The explanation for this peculiar market reaction lies in a metric that has suddenly become more important than profit, and one that doesn't appear on the income statement: the backlog. This week, Oracle revealed that its Remaining Performance Obligations (RPO), a measure of future revenue already locked in by contract, had exploded 359% year over year to a colossal $455 billion. This is not a forecast; it is a legally binding IOU from the future, signed by the biggest names in the AI arms race.
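The scale of that growth is easier to feel with a back-of-the-envelope check. Assuming the 359% figure is a simple year-over-year increase, the reported $455 billion implies a prior-year backlog of roughly $99 billion:

```python
# Back-of-the-envelope check on Oracle's reported RPO growth.
# Assumes the 359% figure is a simple year-over-year increase.
current_rpo_bn = 455          # reported backlog, in $ billions
growth = 3.59                 # 359% increase, as a fraction

prior_rpo_bn = current_rpo_bn / (1 + growth)
print(f"Implied prior-year RPO: ~${prior_rpo_bn:.0f}B")
print(f"Implied new bookings:   ~${current_rpo_bn - prior_rpo_bn:.0f}B")
```

In other words, on this rough math Oracle added on the order of $350 billion in contracted future revenue in a single year.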
The AI boom has created such a desperate scarcity of computing power that companies like OpenAI and Meta are no longer just renting cloud servers. They are signing massive, multi-year leases with Oracle for entire data center fleets to secure their place in the new world. This has transformed Oracle from a legacy software firm into a high-growth industrial landlord for the digital age.
The historic irony here is how Oracle is winning these massive deals. For years, its cloud product was a laggard, mocked by developers as "not even half baked." But the brute-force demands of AI training created a new set of rules. Training a frontier model isn't about running one application; it's about making thousands of GPUs act like a single, giant brain. The ultimate bottleneck in such a system is not the chips themselves, but the network that connects them.
The networks of hyperscalers like Amazon and Microsoft were built for general-purpose use—think of a massive public highway designed to handle unpredictable traffic from millions of different users. This design can lead to performance variability, or "noisy neighbors," which is fatal for a massive, synchronized AI job. Oracle's network, in contrast, was over-engineered from its database days to provide a dedicated, private lane for every customer. This non-oversubscribed architecture, combined with RDMA (remote direct memory access), which lets servers share data directly without bogging down the main processor, provides the exact kind of predictable, ultra-low-latency performance that massive AI clusters demand. The very architecture that made Oracle an unappealing choice for the broad, flexible needs of the first cloud war made it the perfect choice for the brute-force, high-performance needs of the second.
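To make the oversubscription point concrete, here is a toy model (the ratios are illustrative, not Oracle's or anyone's actual design) of what happens to per-host bandwidth when every tenant transmits at once, which is exactly what a synchronized training job does:

```python
def worst_case_bandwidth(link_gbps: float, oversubscription: float) -> float:
    """Per-host bandwidth when every host sends at full rate simultaneously.

    An oversubscription ratio of N:1 means hosts collectively offer N times
    more traffic than the upstream fabric can carry, so under full load each
    host is throttled to 1/N of its link speed.
    """
    return link_gbps / oversubscription

# A general-purpose fabric oversubscribed 3:1 (illustrative figure):
# 400 Gbps links degrade to roughly a third of line rate under full load.
print(worst_case_bandwidth(400, 3.0))

# A non-oversubscribed (1:1) fabric sustains full line rate even when
# every host transmits at once, which is what tightly coupled GPU
# clusters acting as "a single, giant brain" require.
print(worst_case_bandwidth(400, 1.0))
```

The asymmetry is the point: a general-purpose cloud rarely sees all tenants saturate their links simultaneously, so oversubscription is a sensible cost optimization there, and a liability only for workloads like large-scale AI training where every node bursts in lockstep.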
But winning the training race is just step one. This is where Oracle is placing its big bet for the next, and much larger, battlefield: enterprise inference. The goal is to securely connect AI models to a company's private data, and Oracle is leveraging the fact that most of the world's valuable corporate data already lives in an Oracle database. Here is Chairman and CTO Larry Ellison during the company's Q4 2025 earnings call, explaining this point:
Using AI, I mean, the current AI models are trained on the Internet, otherwise known as publicly available data. Do you think JPMorgan Chase makes all of their internal data publicly available? Do you think those AI models train on JPMorgan Chase's data? Companies want to be able to use AI models on top of their own data. That is essential. Oracle applications make all of the data inside the Oracle applications available to AI models like Grok or ChatGPT... We have all of those LLMs in our cloud or in the Oracle Cloud. So this is our value proposition.
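The pattern Ellison is describing, grounding a model's answers in private, structured data rather than in its training set, can be sketched generically. This is an illustrative retrieval-style example, not Oracle's implementation; the schema, data, and `ask_model` stub are invented for the sketch:

```python
import sqlite3

# Illustrative sketch of "AI on top of your own data": pull private records
# from a database and hand them to a model as context.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [("Acme", 1200.0), ("Globex", 850.5)])

rows = conn.execute("SELECT customer, amount FROM invoices").fetchall()
context = "\n".join(f"{c}: ${a:,.2f}" for c, a in rows)

def ask_model(question: str, context: str) -> str:
    # Stand-in for a real LLM call: a production system would send the
    # question plus this private context to a hosted model. The grounding
    # pattern, not the model call, is the point here.
    return f"[model sees {len(context.splitlines())} private records]"

print(ask_model("Which customer owes the most?", context))
```

The model never needs to have been trained on the data; it only needs the data delivered at query time, which is why owning the database where that data already lives is a strategic position.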
And so, for Wall Street, the logic is now clear. A missed quarter is a rounding error; a half-trillion-dollar backlog is destiny. The only problem is that a backlog of this size is not just a promise of future revenue; it's a monumental test of Oracle's operational ability to actually build all the factories to fulfill it. But that's for another day.
The Sanction Boomerang
The basic logic of a sanction is to deny your opponent a critical resource, thereby slowing them down. For several years, the US has tried to do just that to China's artificial intelligence industry by restricting its access to the advanced semiconductors made by Nvidia. The strategy seemed to be working. But now, a funny thing is happening: China's tech giants are starting to build their own.
This week, reports emerged that Alibaba and Baidu have begun using their own internally designed chips to train some of their AI models, a significant step toward weaning themselves off Nvidia. Here's Reuters:
China's Alibaba and Baidu have started using internally designed chips to train their AI models, partly replacing those made by Nvidia, The Information reported on Thursday, citing four people with direct knowledge of the matter.
Alibaba has been using its own chips for smaller AI models since early this year, while Baidu is experimenting with training new versions of its Ernie AI model using its Kunlun P800 chip, the report said.
This is not a complete abandonment of Nvidia; both companies still rely on its processors for their most cutting-edge work. But the development signals a critical shift. The primary driver, of course, is US policy. Washington's export controls created a supply crisis for Chinese firms, while at the same time, Beijing was exerting immense pressure on them to "buy Chinese." The result is a powerful two-pronged incentive to develop homegrown alternatives.
The most critical part of this story is that China's chips are apparently becoming "good enough." The report indicates that Alibaba's Zhenwu chip can now compete with Nvidia's H20, the scaled-down version the US permits to be sold in China. "Good enough" is the key phrase here. China's chips don't need to be the best in the world to be a massive threat. If they can effectively handle the majority of AI training tasks, they can capture the vast domestic market, allowing Chinese companies to secure their supply chain and iterate on their own hardware-software ecosystem without waiting for permission from Washington.
For Nvidia, this is a direct threat to a major market already under pressure. For the US, it raises a more profound question. The goal of the sanctions was to maintain a significant technological lead. Instead, the policy appears to have acted as a powerful catalyst for China's own semiconductor industry. The short-term pain imposed by sanctions may be leading to the long-term strategic consequence of a truly decoupled and formidable rival.
The Scoreboard
- Cloud Infra: Alibaba to raise $3.2bn via convertible bond to fund cloud growth (Nikkei Asia)
- Self-driving: Amazon’s Zoox jumps into U.S. robotaxi race with Las Vegas launch (CNBC)
Enjoying these insights? If this was forwarded to you, subscribe to ARPU and never miss out on the forces driving tech: