Google's Three-Front War
Google vs. Everybody
Alphabet's latest earnings call was, on its face, a victory lap. The company posted strong double-digit growth in its core Search business, its Cloud division is accelerating, and CEO Sundar Pichai touted "breakthroughs in performance" from its new Gemini AI model. But the real story wasn't in the headline numbers. It was in the strategic map Pichai laid out, which revealed that Google isn't just competing with OpenAI's ChatGPT. It's fighting a multi-front war for control of the entire artificial intelligence ecosystem, a conflict that pits it against a different set of powerful rivals at every layer of the technological stack.
The AI race, for Google, is not a simple duel between chatbots. It is a much more complex, and vastly more expensive, battle for vertical dominance. One way to think about it is that Google is simultaneously fighting three distinct wars:
- The Infrastructure War. The opponent here is not OpenAI; it is Nvidia. The AI revolution runs on specialized chips, and Google is waging a two-pronged campaign. First, it is building its own custom silicon: Pichai highlighted the company's seventh-generation Tensor Processing Unit (TPU), designed to bypass the hefty "Nvidia tax," the high margins Nvidia charges for its market-dominating GPUs. At the same time, Google can't afford to fall behind, so it is also a massive Nvidia customer, announcing it will be among the first cloud providers to offer Nvidia's next-generation Blackwell and Vera Rubin chips. This dual strategy is what's fueling a massive $75 billion capital expenditure forecast for 2025.
- The Model War. This is the familiar fight, pitting Google's Gemini directly against models from OpenAI and Anthropic. Here, Google is asserting its comeback, with Pichai noting that Gemini 2.5 Pro "debuted at number one on the Chatbot Arena by a significant margin." Beyond its flagship model, Google is also competing with Meta in open-source AI with its Gemma models. The strategy seems to be to achieve dominance not through a single model, but by winning across a range of AI categories, from the most powerful closed systems to the most widely adopted open ones.
- The Product War. At the top of the stack is the battle for the user, where Google's primary rival is Microsoft. While Google is embedding Gemini into its massive ecosystem of 15 products with over half a billion users each, Microsoft is doing the same with its Copilot assistant across Windows and Office. This is where Google's scale is its greatest weapon. Its "AI Overviews" in Search are already being used by 1.5 billion users every month. This front is less about who has the smartest model and more about who can master distribution and embed their AI most deeply into the daily workflows of billions of people.
The whole thing is a fascinating corporate chess match. Google's bet is that owning the entire "full stack"—from the chip in the data center to the final application on the screen—will be the only true defensive moat in the AI era. It is an incredibly ambitious and expensive strategy. The question now is whether it represents a brilliant move toward vertical integration, or a company being forced to fight on too many fronts at once.
Movable Data Center
One way to think about the AI infrastructure race is that it is a contest to build the biggest possible computer. The prevailing logic is that you find a large, flat piece of land, secure a power contract the size of a small country's GDP, and spend several years and many billions of dollars erecting a massive, centralized data center—a stationary brain to power the digital world.
The script for the AI buildout seemed straightforward enough. And then came the movable data centers. A startup called Armada just raised $131 million from a serious group of investors, including Founders Fund and Microsoft's venture arm, to do precisely the opposite: instead of building giant, immovable brains, it is packing AI data centers into shipping containers. It is worth asking why the smart money is suddenly betting on AI infrastructure that you can put on the back of a truck.
The logic behind packing a data center into a shipping container is, it turns out, quite simple. Bloomberg explains:
Armada makes pods and shipping containers packed with servers that function as modular AI data centers. Its newest product, Leviathan, has 1 megawatt of compute capacity, 10 times that of its previous model. The systems are meant to be located in remote areas to make use of nearby power sources and can be up and running in weeks with a smaller up-front cost than a larger, traditional data center.
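To put that 1 megawatt in perspective, a rough back-of-envelope sketch helps. The accelerator wattage, server overhead, and cooling (PUE) figures below are generic industry assumptions, not Armada's published specs:

```python
# Rough sizing of a 1 MW containerized AI data center.
# Every constant below is an illustrative assumption, not an Armada spec.

POD_POWER_W = 1_000_000   # Leviathan's stated 1 MW of compute capacity
GPU_TDP_W = 700           # one H100-class accelerator at full tilt (assumed)
SERVER_OVERHEAD = 1.15    # CPUs, RAM, networking per GPU server (assumed)
PUE = 1.3                 # cooling and power-conversion overhead (assumed)

watts_per_gpu = GPU_TDP_W * SERVER_OVERHEAD * PUE
gpus = POD_POWER_W / watts_per_gpu
print(f"Leviathan-class pod: ~{gpus:,.0f} GPUs")            # ~956
print(f"Previous model (100 kW): ~{gpus / 10:,.0f} GPUs")   # ~96
```

On those assumptions, a single container holds on the order of a thousand accelerators: real capacity, but still two orders of magnitude below the hundred-plus-megawatt campuses the hyperscalers are building.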
The appeal of this approach comes down to a few hard, physical realities that the centralized cloud model has always struggled with. For many critical applications, the cloud is simply too slow and too far away. If you are running a factory robot that needs to make a real-time decision, you cannot afford the milliseconds of latency it takes to send camera data hundreds of miles to a data center and wait for a response. If you are an oil rig generating terabytes of seismic data, it's often impossible to push all of that information over a satellite link. And if you are a military unit on a battlefield, you are probably not keen on sending sensitive data to a commercial server in another state.
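Both constraints, latency and bandwidth, are easy to put numbers on. Here is a minimal sketch; the 800 km distance, the 25 Mbps uplink, and the 1 TB volume are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope for the two physical constraints on "send it to the cloud."
# The distance, link speed, and data volume are illustrative assumptions.

FIBER_KM_PER_MS = 200  # light in optical fiber covers roughly 200 km per ms

def round_trip_ms(distance_km: float) -> float:
    """Best-case propagation delay, ignoring routing, queuing, and inference."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Factory robot ~800 km (about 500 miles) from the nearest cloud region:
print(f"Round trip: {round_trip_ms(800):.0f} ms minimum")  # 8 ms before any work

# Oil rig pushing 1 TB of seismic data over a 25 Mbps satellite uplink:
seconds = (1e12 * 8) / 25e6
print(f"1 TB upload: ~{seconds / 3600:.0f} hours")         # ~89 hours
```

An 8 ms floor exists before a single routing hop or model inference is added, and a multi-day upload window means the link can never catch up to the sensors. Those are the gaps a container parked on-site is meant to close.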
This is the thesis behind "edge AI": for the most important tasks, you need to bring the computer to the data, not the other way around. Armada's solution is a literal interpretation of that idea. It also neatly sidesteps the biggest bottleneck facing the giant data center builders: power. Instead of trying to convince a utility to build a new power plant for your campus, you can just drop your shipping container next to an existing, underutilized power source in a place like North Dakota.
There is also, of course, a geopolitical angle. Armada's CEO talks about making sure "the world runs on America's AI stack," and one of its investors emphasizes the importance of being able to deploy that stack "anywhere." In this view, a shipping container full of Nvidia chips is not just a data center; it is a tool of foreign policy, a way to quickly project technological influence into allied nations without the messy, multi-year process of an international construction project. Which raises the question: is the AI race a marathon to build the biggest computer, or a sprint to put smaller ones everywhere that matters?
The Scoreboard
- Semiconductor: Intel Is Cutting More Jobs as CEO Tan Tries to Fix Manufacturing Missteps (Reuters)
- AI: Why Walmart Is Overhauling Its Approach to AI Agents (WSJ)
- Social Media: X to Test Using Community Notes to Find the Posts Everyone Likes (TechCrunch)