
The Anti-Nvidia Alliance


Nvidia Has Some Competition

One of the fundamental laws of the modern tech economy is that all AI-related capex eventually flows to Nvidia. It is a force of financial gravity, pulling in tens of billions of dollars from every hyperscaler and sovereign wealth fund with an AI dream. If you are building AI, you are, in some way, sending a check to Jensen Huang.

But the past few days felt different. The market loves a good challenger narrative, and this week it got two, setting news outlets and trading desks abuzz. In a flurry of what seemed like coordinated moves, two of Nvidia's biggest rivals came out with news that took aim at the very heart of its AI compute empire.

First, AMD announced that cloud giant Oracle would deploy 50,000 of its upcoming AI chips, hot on the heels of a massive deal with OpenAI. Then, Broadcom unveiled a new networking chip optimized for "scale-out" architectures in hyperscale AI environments and designed to compete with Nvidia's own.

You see the problem. This isn't just one competitor getting a little better. This is a multi-front assault on the hardware fortress Nvidia has built. The story of Nvidia's hardware dominance has always rested on two integrated parts:

  1. The GPUs, the best-in-class engines of AI.
  2. The Proprietary Networking that links thousands of GPUs together into a single brain.

For the first time, there are now credible challengers attacking both of those fronts at once.

On the GPU front, AMD has finally broken through. Securing both Oracle and OpenAI as major customers for its upcoming MI450 chips means it's no longer just a secondary option; it's a primary supplier. Now, let's put this in perspective. In its most recent quarter, AMD's entire data center business posted revenues of $3.2 billion. Nvidia's, by contrast, was $41.1 billion—more than 12 times larger. The Oracle and OpenAI wins for AMD are significant, but financially, it is still operating in a different solar system. The excitement, of course, comes from the arithmetic of small bases: it's much easier to grow quickly from a $3 billion base than a $41 billion one, and the market loves an acceleration story.

Then there’s the networking, the data center's nervous system. Nvidia has long dominated this space with its own high-speed technologies such as InfiniBand. But now Broadcom is launching its "Thor Ultra" networking chip, aiming to offer an open, Ethernet-based alternative. Here's Reuters:

The Thor Ultra will battle Nvidia's networking interface chips and aim to further entrench Broadcom's control of network communications inside data centers designed for AI applications.

It comes after Broadcom on Monday unveiled a deal to roll out 10 gigawatts worth of custom chips for ChatGPT maker OpenAI beginning in the second half of 2026, challenging Nvidia's grip on the AI accelerator market.

But the attacks on Nvidia's individual products are only half the story. The real vulnerability lies in its customer list. Consider the numbers from Nvidia's latest quarter:

  • Just two unnamed customers accounted for a staggering 39% of total revenue.
  • Its top six customers together represented 85% of all sales.

This extreme concentration means Nvidia's fate is inextricably tied to the spending decisions of a tiny handful of companies. And you know who we're talking about: the hyperscalers. This is the context for the quietest, but perhaps most significant, front now opening up—one that offers those very customers an escape route.

This is the world of custom silicon, or ASICs, and it is booming. The biggest hyperscalers have a powerful incentive to design their own specialized AI chips to avoid paying the full "Nvidia tax," and Broadcom has become their undisputed gun-for-hire. According to S&P Global, who just upgraded Broadcom's rating to 'A-', the company is now the "second-largest AI semiconductor provider behind Nvidia" and the clear "market leader in custom ASIC solutions."

The scale of this shift is striking. S&P notes that AI chips went from just 14% of Broadcom's semiconductor sales in fiscal 2023 to 57% in its most recent quarter. Custom chip revenue hit $5.2 billion in that quarter, with $6.2 billion forecast for the next. Again, while this is a fraction of Nvidia’s total, the acceleration is what has investors excited. Every custom chip that Google, Meta, or OpenAI builds with Broadcom is a high-end GPU it doesn’t have to buy from Nvidia, representing a direct bypass of its core business.

Of course, Nvidia is still the king. But the era of its undisputed monopoly seems to be ending. What we're seeing is the formation of a sort of anti-Nvidia alliance, where customers can now, for the first time, plausibly build a top-tier AI system by picking and choosing components. The fortress is still standing, but the customers are finally getting some leverage.


On Our Radar

A curated look at key signals from the ARPU Intelligence Desk

Salesforce's AI Agent Offensive

  • The Headline: Salesforce has launched Agentforce 360, an AI platform designed to deploy autonomous agents across its entire suite of enterprise products. (Reuters)
  • ARPU's Take: This is Salesforce's move from passive AI (offering suggestions with Einstein) to active AI (executing tasks with agents). It’s a critical defensive maneuver to prevent its massive customer base from being poached by AI-native startups promising hyper-automation. By embedding agents directly into the core platform, Salesforce is increasing customer dependency and creating powerful new up-sell opportunities.
  • The Implication: The bar for enterprise software has been raised. Having an "AI assistant" is no longer enough; the new competitive standard is an "AI agent" that can execute multi-step tasks. This puts immense pressure on all SaaS players to move beyond chatbots and deliver tangible, autonomous workflow automation to prove their value.

Nvidia's AI Power Grid Solution

  • The Headline: Nvidia has tapped specialized chipmaker Power Integrations to supply advanced power-handling chips for its new, more energy-efficient AI data center architecture. (Reuters)
  • ARPU's Take: This move shows that Nvidia recognizes a critical bottleneck to its own growth: the astronomical power consumption of AI data centers. By building an ecosystem of specialized partners like Power Integrations, Nvidia is proactively solving the energy problem to ensure its roadmap isn't constrained by grid limitations. For Power Integrations, getting Nvidia’s stamp of approval is a massive validation that plugs it directly into the lucrative AI hardware supply chain.
  • The Implication: The AI boom is creating a "second-order" gold rush for the companies that provide the picks and shovels. The focus is expanding beyond GPUs to the entire infrastructure stack, especially power and cooling. This signals that the AI value chain is deepening, creating significant opportunities for specialized component makers who can solve the immense physical challenges of scaling supercomputers.

These are select insights from the ARPU Intelligence Desk. The full platform provides the continuous signal tracking and comparative analysis required for professional work.

If this level of intelligence is a fit for your needs, we invite you to request a brief demo.