Nvidia's Landlords
Who's the Customer Here?
There is a standard way that business relationships work. A company like Nvidia sells the essential tools—the picks and shovels—for a technological gold rush. Its customers, the giant cloud computing companies like Amazon, Microsoft, and Google, buy those picks and shovels in absolutely mind-boggling quantities to build out their AI gold mines. The customers then sell access to those mines to corporate clients. Everyone makes a lot of money; the roles are clear.
And then there is the new way, which is a bit weirder. A Wall Street Journal report highlights Nvidia’s DGX Cloud service, which, stripped to its essentials, means Nvidia is now in the business of competing with its own biggest customers. The weirdest part is that its customers are helping it do this. Here’s the WSJ on this arrangement:
Under DGX Cloud’s unusual arrangement, the cloud giants buy and manage equipment—including Nvidia’s chips—that forms the backbone of the service. Then Nvidia leases back that equipment from them and rents it out to corporate clients. It also offers access to its AI experts and software as part of the package.
Right. So your biggest customers are now also your landlords, and you are subletting their own space back to a new set of tenants, who are their potential customers.
Why on earth would anyone agree to this? Well, the hyperscalers have found themselves in a bit of a bind. They are in a frantic arms race to build AI capacity, collectively dropping around $63 billion on capital expenditures in just the first quarter of 2025. They've ordered so many of Nvidia's expensive GPUs that they need the chips generating revenue the second they're plugged in. Leasing some back to Nvidia is a guaranteed way to de-risk that massive capital outlay. Demand is insatiable, and it's better to get a check from Nvidia than to have a billion dollars of silicon sitting idle.
For Nvidia, the logic is simpler. This is a capital-light way to move “up the stack.” Instead of just selling the hardware, it can now capture the recurring margins of cloud services. It also builds direct relationships with end-users, strengthening its famous software ecosystem that makes its hardware so sticky and difficult to replace.
Of course, this is all part of a bigger, more complicated chess match. The hyperscalers are not just sitting back and letting their supplier eat their lunch. They are all frantically trying to dig their way out of this dependency by designing their own custom AI chips, both in-house (Google's TPU, Amazon's Trainium) and with design partners like Broadcom. The goal is to eventually replace Nvidia's high-margin chips with their own, which they can produce at cost.
What you have is a weird, delicate dance of “co-opetition,” a world where your biggest customer is also your landlord, your partner, and your future top competitor. For now, the AI pie is growing so fast that there seem to be plenty of slices to go around. But it's hard to imagine this particular arrangement stays this friendly forever.
The Silicon Is In The Skulls
One way to think about building an AI company is that it's a capital-intensive project. To even get a seat at the table, you need to hoard GPU chips and secure data center capacity on a scale that costs billions. Your single biggest expense is a truly biblical cloud-computing bill, paid just for the privilege of training your models. Another, increasingly popular, way to think about it is as an acqui-hire for a company that doesn't exist yet. The assets aren't the servers; they're the people who know the magic words to say to the servers.
This creates a weird tension for investors. You are making a venture-style bet on human capital, but with the capital requirements of a sovereign wealth fund. The prevailing logic now seems to be that the human capital is the only part you should really underwrite. If you get the right people, they can presumably go find some chips. The chips cannot go find the right people. The most recent, and most spectacular, example comes from Mira Murati, the former chief technology officer of OpenAI. Here’s the Financial Times:
OpenAI’s former chief technology officer Mira Murati has raised $2bn for her new artificial intelligence start-up, in a deal which values the mysterious six-month-old company at $10bn.
San Francisco-based Thinking Machines Lab had not declared what it was working on, instead using Murati’s name and reputation to attract investors, said those familiar with the fundraise. …
There was scant information on what the company is working on, however. In February, it said it aimed to make “AI systems more widely understood, customisable and generally capable”, without providing further details.
So, a famous name and a mission statement about making AI “generally capable” will get you a $2 billion seed round at a $10 billion valuation. The FT's Robin Wigglesworth dubbed this “pre-plan” venture capital, which is as accurate as it is absurd. And it’s a competitive space! Murati’s former OpenAI co-founder, Ilya Sutskever, is also in the pre-plan business. He recently raised the same $2 billion for his new company, Safe Superintelligence, except his PowerPoint deck was probably much better, fetching a cool $32 billion valuation.
What is going on here? Well, the new M&A playbook in AI is less about buying a company and more about acquiring its people. In June, Meta Platforms agreed to a $14.3 billion deal for a 49% stake in Scale AI, an investment structured largely to bring its CEO, Alexandr Wang, in-house to lead a new “superintelligence group.” This followed a pattern of similar “acqui-hires” by Microsoft (of Inflection AI) and Google (of Character.AI), where the prize isn’t a revenue stream, but a team of brains.
The competition for this finite pool of talent has driven compensation to almost comical levels. OpenAI’s CEO Sam Altman recently claimed that Meta was offering his researchers seven- to nine-figure signing bonuses to jump ship. The market is setting a price not on products, but on people, and that price is very, very high.
There are a couple of ways to think about this. One is that this is simply the frothy peak of a talent bubble, fueled by investors who are terrified of missing the next great platform shift and are willing to write enormous checks for a team with the right pedigree. That’s plausible.
The other way to think about it is that the mission has changed. The goal is no longer just to build a better model; it is a race to build Artificial General Intelligence (AGI). In this race, having a handful of the world's top researchers is seen as the only ticket to entry. The VCs aren’t funding a business plan; they’re funding a quest. It’s a fascinating new market where the founders are not just the controlling shareholders; they are, in fact, the only asset that matters.
The Scoreboard
- AI: Meta Wins AI Copyright Lawsuit (Guardian)
- Software: AI Is Doing 30%-50% of the Work at Salesforce, CEO Marc Benioff Says (CNBC)
- Cloud: CoreWeave in Talks to Buy Core Scientific (Reuters)