
GPUs as Collateral


The Ticking Clock

The numbers coming out of the "neocloud" space this week were, as is typical these days, completely dizzying. CoreWeave, the publicly-traded bellwether for this new category of AI infrastructure providers, saw its revenue more than double to $1.36 billion for the quarter. Its rival, Nebius, reported sales that soared over 300% and announced a new multi-billion-dollar deal with Meta, adding to the company's already gigantic backlog of contracted revenue.

On the surface, this is the story of the AI boom working perfectly. A new class of specialized companies has emerged to meet the insatiable demand for Nvidia's GPUs, and they are growing at a spectacular pace.

But there's a different, much wilder story happening underneath. To understand it, you have to ask a simple question: how does a startup with a few billion in revenue go out and buy tens of billions of dollars' worth of chips?

The answer, it turns out, is a highly leveraged financial trade built on a mountain of debt and a rapidly depreciating asset. These neoclouds are essentially GPU landlords. They borrow massive sums of money to buy racks of Nvidia's latest chips, and then, in a beautiful bit of financial engineering, often use those very same chips as collateral to borrow even more money. CoreWeave, for example, has raised over $21 billion in this fashion, and paid nearly $264 million in interest expenses in a single quarter.

This is a great business, as long as the value of your collateral holds up. The problem is, the collateral has a built-in self-destruct timer. That timer is called depreciation.
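To make that squeeze concrete, here's a minimal back-of-the-envelope sketch in Python. Every figure in it is a hypothetical assumption (fleet cost, loan-to-value, interest-only debt), not reported data for any company; the point is only the shape of the curve. With straight-line depreciation over three years, a loan that starts comfortably collateralized is underwater within a year unless principal is being paid down just as fast:

```python
# Hypothetical sketch of the GPU-collateral squeeze. All numbers are
# illustrative assumptions, not reported figures for any company.

fleet_cost = 10_000_000_000    # $10B of GPUs bought up front
loan_balance = 8_000_000_000   # borrowed against them at 80% loan-to-value
useful_life_months = 36        # the aggressive ~3-year depreciation assumption

for month in (0, 6, 12, 24, 36):
    # Straight-line depreciation: the collateral loses value linearly.
    collateral = fleet_cost * max(0.0, 1 - month / useful_life_months)
    # Assume interest-only debt, so the balance never shrinks.
    ltv = loan_balance / collateral if collateral else float("inf")
    print(f"month {month:2d}: collateral ${collateral / 1e9:4.1f}B, LTV {ltv:.0%}")
```

Under those assumptions, the loan-to-value ratio blows past 100% around month twelve. In practice, lenders structure these loans to amortize quickly for exactly this reason, which is why the race for contracted revenue matters so much.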

AI chips can become obsolete at a terrifying speed. Even Nvidia's own CEO, Jensen Huang, joked at a recent conference that once his new Blackwell chips start shipping, "you couldn't give Hoppers away." A report from DataCenterDynamics captured the industry's anxiety perfectly, with different players offering wildly different timelines for how long this hardware is actually useful:

In June, Amazon Web Services (AWS) announced that it was cutting costs for its instances with Nvidia H100, H200, and A100 GPUs, in some cases by as much as 45 percent.

Beyond that...[there is also] the impact of new generations of tech coming out.

"[Because of the increased pace of AI development], we're decreasing the useful life for a subset of our servers and networking equipment from six years to five years..." [AWS CFO Brian Olsavsky] said, adding that this will cut operating income this year by about $700 million. In addition, Amazon had "early-retired" some servers and networking equipment which cost about $920 million and is expected to decrease operating income in 2025 by about $600 million.

Even five years is quite a long useful life span to rely on, however, says Scaleway CEO Damien Lucas.

Lucas says that the neocloud is working on the assumption of a three-year depreciation rate.

He notes that, a year or so ago, a customer offered him a couple of thousand of Nvidia's older V100 GPUs for free to run in the cloud. The company did the maths and found that, after considering the compute performance, energy costs, and space constraints, it was cheaper to buy H100s instead.
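The Scaleway anecdote is really a total-cost-of-ownership calculation, and it's worth seeing why "free" hardware can lose. The sketch below uses entirely assumed numbers (power price, hosting overhead, an H100 street price, and a rough 10x effective-performance gap between a V100 and an H100); the actual figures behind Scaleway's maths aren't public. The structure of the argument is what matters: normalize everything to dollars per unit of delivered compute, and old, slow, power-hungry chips get expensive fast.

```python
# Illustrative TCO comparison in the spirit of the Scaleway anecdote:
# "free" V100s vs purchased H100s. Every number here is an assumption.

POWER_PRICE = 0.10          # $/kWh, assumed
HOSTING_PER_YEAR = 2_000    # $/GPU/year for rack space, cooling, networking (assumed)
HOURS_3YR = 3 * 365 * 24

def cost_per_h100_equivalent(capex, watts, relative_perf):
    """Three-year cost, normalized to one H100's worth of compute."""
    energy = (watts / 1000) * HOURS_3YR * POWER_PRICE
    hosting = 3 * HOSTING_PER_YEAR
    return (capex + energy + hosting) / relative_perf

# Assumed: a V100 delivers roughly 1/10 of an H100's effective throughput.
v100 = cost_per_h100_equivalent(capex=0, watts=300, relative_perf=0.1)
h100 = cost_per_h100_equivalent(capex=30_000, watts=700, relative_perf=1.0)

print(f"3-year cost per H100-equivalent: free V100s ${v100:,.0f} vs H100 ${h100:,.0f}")
```

Under these assumptions, the free V100s cost nearly twice as much per unit of compute as the H100s you'd have to pay for, which is the same conclusion Scaleway reached.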

This is the ticking clock that haunts the entire neocloud business model. It has even caught the attention of "Big Short" investor Michael Burry, who recently warned that the entire industry is underestimating the coming "depreciation tsunami."

So how do you run a specialized cloud business where your most valuable asset is designed to become worthless in 36 months? You do two things. First, you lock in your customers. The neoclouds are in a frantic race to sign long-term, multi-year contracts with big, stable companies that can guarantee the revenue needed to pay off the debt on the hardware before it becomes a pile of scrap.
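The urgency of that race is just amortization arithmetic. As a rough illustration (the principal and interest rate below are assumptions, not any company's actual terms), here is what it takes to fully retire GPU-backed debt inside the three-year depreciation window:

```python
# How much contracted revenue must go to debt service to retire GPU-backed
# borrowing before the collateral depreciates to zero? Illustrative numbers.

principal = 8_000_000_000   # assumed GPU-backed borrowing
annual_rate = 0.11          # assumed interest rate on the debt
months = 36                 # pay it off within the 3-year depreciation window

r = annual_rate / 12
# Standard loan amortization (annuity) formula.
payment = principal * r / (1 - (1 + r) ** -months)

print(f"required debt service: ${payment / 1e6:,.0f}M per month, "
      f"${payment * 12 / 1e9:.1f}B per year")
```

That's roughly $3 billion a year of contracted revenue going to debt service alone on an $8 billion loan, before paying for power, staff, or data centers. Hence the frantic race for anchor tenants.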

Second, you tell a good story about "repurposing." The hope is that yesterday's top-of-the-line training chips can become tomorrow's workhorse "inference" chips. But as the DCD report notes, the industry is skeptical. Will there really be a robust market for old, power-hungry hardware when newer, more efficient options are available? No one really knows.

This is the real game being played by the neoclouds. It's a high-wire act of financial leverage, a race to generate cash flow from long-term contracts faster than the relentless march of semiconductor innovation erodes the value of their assets. It's a great business, right up until the moment it isn't.

More on Neocloud:

  • Why Microsoft is Spending $60 Billion on Neocloud for AI Workloads (ARPU)
  • Nebius Sales Soar as Neocloud Provider Inks Deal With Meta (Bloomberg)

On Our Radar

A curated sample from our Intelligence Desk. We deliver these signals customized for specific markets and roles. Learn more about getting your custom feed.

BofA's AI Budget

  • The Headline: Bank of America Focuses its $4 Billion Annual Tech Spend on Enterprise-Wide AI Platforms Over Siloed Tools (Fortune)
  • ARPU's Take: Bank of America has just laid out the new rules for selling AI to financial institutions: go big or go home. They're not buying features; they're buying enterprise-wide transformation and are looking to place massive, multi-year bets on a handful of strategic platform partners.
  • The Go-to-Market Implication: This signals a critical GTM bifurcation for vendors selling into the financial services industry: the market is splitting between small-scale departmental tools and large-scale, enterprise-wide platform deals. For AI and cloud providers, this means a successful GTM strategy requires a dedicated "majors" team focused on platform-level, C-suite engagement for the top banks. Vendors who continue with a high-volume, product-led growth model for generic tools will be locked out of the most lucrative, multi-billion-dollar segment of the market.

Intel's Packaging Play

  • The Headline: Intel Develops Cost-Effective, Disaggregated Heat Spreader for Extra-Large Advanced Chip Packages (wccftech)
  • ARPU's Take: While everyone focuses on the silicon inside the chip, Intel is solving the boring but critical problem of the metal lid on top. Their insight: stop carving a complex lid out of one expensive block and start assembling it from cheaper pieces like a LEGO set. This is a manufacturing breakthrough that unlocks the ability to build the next generation of giant AI chips.
  • The Technology Implication: This is a major innovation in the thermal and mechanical engineering of advanced packaging, directly addressing a key physical bottleneck in creating larger AI chips. By shifting from a monolithic, CNC-machined lid to a multi-part, stamped assembly, Intel is solving the critical challenge of package warping and heat dissipation at extreme scale. This R&D breakthrough is a competitive shot at rivals like TSMC, aiming to make Intel's packaging technology (like Foveros and EMIB) more reliable and scalable for the most demanding AI accelerator designs.

P.S. Tracking these kinds of complex, cross-functional signals is what we do. If you have a specific intelligence challenge that goes beyond the headlines, get in touch to design your custom feed.