
The Napkin Math of Intelligence


Programming note: Our next issue lands on 6 February, when we'll take a look at value capture in the AI build-out.

$1 for $1 Revenue

The conversation around a potential AI bubble has grown louder, yet the pace of capital expenditure from hyperscalers shows no signs of slowing. This week alone, Microsoft reported a record $37.5 billion in capital spending for its last quarter, a 66% year-over-year jump. Not to be outdone, Meta boosted its own 2026 capex forecast to as much as $135 billion in its pursuit of "superintelligence".

There are different ways to dissect the economics of this AI build-out, but the most logical place to start is at the source. To understand the napkin math of the AI economy, we should first look at the OG AI lab that triggered this cycle: OpenAI. In its recent business update, titled A business that scales with the value of intelligence, the company offers the closest thing we have to a Rosetta Stone for the economics of the AI industry.

So let's look at the spreadsheet.

Based on OpenAI's own figures, the company is generating roughly $20 billion in Annual Recurring Revenue (ARR) on approximately 1.9 gigawatts (GW) of compute. Rounding for simplicity, that is a yield of roughly $10 billion in revenue for every gigawatt of compute deployed.

Now, we have to look at what it costs to get a gigawatt. In an interview with the New York Times, Anthropic CEO Dario Amodei noted that building a one-gigawatt data center costs roughly $50 billion. In the technology industry, hardware typically has a five-year depreciation cycle—not necessarily because the chips stop working, but because the next generation of chips makes the old ones look like pocket calculators.

The napkin math, then, is quite straightforward:

  1. You spend $50 billion to build the infrastructure (1 GW).
  2. You depreciate that asset at $10 billion per year, over 5 years.
  3. You generate roughly $10 billion per year in revenue from that asset.
  4. The rough math looks like $10 billion of revenue for every $10 billion of capital depreciation (see the quick sketch below).
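
To make the arithmetic explicit, here is a minimal sketch of those unit economics in Python. Every input comes straight from the figures above (roughly $20 billion in ARR on ~1.9 GW, Amodei's ~$50 billion per gigawatt, a five-year straight-line depreciation cycle); nothing else is assumed beyond rounding.

```python
# Napkin math for AI compute unit economics, using the article's figures.
# All dollar amounts are in billions.

ARR = 20.0                # OpenAI's annual recurring revenue, $B
COMPUTE_GW = 1.9          # deployed compute, GW
BUILD_COST_PER_GW = 50.0  # cost to build 1 GW of data center capacity, $B
DEPRECIATION_YEARS = 5    # straight-line hardware depreciation cycle

# Revenue yield per gigawatt (~$10.5B, rounded to $10B in the text)
revenue_per_gw = ARR / COMPUTE_GW

# Annual depreciation charge per gigawatt ($50B / 5 years = $10B)
depreciation_per_gw = BUILD_COST_PER_GW / DEPRECIATION_YEARS

print(f"Revenue per GW:       ${revenue_per_gw:.1f}B / year")
print(f"Depreciation per GW:  ${depreciation_per_gw:.1f}B / year")
print(f"Margin before opex:   ${revenue_per_gw - depreciation_per_gw:.1f}B / GW / year")
```

On these numbers, depreciation alone consumes essentially all of the revenue, before a single electricity bill or researcher salary is paid — which is the point of this section.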

Whether OpenAI finances this as a capital expense on its own books or as a massive, long-term rental agreement with hyperscalers like Microsoft and Oracle, the fundamental economic pressure remains the same. This describes a business that is effectively a high-end utility. For every $10 billion in revenue OpenAI brings in, it must set aside $10 billion to account for the fact that its machines are wearing out or becoming obsolete. And that is before opex: the electricity bills, researcher salaries, and sales and marketing expenses that go into operating global-scale AI software.

In this light, the business of "scaling intelligence" is currently a business of managing an expensive, low-margin infrastructure project.

The Quality of the Token

There is, however, a variable that the napkin math doesn't quite capture: the quality of that $20 billion in ARR.

In a traditional software boom, revenue comes from enterprises paying to solve a problem (e.g., "I pay Salesforce $100 to consolidate my customer data so I can make $500"). In the AI boom, we don't yet have visible and granular data on the source of the revenue. We know the volume of tokens being sold, but we don't know who is buying them, or for what.

There are, in principle, two types of revenue at the top of the AI funnel:

  1. Productive ROI: An enterprise (like a bank or an automotive company) pays for Claude or ChatGPT because it makes their employees 25% more productive. This is "real" economic demand.
  2. Speculative ROI: A venture-backed startup raises $100 million in seed funding and immediately spends $20 million of it on OpenAI API tokens to build an "AI-powered" something-or-other.

In the second scenario, the AI labs are not yet capturing value created; they are capturing a portion of the venture capital flowing into the ecosystem. If a significant chunk of the revenue growth at the frontier labs is simply recycled VC capital, then the industry hasn't yet proven it delivers a sustainable bottom-line return for its customers.

B2B vs B2C

If you are an AI lab looking at this napkin math, where depreciation costs nearly match revenue, you have to choose how to capture more margin, expand your revenue sources, or both.

The industry has essentially split into two camps based on this choice.

The first camp, inhabited by Anthropic in the U.S. and Mistral in Europe, is moving toward the "Managed Service" model. They have looked at the commodity nature of the raw AI token and decided that the only way to make the math work is to get into the enterprise segment, where margins are higher and more predictable than on the consumer side.

As Mistral CEO Arthur Mensch explained in a recent interview, the sheer complexity of implementing AI requires a level of service far beyond just selling an API:

[Generative AI] is a new technology and a new platform. The knowledge of how to use it is actually still pretty scarce. There aren't that many people that can build systems that are performing at scale reliably and can actually solve an actual issue.

When working with enterprises, you always need to have some services on top because of the complexity of implementation, even with fairly well-understood technology like databases. For artificial intelligence, it's even more necessary in that it requires transforming businesses. You need to help in thinking how the team should perform around the system itself. I do expect the software part in those deployments to increase. The way customization occurs today—fine-tuning, reinforcement learning—this is going to be abstracted away from the enterprise buyer because it's too complex. They should just worry about having adaptive systems that are learning from experience.

In other words, these AI labs are becoming something like high-end consultancies or system integrators. If you are a shipping company trying to automate cargo dispatching, they won't just sell you a token; they will help you build the orchestration, the guardrails, and the custom implementation. This creates "stickiness" and allows them to charge for expertise, not just electricity.

The second camp is OpenAI. They are focusing on the consumer segment and betting on the "Attention" model. As they announced recently, OpenAI is moving into the commercial advertising business. If the subscription revenue from regular users doesn't comfortably cover the $10B-per-GW depreciation bill, you have to find a higher-margin business to bolt onto the side. Advertising, it turns out, is a rather high-margin business, and Google has already proven the model works.

By testing ads at the bottom of ChatGPT responses, OpenAI is admitting that they are no longer just a research lab or a software company; they are a discovery engine. They are fighting Google for the most valuable intent data on the internet. If you ask ChatGPT what kind of running shoes to buy, the recommendation is the utility; the ad for the Nike store next to it is where OpenAI's revenue is.

The Circular Ecosystem

The final observation is that the end-users—the consumers and the companies using AI—aren't yet paying enough to cover the cost of the build-out.

The funding hole that OpenAI and other AI labs are projected to face isn't being bridged by your $20-a-month subscription. It is being footed by the "middle" of the chain: the hyperscalers (Microsoft, Amazon, Google) and the chipmakers (Nvidia).

So what is the endgame for this massive capex spend? The logic echoes previous infrastructure booms, like the fiber optic build-out of the late 1990s. That cycle saw massive investment lead to a bubble and a bust, but the underlying capacity eventually allowed the internet to flourish, enabling the next generation of tech giants.

The current AI build-out is not being funded for immediate, quarterly ROI. It is a play to lay the digital railroads and power grids for the next decade of the economy. Microsoft gives OpenAI billions of dollars in credits and cash; OpenAI uses that money to buy chips from Nvidia; those chips are then installed in Microsoft's data centers so that OpenAI can sell tokens back to the market. The bet is that even if the first wave of applications struggles, the underlying infrastructure will become indispensable.

This brings us to the market's ultimate question (and concern): In this multi-trillion-dollar build-out of physical and digital assets, what is actually capturing what? Is the infrastructure capturing the future value of software, or is the software simply a vehicle to depreciate the infrastructure?

And that is a topic for another day.

More on the AI business model:

  • A business that scales with the value of intelligence (OpenAI)
  • Can AI companies become profitable? (Epoch AI)

On Our Radar

Our Intelligence Desk connects the dots across functions—from GTM to Operations—and delivers intelligence tailored for specific roles. Learn more about our bespoke streams.

Meta Signs $6B Deal with Corning for Data Center Fiber Optics

  • The Story: Meta has signed a multi-year deal worth up to $6 billion with Corning to supply fiber-optic cables for its AI data centers, a move that also supports Corning's expansion of its U.S. manufacturing capacity. (Reuters)
  • The Operations Implication: This large, long-term supply agreement is a move by Meta to de-risk its supply chain for a key component in its massive AI data center build-out. By securing a multi-year, multi-billion-dollar commitment, Meta locks in a reliable supply of high-performance fiber optics, mitigating potential shortages and price volatility, which is essential for the predictable, timely execution of its aggressive infrastructure expansion plans.

Microsoft Challenges Nvidia with In-House Chip and Software

  • The Story: Microsoft has unveiled its second-generation in-house AI chip, "Maia 200," and, crucially, is pairing it with a software toolkit, including Triton, that competes directly with Nvidia's proprietary CUDA software. (Reuters)
  • The Product Implication: This is a move to build a vertically integrated AI stack and directly attack Nvidia's most significant competitive advantage: its CUDA software lock-in. By developing a parallel software ecosystem for its own silicon, Microsoft is attempting to create a viable alternative platform for developers within Azure, a fundamental product strategy to capture more of the AI value chain.

P.S. Tracking these kinds of complex, cross-functional signals is what we do. If you have a specific intelligence challenge that goes beyond the headlines, get in touch to design your custom intelligence.

