The Business Model of God
Sign up for ARPU: Stay informed with our newsletter - or upgrade to bespoke intelligence.
Programming note: ARPU will be off Monday, back on Wednesday.
OpenAI's Reality Check
The story of OpenAI's original business plan is already the stuff of Silicon Valley legend. Back in 2019, Sam Altman proposed a simple three-step process: build a God-like intelligence, ask it for a monetization strategy, and then execute that strategy. Here's Altman on OpenAI's revenue model at the time:
The honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue. We have made a soft promise to investors that once we've built this sort of generally intelligent system, basically, we will ask it to figure out a way to generate an investment return for you.
It sounds like an episode of Silicon Valley, it really does... but it is what I actually believe is going to happen...
But if AGI is on the table, that is actually ok. If we have to take one thousandth of 1% of the value that we create and return to investors and then figure out how to share the rest equally among the world... that feels okay.
It was a great plan because it outsourced the difficult part of capitalism—having a viable commercial strategy—to a machine that didn't exist yet.
Six years later, the machine exists, but the plan has hit a snag. The current level of intelligence isn't quite ready to solve the mysteries of the universe, but the server bills are due immediately.
And the one product meant to pay those bills is suddenly looking vulnerable. This week, as the Wall Street Journal reported, OpenAI declared a "code red."
This is not a code red because the AI has become sentient and is trying to seize control of the nuclear codes. It is a code red because ChatGPT is starting to lag behind the competition, with Google Gemini beating it both on the LMArena leaderboard and in user reviews.
Altman told employees that the company needs to pause its ambitious, sci-fi projects—like autonomous agents for shopping or the "Pulse" personal assistant—to focus on the mundane reality of the "day-to-day experience." The company needs to improve speed, reliability, and personalization. It turns out that before you can build a digital deity that reshapes the fabric of reality, you first have to build a chatbot that doesn't hallucinate the weather report.
Why does a dip in chatbot quality warrant a crisis meeting? Because OpenAI cannot afford to lose its lead if it wants to survive the economics of AI. The panic is driven by a specific, terrifying number.
To turn a profit by 2030, OpenAI reportedly needs to grow its revenue to $200 billion, according to its own internal projections. For context, that is roughly the size of Google's entire Search business (FY2024: $198 billion). But Google has a distinct advantage: it already has a profitable search monopoly to subsidize its AI experiments. OpenAI has a massive burn rate, a competitor in Gemini that is rapidly stealing its user base, and AI models that are increasingly being commoditized.
And when you need to generate Google-sized revenue without Google's established monopoly, you eventually start to mimic its tactics.
This reality is forcing a convergence that is deeply ironic. Earlier this year, Wired noted that OpenAI's new shopping feature "shares many similarities to Google Shopping," complete with lists of retailers and buy buttons. While OpenAI insisted that these weren't paid placements, the trajectory is clear.
OpenAI's code red is a tacit admission that before you can monetize God AI, you have to win the chat war. The company started with a mission to create a new form of intelligence, but the tech competition is forcing it to pause its grandest revenue goals just to make sure its core product is faster and smarter than Google's.
And the company has realized that "intelligence" alone is not a moat. Google has intelligence, too, plus distribution, plus an ecosystem. OpenAI set out to build a research lab that would eventually gift the world God-level intelligence. Instead, it finds itself trapped in the most brutal, traditional capitalist fight of all: trying to keep a consumer product from churning to a competitor.
More on Large Language Models:
- OpenAI has trained its LLM to confess to bad behavior (MIT Technology Review)
- Large Language Models Will Never Be Intelligent, Expert Says (Futurism)
- AWS doubles down on custom LLMs with features meant to simplify model creation (TechCrunch)
On Our Radar
Our Intelligence Desk connects the dots across functions—from GTM to Operations—and delivers intelligence tailored for specific roles. Learn more about our bespoke streams.
Nvidia's Inference Moat
- The Headline: Nvidia has released data showing its newest AI server can run popular new "mixture-of-experts" models, including those from Chinese AI labs, 10 times faster than its previous hardware, reasserting its dominance in the growing AI inference market. (Reuters)
- ARPU's Take: This is a classic Nvidia power play. After a new, more efficient AI model architecture ("mixture-of-experts") emerged that threatened to reduce reliance on its training hardware, Nvidia has turned the tables. It has demonstrated that its complete server system makes these new models run even better for inference, effectively absorbing a competitive threat and turning it into a new strength.
- The Product Question: This data release solidifies Nvidia's system-level design as its key competitive differentiator. For product leaders at rival chipmakers, this establishes a new and much higher bar for competition. The strategic challenge is no longer just to design a single, powerful AI chip, but to engineer an entire, integrated server platform with high-speed interconnects, a far more complex and capital-intensive endeavor.
AI Capex Skeptic
- The Headline: IBM CEO Arvind Krishna has publicly questioned the financial viability of the massive, gigawatt-scale AI data centers being built by hyperscalers, arguing that the multi-trillion dollar capital expenditure required has no clear path to profitability. (Data Center Dynamics)
- ARPU's Take: This is a calculated jab from an industry veteran. Krishna is using the sober logic of a traditional enterprise tech company to pour cold water on the "growth at all costs" AGI hype cycle. It's both a genuine financial critique and a strategic move to position IBM as the sensible, ROI-focused alternative in the AI market.
- The Operations Question: This public skepticism from a major industry leader injects a much-needed dose of financial discipline into the AI hype cycle. For investors and boards overseeing the multi-trillion dollar AI buildout, this forces a critical debate: are these massive data centers a sound, long-term infrastructure investment with a clear path to ROI, or a speculative, venture-style bet on a technological revolution with an unproven business model?
P.S. Tracking these kinds of complex, cross-functional signals is what we do. If you have a specific intelligence challenge that goes beyond the headlines, get in touch to design your custom intelligence.
You received this message because you are subscribed to ARPU newsletter. If a friend forwarded you this message, sign up here to get it in your inbox.