China's AI Math Is Different
Stacking Them Up
The US and China are in the middle of some very delicate trade talks this week, so naturally this was the perfect time for Huawei founder Ren Zhengfei to go on the front page of the People’s Daily and say that he isn’t worried about US sanctions. The core of the US strategy has been to cut off China’s access to the world’s most advanced chipmaking technology, specifically the exotic, bus-sized EUV lithography machines from ASML that are required to make the fastest, most powerful processors. Without those, the theory goes, China’s AI ambitions will hit a wall.
Ren’s counterargument was that if you can’t make one really, really good chip, you can just take a bunch of pretty-good chips and find clever ways to bundle them together. Chip stacking and clustering will get the job done, he said.
This is a fascinating front in the chip war. For decades, the name of the game, per Moore’s Law, was to shrink transistors to cram more of them onto a single, monolithic piece of silicon. The US strategy is based on controlling that process. But the industry is shifting toward a different model, built around “chiplets.”
Think of it this way: a traditional high-end Nvidia chip is like a single, exquisitely engineered skyscraper, where every function is built into one vertically integrated, hyper-efficient structure. What Huawei is proposing—and building—is more like a dense, brilliantly planned city block. You have separate, specialized buildings (chiplets for processing, memory, etc.) that are connected by an incredibly efficient subway system (the advanced packaging). The skyscraper is more elegant, but if you can’t get the permit to build it, maybe the city block gets the job done.
This is the official strategy. Here is the Wall Street Journal on how Huawei is competing at a systems level:
Given such constraints, Huawei executives have talked about focusing on building more efficient and faster systems to leverage their chips, instead of making individual chips more powerful.
In April, Huawei introduced the CloudMatrix 384, a computing system connecting 384 Ascend 910C chips. Some analysts said the system was more powerful than Nvidia’s flagship rack system, which contains 72 of Nvidia’s Blackwell chips, under some circumstances, even though the Chinese system consumes more power.
Of course, there are trade-offs. The city block approach is a brute-force method, and it is terribly inefficient. Analysts estimate Huawei’s systems consume vastly more power to achieve their performance, which is a huge problem in a world where data centers are already straining electrical grids. On the other hand, this might not matter as much as you'd think. Electricity costs in the US are more than double those in China, which softens the blow, for Chinese operators, of running less efficient hardware. If your power bill is less than half of your competitor's, maybe you can afford to be a little wasteful.
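To make that tradeoff concrete, here's a back-of-envelope sketch. The wattages and electricity prices below are illustrative assumptions, not reported figures for Huawei or Nvidia systems; the point is just the shape of the math.

```python
# Back-of-envelope sketch: cheaper electricity partially offsetting
# less efficient hardware. All numbers are illustrative assumptions,
# not figures from the article or from any vendor.

def annual_power_cost(power_kw, price_per_kwh, hours=24 * 365):
    """Electricity cost of running a system flat-out for a year."""
    return power_kw * hours * price_per_kwh

# Assumed: the less efficient system draws 4x the power for comparable
# throughput, but runs where electricity is less than half the price.
efficient_kw, inefficient_kw = 120, 480   # hypothetical rack draws
us_price, cn_price = 0.10, 0.04          # hypothetical $/kWh

us_cost = annual_power_cost(efficient_kw, us_price)
cn_cost = annual_power_cost(inefficient_kw, cn_price)

print(f"US rack:    ${us_cost:,.0f}/year")
print(f"China rack: ${cn_cost:,.0f}/year")
```

Under these assumed numbers, a 4x gap in power draw shrinks to roughly a 1.6x gap in power cost. Cheap electricity doesn't close the efficiency gap, but it makes brute force a lot more affordable.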
But the other, much bigger problem is software. The real moat, and the deepest part of Nvidia’s advantage, isn’t just making the chips; it’s CUDA, the software ecosystem that allows thousands of those chips to work together as a single, coherent brain. This is a problem that Nvidia has spent nearly two decades and billions of dollars solving. Huawei has its own version, called CANN, but it is playing a desperate game of catch-up. Getting the hardware connected is one thing; getting it to think in unison is another thing entirely.
Still, what Ren’s comments make clear is that the chip war is no longer a simple fight over a single chokepoint. The US thought it could win by controlling the means of production for the most advanced silicon. But the battle is now expanding to chip design, packaging, and software. It's all very messy, and it’s not at all clear that anyone has a winning move.
Robotic Accountants
For a while now, the big idea in enterprise AI has been the “copilot.” You give your employees a chatbot, it sits next to them in their software, and it helps them do their jobs. It can summarize a long email chain, or maybe draft a reply. It’s a very smart intern who is good at looking things up. This is fine, and companies have been spending a lot of money on it.
But this week we got a glimpse of the next, more interesting phase. RSM, the fifth-largest accounting firm in the US, announced plans to invest $1 billion in artificial intelligence over the next three years. This is not for more chatbots. It’s to build and deploy “AI agents” to automate entire workflows.
An AI agent is not your helpful intern. It’s the junior associate you can delegate the whole project to. RSM found that giving its people AI copilots led to a productivity boost of maybe 5% to 25%. But having AI agents autonomously run a multi-step compliance review? The boost was up to 80%.
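The gap between those numbers is worth spelling out. Reading an "X% boost" as X% more output per hour, here's the rough arithmetic on a hypothetical task; the 100-hour baseline is a made-up illustration, not an RSM figure.

```python
# Rough arithmetic on the reported productivity ranges. An "X% boost"
# is interpreted as X% more output per hour. The 100-hour baseline
# task is an illustrative assumption, not a figure from RSM.

def hours_needed(baseline_hours, boost):
    """Hours to finish the same work at (1 + boost) productivity."""
    return baseline_hours / (1 + boost)

baseline = 100  # hypothetical 100-hour compliance review

for label, boost in [("copilot, low end", 0.05),
                     ("copilot, high end", 0.25),
                     ("agent", 0.80)]:
    print(f"{label:>17}: {hours_needed(baseline, boost):.1f} hours")
```

On these assumptions, the copilot turns a 100-hour review into an 80-to-95-hour review; the agent turns it into a 56-hour one. That's the difference between a nicer tool and a smaller headcount.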
This is the sort of thing that gets a chief financial officer to write a billion-dollar check. It’s not about vague promises of future intelligence; it’s about a clear, immediate, and massive return on investment. The agent isn’t helping the human do the audit checklist faster; the agent is just doing the 100-page audit checklist. The human, presumably, does something else.
This is where the AI hype cycle starts to feel a little more real. While the big labs like OpenAI and Google are in an arms race to build ever-more-powerful models, the real action for most businesses isn’t about building their own AGI. It’s about taking those powerful models and stitching them into the boring, essential, and often tedious plumbing of their operations. The AI isn't just a feature in an app anymore; it’s becoming the workflow itself. This is a big reason why OpenAI has now hit $10 billion in annual recurring revenue, less than three years after launching ChatGPT.
Of course, the stakes also get much higher. If your chatbot hallucinates a poem about bananas in the style of Shakespeare, that’s funny. If your accounting agent hallucinates that your client is compliant with a Sarbanes-Oxley disclosure, that is less funny. The risk moves from generating amusingly wrong text to taking catastrophically wrong actions. Trust, reliability, and security become exponentially more important, which is a problem, because we still don’t really understand how these things work on the inside.
Still, the potential for an 80% productivity gain is a powerful incentive. The tech industry has been selling the dream of autonomous agents for a while now. It seems the accounting industry is ready to start buying it.
Apple's Gambit
On its face, Apple’s big AI reveal at its Worldwide Developers Conference this week felt… well, a little modest. After a year of watching rivals release mind-bending AI models, Apple showed off live call translation and a feature to help you find a jacket you saw online. This is fine. The market seemed to agree, with the stock closing down on the day of the event. Here’s Reuters reporting on the reaction:
"In a moment in which the market questions Apple's ability to take any sort of lead in the AI space, the announced features felt incremental at best," Thomas Monteiro, senior analyst at Investing.com, said. Compared with what other big AI companies are introducing, he added, "It just seems that the clock is ticking faster every day for Apple."
But the really interesting thing wasn't a consumer feature at all; it was a strategic shift. For the first time, Apple is letting outside developers play with its foundational AI models.
This is a big deal. We’ve talked a lot around here about moats, and in the world of AI hardware, the moat to end all moats is Nvidia’s CUDA. For almost twenty years, if you wanted to do serious, high-performance computing on a GPU, you wrote your code for CUDA. An entire generation of AI researchers and developers grew up on it. It is an ecosystem so vast and entrenched that competitors like AMD are still, to this day, trying to build a bridge across it, mostly unsuccessfully.
Apple, having shown up very late to the AI party, does not have a CUDA-like moat. It doesn't even have a particularly impressive AI model. What Apple does have is an even bigger, more traditional moat: more than two billion active devices and an App Store that functions as the mandatory gateway to all of them. And so, Apple is now trying to leverage its existing, gargantuan moat to build a new, AI-specific one. The plan is simple: give developers the tools to bake Apple’s AI into their apps, and hope they build something so compelling that nobody wants to leave the iPhone, AI laggard or not.
Of course, there’s a catch. Developers aren’t getting the keys to Apple’s high-powered, cloud-based AI. Apple executives confirmed that developers will have access only to the company's smaller, on-device version of Apple Intelligence. This model, which runs directly on the iPhone or Mac, is about 3 billion parameters, a rough measure of its capability, and, crucially, does not tap into the special data centers Apple has built for its more intensive AI efforts. A model of this size, while efficient, cannot handle the more complex, multi-step reasoning that massive cloud-based models from competitors can. For the really hard stuff, Apple itself is still calling up Sam Altman at OpenAI to borrow ChatGPT.
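Why does a model have to be this small to live on the device? Memory, mostly. Here's the rough math, assuming standard quantization sizes; the framing is illustrative, not Apple's published specs.

```python
# Why ~3B parameters is an on-device model: rough memory footprint.
# Bytes-per-parameter figures are standard quantization sizes; the
# framing is illustrative, not Apple's published specifications.

PARAMS = 3e9  # approximately 3 billion parameters

for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{label}: ~{gb:.1f} GB of weights")
```

At 4-bit quantization the weights fit in roughly 1.5 GB, small enough to load on a phone alongside everything else it's doing. A frontier-scale model, hundreds of billions of parameters, would blow past a phone's memory by orders of magnitude, which is why the heavy reasoning stays in the data center.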
This puts developers in a slightly awkward position: they can use Apple’s simple, fast, privacy-focused AI for some things, but for state-of-the-art reasoning, they’ll have to make their own deal with OpenAI or Google, just like everyone else.
It’s a fascinating corporate chess match. Nvidia has the full-stack hardware and software moat. OpenAI, via Microsoft, has a firehose of powerful APIs and enterprise distribution. Google is trying a let-a-thousand-flowers-bloom approach with its own models and a push for an open agent ecosystem. And Apple is doing the most Apple thing possible: ignoring the fact that its core technology is behind, and instead betting that the power of its platform and the loyalty of its developers are strong enough to win the next war anyway. It's a classic playbook. We'll see if it works this time.
The Scoreboard
- AI: Apple Is Opening Its AI to Developers (ARPU)
- SaaS: Salesforce blocks other software firms from using Slack data (Reuters)
- Cloud Infra: Amazon to Invest $20 Billion in Pennsylvania to Expand Cloud Infrastructure (Reuters)
- Robotics: Hitachi to Explore Purchases in Factory Automation (Nikkei Asia)
- Venture: Why VC Firm Eight Roads Is Giving Up on China (ARPU)
Enjoying these insights? If this was forwarded to you, subscribe to ARPU and never miss out on the forces driving tech: