Foxconn Now Makes More From AI Servers Than iPhones
The Great AI Re-platforming
For fifteen years, the story of Foxconn has been, in essence, the story of the iPhone. The Taiwanese giant became the world's indispensable manufacturer by mastering the intricate, high-volume assembly of Apple's flagship product. Its fortunes rose and fell with the iPhone supercycle. This quarter, for the first time ever, that story changed. Foxconn earned more money building AI servers than it did from its Apple business.
This is not just a shift in product mix; it is a symbolic turning point for the entire global technology hardware ecosystem. Foxconn, as the world's manufacturing barometer, is providing the first concrete proof that the economic center of gravity in tech has moved from the consumer smartphone to the industrial-scale AI data center.
The divergence in the company's business is stark. While Foxconn downgraded its full-year outlook for consumer electronics from "flattish" to a "decline," its AI business is exploding. AI server revenue grew more than 60% in the second quarter and is projected to grow over 170% in the third, on its way to surpassing $33 billion for the full year.
The pivot is also a masterclass in geopolitical agility. The biggest risk for a global manufacturer is no longer a weak product cycle, but unpredictable policy. As Foxconn's rotating CEO Kathy Yang explained on the company's investor call:
The real challenge is not about tariff itself, but the uncertain variability of the policy, which is a real test of a company's agility. The manufacturing sector cannot be moved immediately. It needs planning in advance.
This is the new reality that Foxconn was built for. With 230 campuses worldwide and logistics teams in 25 countries, the company can re-route supply chains and shift production to navigate the shifting winds of the US-China tech war. It is simultaneously a key partner in Trump's "Made in America" push—building AI server capacity in Houston—while also serving the Chinese market through its Shenzhen-listed subsidiary.
Ultimately, this is a story about re-platforming. The smartphone, the dominant platform of the last era, is maturing. The new platform, the AI data center, is in a state of explosive, demand-fueled growth. Foxconn, as the biggest supplier to both Nvidia and Apple, has a unique, front-row seat to this transition. While its Apple business faces headwinds, its role as the primary assembler of Nvidia's powerful GB200 server racks has become its new engine. This milestone is not just about Foxconn; it's a leading indicator for the entire hardware economy.
The Curse of Scale
The essential fact of modern AI is that it is incredibly expensive to run. You can build a model that seems magical, but every time a user asks it a question, it costs you a little bit of money. If you have 700 million users asking questions every week, it costs you a lot of money.
And so we have the peculiar case of OpenAI's GPT-5. The launch was supposed to be a victory lap, a moment for the undisputed leader to cement its status. Instead, the rollout has been, in CEO Sam Altman's words, "a little more bumpy than we hoped for." Users have flooded social media with complaints that the new, supposedly smarter model has a "colder tone," feels "bland," and struggles with simple tasks. Here is the Wall Street Journal on the user reaction:
Juliette Haas, an account-strategy coordinator at a communications and crisis-management agency, primarily uses the paid version of ChatGPT for brainstorming and to complete administrative tasks such as creating a to-do list.
With the release of GPT-5, she decided to revisit a business-development prompt to figure out which companies or individuals at her firm would require her support. With GPT-4, the response suggested that she build strong industry connections and emphasized the importance of relationship building. GPT-5 delivered a checklist.
"The AI treated finding distressed companies more like a data-science problem rather than understanding the fundamental considerations of relationships and timing," said Haas.
This is a fun little paradox. An AI company releases its most advanced model ever, and the user base complains that it feels dumber. You could look at this and think "software bug," but the more interesting answer is "economics." The turbulence around GPT-5 is the first major, public symptom of a fundamental problem: the AI revolution is slamming headfirst into the hard physical limits of compute and cost.
The issue is "inference"—the technical term for the computational work required for every user query. While training a model is a massive one-time expense, inference is a recurring, operational cost that scales with every single one of OpenAI's 700 million weekly users. This creates a brutal trade-off. Every "creative" or "warm" response that users loved in older models requires more processing power, and therefore costs more to generate. At a planetary scale, those fractional costs add up to billions of dollars. The new 200-question weekly limit is a direct form of rationing. The "colder personality" isn't a bug; it's a feature of a system designed to be more economically efficient at an unprecedented scale.
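The scale of this trade-off is easy to underestimate, so here is a back-of-envelope sketch of how per-query inference costs compound. The 700 million weekly users figure comes from the article; the per-query cost and query volume are purely illustrative assumptions, not OpenAI's actual numbers:

```python
# Back-of-envelope inference cost model.
# Only WEEKLY_USERS comes from the article; the other
# two figures are illustrative assumptions.

WEEKLY_USERS = 700_000_000         # ~700M weekly users (from the article)
QUERIES_PER_USER_PER_WEEK = 20     # assumed average usage
COST_PER_QUERY_USD = 0.002         # assumed blended compute cost per response

weekly_cost = WEEKLY_USERS * QUERIES_PER_USER_PER_WEEK * COST_PER_QUERY_USD
annual_cost = weekly_cost * 52

print(f"Weekly inference cost: ${weekly_cost:,.0f}")   # $28,000,000
print(f"Annual inference cost: ${annual_cost:,.0f}")   # $1,456,000,000
```

Even at a fifth of a cent per response, the assumed numbers land north of a billion dollars a year in pure operating cost, and a "warmer," more compute-hungry response style multiplies that figure directly.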
The obvious solution is to simply buy more computers. But the problem is that everyone else is also trying to buy more computers. The AI industry is in an all-out war for two things: Nvidia's specialized chips and the massive amounts of energy needed to power them. Supply for both is severely constrained. The demand is so high that companies face multi-year waits for grid access and are now in the business of building their own power plants, as seen with the $500 billion "Stargate" project. Even OpenAI's billions cannot magic away these physical constraints.
A resource-constrained leader, of course, creates an opening for competitors. Niche players like Anthropic can focus their expensive compute on smaller, high-value user groups—like coders and enterprise clients—who are willing to pay a premium for consistent, high-quality performance. While OpenAI is forced to engineer a product that is "good enough" for the masses, rivals can capture the classes.
The GPT-5 stumble, then, reveals that the biggest challenge for AI is no longer just research. It is now an industrial-scale engineering and supply chain problem. The struggle is not just about creating the world's most advanced AI; it's about figuring out how to deliver it to nearly a billion people without going bankrupt in the process.
The Scoreboard
- Social Media: Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info (Reuters)
- Cloud Infra: CoreWeave tanks 20% after posting wider-than-expected loss ahead of lock-up expiration (CNBC)
- Internet: Google faces loss of Chrome as Perplexity bid adds drama to looming breakup decision (CNBC)