
The AI Handholding Economy

Adoption-Impact Gap

We can think of enterprise AI adoption as a nationwide gym membership drive. In 2025, every company signed up. Executives touted their commitment to fitness on earnings calls. Employees logged into the app. Everyone has the membership card. The only problem is, almost nobody is getting stronger. And now, it seems, they are starting to cancel their memberships.

After three years of relentless hype, the data is starting to show something unexpected: AI usage at work is not just plateauing; in some cases, it's actually falling. According to an estimate from The Economist, based on US Census Bureau data, the percentage of Americans using AI at large companies dropped from 12% to 11% in October. Among companies with more than 250 employees, the share of firms reporting no recent AI use has jumped from 62% in February to nearly 69%.

This dip in usage is perhaps the consequence of a deeper problem. A recent BCG survey found that while a majority of employees have used AI, a staggering 60% of companies are not generating any material value from it. A separate Forrester survey showed that just 15% of executives saw profit margins improve due to AI over the last year. The tools are being tested, but they are not yet delivering, leading to what some are calling "AI fatigue."

The reason for this disconnect is simple: the industry is confusing a feature with a revolution. BCG identifies five stages of AI adoption. The first two are "Information Assistance" (a better Google) and "Task Assistance" (a smarter intern). This is where almost everyone is today. The real value—the stuff of investor dreams—happens in Stage Four: "Semiautonomous Collaboration," where AI agents become true digital employees.

But the technology isn't there yet. AI operates on a "jagged frontier." It is a genius at complex tasks but a moron at simple ones. It can be a "Ferrari in math but a donkey at putting things in your calendar," as LMArena's Anastasios Angelopoulos put it. It can write code but can't consistently remember the safety rules for a Canadian railroad. This makes it fundamentally unreliable for the kind of autonomous work that defines a true "agent."

This disillusionment is also playing out in public markets. Last month, shares of Workday plunged nearly 8% after its earnings report. The numbers were fine, but after a year of touting its AI momentum, the company issued flat guidance for future growth. The market had been promised a revolution, but the guidance suggested little more than a modest software update.

This isn't just a Workday problem; it's a sector-wide reality check. The same story has been unfolding at Salesforce, where CEO Marc Benioff's grand vision of an "agentic enterprise" keeps colliding with tepid financial forecasts. It is a pattern now repeating across the software landscape. Workday and Salesforce have seen their stocks fall more than 20% in the past year, alongside a cohort of pure-play SaaS companies like Adobe, Atlassian, and DocuSign, all of which have yet to meaningfully monetize AI at the application level.

The promise of AI was software that would finally automate labor. The reality at the moment is a new, expensive human services industry built to babysit the software. As Reuters reported, both OpenAI and Anthropic are building out teams whose entire job is to provide "handholding" to clients, embedding with companies to make the AI tools actually useful.

This is why the AI boom is stalling. The initial burst of low-hanging fruit—automating simple queries, summarizing documents—has been picked. The next phase, the truly transformative agentic one, is proving far harder and more expensive than the marketing slides suggested.

The agentic revolution was pitched as a product with immense margins. What's being delivered is a high-touch consulting service with human-sized costs. The future of AI, for now, looks suspiciously like a services business.

More on AI Adoption:

  • The Widening AI Value Gap (BCG)
  • The Number of People Using AI at Work Is Suddenly Falling (Futurism)

On Our Radar

Our Intelligence Desk connects the dots across functions—from GTM to Operations—and delivers intelligence tailored for specific roles. Learn more about our bespoke streams.

Intel's AI Bargain Buy

  • The Headline: Intel is in advanced talks to acquire AI chip startup SambaNova Systems for approximately $1.6 billion, a significant discount from its peak valuation, in a move to bolster its AI product offerings. (Bloomberg)
  • ARPU's Take: The key takeaway is the price tag. It is an opportunistic acquisition of a struggling Nvidia competitor at a steep discount. It's a sign that the AI chip startup boom is now a consolidation game, and Intel is using its renewed financial firepower to buy its way into the race. The deep involvement of Intel's CEO as a former investor and chairman of SambaNova makes this a uniquely informed—and potentially complicated—insider deal.
  • The Product Question: Acquiring SambaNova for $1.6B (a ~70% discount from peak) is a "fire sale" acquisition of talent and IP. It suggests Intel is giving up on organically catching Nvidia with its existing Gaudi roadmap and is instead buying a new architecture (Reconfigurable Dataflow Unit) to restart its AI offensive. However, integrating yet another disparate architecture into Intel's oneAPI stack creates immense technical debt and developer confusion, potentially fragmenting its software ecosystem further.

Nvidia's Open Source Power Play

  • The Headline: Nvidia has acquired SchedMD, the company behind the popular open-source AI workload scheduler Slurm, in a move to deepen its control over the AI software ecosystem. (Reuters)
  • ARPU's Take: Nvidia isn't just buying a software company; it's acquiring the "air traffic control" system for the entire AI data center. By embracing and becoming the steward of a critical open-source tool that customers already use and love, Nvidia is making its entire hardware platform stickier and even harder for rivals to displace.
  • The Product Question: This acquisition is a defensive move to protect the CUDA moat. As clusters grow to hundreds of thousands of GPUs, scheduling efficiency becomes the bottleneck. By owning the scheduler, Nvidia can ensure that its hardware utilization remains high, preventing "idle time" that kills ROI for customers. It also allows Nvidia to upsell "premium" enterprise support for Slurm to its DGX Cloud customers, adding another high-margin software revenue stream.

P.S. Tracking these kinds of complex, cross-functional signals is what we do. If you have a specific intelligence challenge that goes beyond the headlines, get in touch to design your custom intelligence.

