
The Overconfidence Engine


Robot Forecasters

One thing you might use AI for is to get stock tips. You have a powerful new technology that has digested the entire internet; surely it has some thoughts on where Nvidia is going? It seems some researchers, writing in Harvard Business Review, had the same idea, so they ran a fun experiment. They took a few hundred business executives, showed them a chart of Nvidia’s soaring stock price, and asked them to predict where it would be in a month. Then they split the execs into two groups. One group got to talk it over with their human peers. The other group got to consult with ChatGPT.

You can probably guess what happened next. The humans got more cautious; the bots got more bullish. But what’s really great is that the study found that consulting ChatGPT actually made the executives worse forecasters. The AI-assisted group was not only more optimistic; its revised predictions were less accurate than the participants’ own initial guesses. Meanwhile, the executives who just talked to other humans got better.

So why did the humans get more cautious while the AI got more bullish? The study's authors had a few ideas:

The act of discussing with peers introduced a different set of biases and behaviors, generally pushing toward caution and consensus. In a group discussion, individuals hear diverse viewpoints and often discover that others’ expectations differ from their own. In our peer groups, this dynamic often led to moderating extreme views and finding a middle ground.

Additionally, in professional settings, no one wants to be the person with an absurdly bullish forecast. There is a bit of a “don’t be the sucker” mentality, because executives know that unbridled optimism can look naïve. In our view, this may have created a spiral of skepticism in the group setting: each person, consciously or not, trying not to appear too rosy-eyed, resulting in collectively lower forecasts.

The AI, on the other hand, feels no social pressure. It has no fear of being the sucker. It just looks at a line going up, spits out a bunch of smart-sounding reasons why, and figures it will keep going up. And because it presents its analysis with a blizzard of supporting data and a supremely confident tone, it convinces the humans to be overconfident too.

This is amusing, but it’s also a problem, because we are in the middle of a truly staggering investment cycle based on the premise that this technology will revolutionize every business. Jensen Huang talks about a multitrillion-dollar opportunity and a "1 billion X" increase in the need for inference compute. And yet, when you put the AI in a room with a seasoned executive and ask it to perform a core executive function—making a reasoned judgment about the future—it fails.

Of course, AI isn’t just being used as a flawed C-suite advisor. It’s also being deployed on the front lines, where its effects are equally strange. There was a great story in Bloomberg this week about call center workers who are now being mistaken for bots, because they are trained to follow rigid scripts and have their accents algorithmically altered. The AI is very good at being a robot, so humans are being trained to act more like robots to work alongside it.

And maybe that’s the real takeaway here. The executives in the HBR study were led astray by an AI that was confidently, articulately, and convincingly wrong. Meanwhile, their employees are being coached to emulate those same robotic qualities in the name of efficiency. It’s a strange sort of equilibrium. The people at the top are using AI to become worse prognosticators, while the people at the bottom are being trained to sound just like the thing that’s making their bosses worse.


The Design Chokepoint

Something happened in the US-China chip war this week: the US blinked. Or, more accurately, it escalated and then backed down, all within about a month. In late May, the White House imposed export curbs on the highly specialized software used to design semiconductors. On Thursday, it lifted them. Shares of the companies that make this software, Synopsys and Cadence Design Systems, which had tumbled when the curbs were announced, jumped on the news. Here is Reuters:

Shares of Synopsys and Cadence Design Systems jumped on Thursday after the U.S. lifted export curbs on chip design software to China, easing uncertainty around access to the crucial market.

The restrictions, announced in late May, had essentially cut off the market that brings over 10% of revenue for the industry's major players, hitting forecasts and knocking down shares.

The export resumption means both the companies will only lose one month of revenue in the current quarter, Mizuho analysts said.

Okay, so what is happening here? The US has been methodically working to control the chokepoints of the global semiconductor industry to slow China’s technological progress. We’ve talked a lot about the big, obvious chokepoints: the EUV lithography machines from ASML that are required to make the most advanced chips, and the AI accelerators from Nvidia that are required to train the most advanced models. The US has restricted China’s access to both.

The logical next step, if you are a chokepoint enthusiast, would be to go after the software. Electronic Design Automation, or EDA, is the incredibly complex software used to design and verify the digital blueprint of a modern chip. If you want to design an iPhone processor with billions of transistors, you need it. And the market for it is basically an oligopoly controlled by Synopsys, Cadence, and Siemens. So, banning its export to China seems like a straightforward way to hamstring Beijing’s efforts to achieve semiconductor self-sufficiency.

Except, whoops, it turns out this particular chokepoint is a bit of a double-edged sword. One problem is that China is a big market for these American software companies, accounting for over 10% of their revenue. Forcing them to abandon it hurts their bottom line and spooks their investors.

Another, more subtle problem is that the US government doesn’t just get to decide what happens in the business world. The market for corporate control is a powerful force, and right now, Synopsys is trying to close a massive $35 billion acquisition of engineering software firm Ansys. A deal that big needs regulatory approval from around the world, including China. By banning Synopsys from selling into China, the US government handed Chinese regulators a perfect, ready-made excuse to retaliate by blocking the Ansys deal. That seems… suboptimal.

And, of course, there is the most obvious problem of all: if you want to make absolutely sure that China pours billions of dollars into creating its own domestic EDA software industry to compete with yours, the best way to do that is to ban them from buying your software.

So the US seems to have concluded that this particular move was causing more headaches for its own champions than for its adversary. It’s a good reminder that weaponizing the nodes of a global supply chain is a messy business. This isn't a broad thaw in the chip war; the restrictions on advanced chips and manufacturing equipment are still very much in place. This was just a tactical retreat on one, very specific front where the US realized it was in danger of choking itself.


The Scoreboard

  • AI: Sutskever to Lead Safe Superintelligence After Meta Poaches CEO Gross in AI Talent War (Reuters)
  • AI: The Grammys Chief on How AI Will Change Music (WSJ)
  • Software: Microsoft to Lay Off About 9,000 Workers (WSJ)
  • Consumer: Apple’s China iPhone Sales Grow for the First Time in Two Years (CNBC)

Enjoying these insights? If this was forwarded to you, subscribe to ARPU and never miss out on the forces driving tech.