Nvidia CEO: AI's Next Frontier is Reasoning, Requiring "Hundred Times More Computing"

The artificial intelligence landscape is undergoing a fundamental shift, moving beyond simply generating text and images to tackling complex reasoning and problem-solving, according to Nvidia CEO Jensen Huang. This transition, he argues, will require a massive increase in computing power, driving unprecedented demand for Nvidia's AI infrastructure.

Speaking with Jim Cramer on CNBC's "Mad Money," Huang contrasted the current state of AI with its future: "Last year was all about Gen-AI. This year it's all about reasoning and solving problems." This new era of AI won't just mimic human writing; it will break down problems step-by-step, consider multiple options, and even verify its own answers – much like a human would.

DeepSeek R1: A Harbinger of the Reasoning Revolution

Huang pointed to the Chinese AI startup DeepSeek's R1 model as a prime example of this shift. Initially, the market misinterpreted R1's efficiency, triggering a selloff that erased roughly $600 billion from Nvidia's market capitalization. Huang emphatically refuted that interpretation, calling R1 "fantastic" and explaining that, as "the first open-sourced reasoning model," it actually consumes "a hundred times more compute" than previous generative models.

This increased computational demand stems from the nature of reasoning itself. Unlike earlier models that primarily relied on pattern recognition and statistical correlations from vast datasets, reasoning models like R1 actively "think" through problems. They generate intermediate steps, consider different solution paths, and even perform self-checks to ensure the accuracy of their answers.

"It breaks the problem down step by step. It asks itself while it's thinking," Huang explained. "It comes up with several different options for the answer. It might actually verify that the answer is correct."

Beyond Generative AI: A Broader Definition of Reasoning

The industry's definition of "reasoning," as exemplified by DeepSeek R1 and OpenAI's o1, focuses on "chain-of-thought reasoning" – breaking down problems into smaller, manageable steps. However, as experts point out, human reasoning encompasses a much wider range of cognitive abilities, including deductive, inductive, analogical, and causal reasoning.

While current "reasoning models" demonstrate impressive capabilities in areas like math, coding, and logic puzzles, they also exhibit "jagged intelligence," excelling at some tasks while failing spectacularly at others. This suggests that while these models are making strides in specific forms of reasoning, they are still far from achieving the flexible, generalizable reasoning abilities of humans.

The Race for Reasoning

The shift towards reasoning AI is intensifying the competition among tech giants. While OpenAI and DeepSeek have been at the forefront of this trend, other companies are quickly joining the race. Anthropic recently released Claude 3.7 Sonnet, which it describes as its first "hybrid reasoning model," claiming superior performance in areas like finance and coding. Amazon is reportedly developing its own reasoning model under the "Nova" brand, aiming for a "hybrid" architecture and cost-efficiency. Even Microsoft, a major backer of OpenAI, is reportedly developing its own in-house reasoning models (referred to as MAI) to compete with OpenAI and potentially offer them to developers.

The Implications of Reasoning AI

The rise of reasoning models will have broad implications, both for the kinds of problems AI can tackle and for the infrastructure required to run it.

OpenAI research lead Noam Brown has suggested that reasoning models could have arrived years earlier, had researchers known the right algorithms and approaches sooner.

The rapid advances in AI capability are also raising concerns. While many believe these reasoning abilities are genuine, skeptics such as Shannon Vallor argue that what the models display is "meta-mimicry": they imitate the outward form of human reasoning without actually performing it.

Regardless of where that debate lands, the progress is clear: reasoning models are advancing rapidly, and with them, as Huang argues, the demand for far more computing power.