AI's Progress Stalls as OpenAI, Google and Anthropic Hit Roadblocks
Despite years of rapid advancements in artificial intelligence (AI), three of the leading companies in the field – OpenAI, Google, and Anthropic – are facing unexpected hurdles in their efforts to develop more sophisticated models, reports Bloomberg.
OpenAI's latest model, known internally as Orion, failed to meet the company's performance expectations. While the model was initially expected to significantly surpass previous versions of the technology behind ChatGPT, it fell short in key areas, particularly in answering coding questions outside its training data.
"Orion is so far not considered to be as big a step up from OpenAI’s existing models as GPT-4 was from GPT-3.5," people familiar with the matter told Bloomberg.
Google's upcoming iteration of its Gemini software is also facing challenges, according to three sources. Anthropic, meanwhile, has encountered delays in releasing its long-awaited Claude model, 3.5 Opus.
Several factors are contributing to these setbacks. The companies are struggling to find fresh sources of high-quality, human-made training data that can be used to build more advanced AI systems. Additionally, the tremendous costs associated with developing and operating new models are raising questions about whether modest improvements justify the investment.
"The AGI bubble is bursting a little bit," said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, to Bloomberg. "It's become clear that 'different training approaches' may be needed to make AI models work really well on a variety of tasks."
OpenAI is currently working on post-training for Orion, a process that involves incorporating human feedback to improve responses and refine the model's interaction with users. However, the model is not yet at the level OpenAI desires for public release, and the company is unlikely to roll out the system until early next year.
These challenges raise concerns about the validity of the "scaling laws" theory, which posits that more computing power, data, and larger models will inevitably lead to significant advancements in AI capabilities.
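For context, the scaling laws discussed here are usually expressed as empirical power laws fit to training runs. One commonly cited form, from OpenAI's 2020 scaling-laws research (not cited directly in this article, so the constants below are illustrative rather than drawn from the reporting), relates a model's test loss L to its parameter count N:

```latex
% Illustrative power-law form of a neural scaling law.
% N  = number of model parameters
% N_c, \alpha_N = empirically fitted constants (values from
% Kaplan et al., 2020, shown only as an example)
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076
```

Because the exponent is small, each constant-factor reduction in loss requires a roughly tenfold increase in parameters (and correspondingly more data and compute), which is why diminishing returns at the frontier put pressure on the theory: the relationship is a fitted curve, not a guarantee that it holds at ever-larger scales.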
The setbacks also cast doubt on the feasibility of achieving artificial general intelligence (AGI), a hypothetical AI system that would match or exceed human intelligence across various intellectual tasks.
"People call them scaling laws. That’s a misnomer," said Dario Amodei, CEO of Anthropic, in a recent podcast. "They’re not laws of the universe. They’re empirical regularities. I am going to bet in favor of them continuing, but I’m not certain of that."
The companies are exploring alternative approaches to address the challenges, including leveraging partnerships with publishers for high-quality data and hiring experts to label data related to specific fields of expertise. They are also experimenting with synthetic data, but this approach has its limitations.
"It is less about quantity and more about quality and diversity of data," said Lila Tretikov, head of AI strategy at New Enterprise Associates. "We can generate quantity synthetically, yet we struggle to get unique, high-quality datasets without human guidance, especially when it comes to language."
Despite these challenges, AI companies are continuing to invest heavily in developing larger and more sophisticated models. However, the rate of progress is uncertain, and the focus is shifting to finding new use cases for existing models.
"We will have better and better models," wrote OpenAI CEO Sam Altman in a recent Reddit AMA. "But I think the thing that will feel like the next giant breakthrough will be agents."