2 min read

Amazon and Anthropic Team Up to Build World's Largest AI Supercomputer

Amazon is collaborating with Anthropic, a leading AI company, to construct one of the world's most powerful AI supercomputers, according to a report by WIRED. This mega-project, codenamed "Project Rainier," will be five times larger than the cluster used to build Anthropic's current most advanced model. Upon completion, Amazon anticipates it will be the largest reported AI machine in the world, featuring hundreds of thousands of its latest Trainium 2 AI training chips.

The announcement was made by Matt Garman, CEO of Amazon Web Services (AWS), at the company's re:Invent conference in Las Vegas. The initiative underscores Amazon's growing prominence in generative AI, a sector where it has been making significant investments and technological strides.

Garman also announced the general availability of Trainium 2 chips in specialized Trn2 UltraServer clusters designed for training cutting-edge AI models. AWS projects these new clusters will be 30 to 40 percent more cost-effective than those built on Nvidia's GPUs.

Amazon's $8 billion investment in Anthropic this year, coupled with the release of various AI tools through its AWS Bedrock platform, signals a significant commitment to generative AI. At re:Invent, Amazon also showcased its next-generation Trainium 3 chip, which it says will deliver a fourfold performance increase over its predecessor.

"The numbers are pretty astounding" for the next-generation chip, says Patrick Moorhead, CEO and chief analyst at Moore Insight & Strategy, to WIRED. He attributes Trainium 3's performance gains to advancements in the interconnect between chips, a key element for training large AI models.

While Nvidia currently dominates the AI training market, Moorhead anticipates increased competition in the coming years. Amazon's progress, he notes, "shows that Nvidia is not the only game in town for training."

Amazon also unveiled new tools intended to make generative AI models more reliable and affordable for its customers. These include Model Distillation, a tool for creating smaller, faster, and less expensive models; Bedrock Agents, for managing AI agents that automate tasks; and Automated Reasoning, a tool for verifying the accuracy of chatbot outputs.

Amazon's own line of chips will play a significant role in making its AI software more affordable. "Silicon is going to have to be a key part of the strategy of any hyperscaler going forward," Steven Dickens, CEO and principal analyst at HyperFRAME Research, tells WIRED. He also notes that Amazon has been developing custom silicon for longer than its competitors.

Garman highlighted the growing number of AWS customers moving from AI experimentation to building commercially viable products and services with generative AI. He also emphasized that many of these customers prioritize cost-effectiveness and reliability over pushing the boundaries of AI technology.