Amazon Web Services (AWS) is building a massive supercomputing cluster, known as Project Rainier, to power Anthropic's AI development. The system will comprise hundreds of thousands of AI accelerators spread across several US locations and is expected to come online later this year. The largest site, in Indiana, will consist of thirty data centers, each spanning roughly 200,000 square feet and collectively drawing approximately 2.2 gigawatts of power.
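
Those figures imply a power density worth pausing on. The back-of-envelope sketch below uses only the numbers reported above (30 buildings, 200,000 square feet each, 2.2 GW in aggregate) and makes no assumptions beyond dividing them out:

```python
# Back-of-envelope scale check using only the figures reported above.
TOTAL_POWER_GW = 2.2          # reported aggregate draw for the Indiana site
NUM_DATA_CENTERS = 30         # reported building count
SQ_FT_PER_BUILDING = 200_000  # reported floor area per building

power_per_building_mw = TOTAL_POWER_GW * 1_000 / NUM_DATA_CENTERS
watts_per_sq_ft = power_per_building_mw * 1_000_000 / SQ_FT_PER_BUILDING

print(f"~{power_per_building_mw:.0f} MW per building")    # ~73 MW
print(f"~{watts_per_sq_ft:.0f} W per square foot")         # ~367 W/sq ft
```

That works out to roughly 73 MW per building, or on the order of 370 watts per square foot of floor space.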

Project Rainier stands apart in that it uses Amazon's in-house Annapurna AI silicon rather than GPUs, making it the largest deployment of the chips to date. AWS says the full cluster can be used to train a single, unified model, at a scale and performance level beyond anything it has built before.
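
AWS has not published details of Rainier's training stack (Trainium chips are programmed through AWS's Neuron SDK), so the sketch below is only a generic illustration of what training one unified model across many accelerators means: every worker holds a replica of the same model and gradients are averaged on every step. It uses plain PyTorch DistributedDataParallel with a placeholder model, random data, and the gloo backend, all of which are illustrative assumptions rather than anything Rainier actually runs.

```python
# Generic data-parallel training sketch (illustrative only; not AWS's
# Rainier/Neuron stack). Each process holds a replica of the same model,
# and gradients are averaged across all replicas every step, so the
# cluster trains one unified model rather than many independent ones.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Rank and world size are normally injected by the launcher (e.g. torchrun).
    dist.init_process_group(backend="gloo")  # backend choice is an assumption
    rank = dist.get_rank()

    model = torch.nn.Linear(1024, 1024)      # placeholder model
    ddp_model = DDP(model)
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for step in range(10):                   # placeholder training loop
        batch = torch.randn(32, 1024)        # placeholder data shard
        loss = ddp_model(batch).pow(2).mean()
        loss.backward()                      # DDP all-reduces gradients here
        optimizer.step()
        optimizer.zero_grad()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A script like this would typically be launched with torchrun, which starts one process per accelerator and sets the rank and world-size environment variables for each.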

AWS is heavily invested in Anthropic, having backed the company to the tune of $8 billion, and Project Rainier is a pivotal step in keeping the pair competitive with rivals such as OpenAI and xAI.

AWS has withheld many details of the project, but confirms that Anthropic already has access to some of the system's compute. The ultimate scale remains an open question: the build-out could end up spanning multiple sites in the manner of Stargate, though it is more likely to follow a scale-out design similar to AWS's earlier Project Ceiba.

Project Rainier's core computing unit is the Trainium2 accelerator from Annapurna Labs. Its successor, Trainium3, promises further gains and reflects AWS's push to keep its in-house AI silicon at the cutting edge.

Trainium3, built on a more advanced fabrication process and offering better efficiency, could roughly double current performance and would slot naturally into future expansions of the project.