Amazon unveils Graviton4 SoC with 96 Arm Neoverse V2 cores and Trainium2 AI chip
Amazon Web Services is launching two new data center chips. The company is introducing its Graviton4 processor, which has 96 Arm Neoverse V2 cores, alongside Trainium2, a chip intended for training AI models.
According to Amazon, Graviton4 performs up to thirty percent better than the current generation of Graviton3 SoCs. The chip has fifty percent more cores than its predecessor, bringing the total to 96. The manufacturer is again using the Arm architecture, this time with Neoverse V2 cores. Memory bandwidth has also increased by seventy percent, the company says: the chip has twelve DDR5 channels supporting speeds of up to 5600 megatransfers per second. Amazon does not say on which process node the chips are manufactured; the previous Graviton3 chips were produced on TSMC's 5nm node.
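For context, those memory figures can be turned into a rough peak-bandwidth estimate. The sketch below assumes each DDR5 channel has a 64-bit (8-byte) data path; that channel width is an assumption for illustration, not a figure Amazon has published.

```python
# Rough theoretical peak memory bandwidth for Graviton4's stated memory setup.
# Assumption: each DDR5 channel is 64 bits (8 bytes) wide; Amazon has not
# confirmed the exact channel width.
CHANNELS = 12
TRANSFER_RATE_MT_S = 5600      # megatransfers per second (DDR5-5600)
BYTES_PER_TRANSFER = 8         # assumed 64-bit channel

peak_gb_s = CHANNELS * TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000
print(f"Theoretical peak bandwidth: {peak_gb_s:.1f} GB/s")  # ~537.6 GB/s
```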
Graviton4 SoCs will be available on AWS in EC2 R8g instances, which are optimized for high memory bandwidth. The tech giant says customers can use the chips for databases and 'large analytics workloads', among other things. The instances are available immediately as a preview; general availability will follow 'in the coming months'. Amazon does not mention a concrete release date.
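As an illustration, once R8g instances reach general availability they should be requested like any other EC2 instance type, for example through boto3's standard EC2 API. The instance size "r8g.large" and the AMI ID in this sketch are placeholders we assume for illustration; Amazon has not listed the exact instance sizes in this announcement.

```python
# Illustrative sketch: launching a Graviton4-based R8g instance via boto3.
# "r8g.large" and the AMI ID are placeholders/assumptions, not values from
# Amazon's announcement.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder: an arm64 AMI
    InstanceType="r8g.large",         # assumed R8g instance size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```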
AWS is also unveiling its Trainium2 chip, which is intended for training AI models such as foundation models and large language models. According to the manufacturer, Trainium2 is up to four times as fast as its predecessor and is said to be twice as energy efficient. The chips should enable developers to train AI models faster and more cheaply. AI company Anthropic has announced that it will build models with Trainium2 chips.
Amazon also announces that, in addition to offering its new AI and data center chips, it will continue to collaborate with Nvidia. The company will offer enterprise customers instances with Nvidia's H200 GPUs, among other things, as well as Nvidia's GH200 Grace Hopper superchips, which combine Nvidia's Arm-based Grace CPU with a Hopper GPU on a single module. Amazon will also host an AI supercomputer built on GH200 chips for Nvidia's own R&D team. Microsoft previously announced a deeper partnership with Nvidia after introducing its first in-house Maia 100 AI chip for Azure.
Amazon Graviton4 (left) and Trainium2 chips. Source: BusinessWire