Nvidia announces A100 accelerator with 80GB HBM2e
Nvidia has announced a version of its A100 high-performance computing accelerator that features 80GB of HBM2e memory, twice the capacity of the 40GB A100 announced in May.
The Nvidia A100 80GB offers a memory bandwidth of more than 2 terabytes per second, according to Nvidia. The company compares it against the 40GB A100 in a number of HPC benchmarks to indicate how applications can benefit from the doubled memory. For example, FP16 throughput on deep learning recommendation models triples, Nvidia claims, and the Quantum Espresso materials-simulation suite runs 1.8 times faster thanks to the extra memory.
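The "more than 2 terabytes per second" figure follows directly from the width and speed of the HBM2e interface. A back-of-the-envelope sketch, assuming a 5120-bit memory bus and an effective per-pin data rate of about 3.2 Gbit/s (these figures are assumptions based on typical HBM2e configurations, not official Nvidia specifications):

```python
# Rough HBM2e bandwidth estimate (assumed figures, not official specs):
# a 5120-bit memory interface at ~3.2 Gbit/s effective per pin.
BUS_WIDTH_BITS = 5120   # five active HBM2e stacks x 1024 bits each (assumption)
DATA_RATE_GBPS = 3.2    # effective per-pin data rate in Gbit/s (assumption)

# Total bandwidth: pins x rate, divided by 8 to convert bits to bytes.
bandwidth_gb_s = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # 2048 GB/s, i.e. just over 2 TB/s
```

This lands at roughly 2 TB/s, matching the order of magnitude Nvidia quotes; the exact shipping clock may differ slightly.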
Nvidia builds eight of the accelerators into its DGX A100 systems and four into the new DGX Station A100 workgroup server. Thanks to the Multi-Instance GPU feature, which can split each A100 into up to seven isolated instances, a single DGX Station A100 can expose up to 28 GPU instances, letting multiple users run applications in parallel.
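The 28-instance figure is simple arithmetic over Nvidia's Multi-Instance GPU (MIG) partitioning: four A100s, each split into up to seven instances. A minimal sketch of that accounting, assuming the smallest MIG profile divides the 80GB into eight memory slices (seven of which back usable instances):

```python
# How the DGX Station A100's "up to 28 GPU instances" adds up via MIG.
GPUS_PER_STATION = 4           # four A100 80GB accelerators per DGX Station A100
MAX_MIG_INSTANCES_PER_GPU = 7  # MIG splits one A100 into at most seven instances
MEMORY_PER_GPU_GB = 80

instances = GPUS_PER_STATION * MAX_MIG_INSTANCES_PER_GPU

# Assumption: in the smallest MIG profile the 80GB is carved into eight
# equal memory slices, so each of the seven instances gets a ~10GB slice.
memory_per_instance_gb = MEMORY_PER_GPU_GB // (MAX_MIG_INSTANCES_PER_GPU + 1)

print(instances, memory_per_instance_gb)  # 28 10
```

In practice MIG also supports larger profiles that trade instance count for more memory and compute per instance.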
According to Nvidia, other manufacturers such as Atos, Dell, Fujitsu, Hewlett Packard Enterprise, Lenovo and Supermicro will launch their own systems with HGX A100 boards in early 2021, which can accommodate four or eight A100 80GB accelerators.
Nvidia DGX Station A100