Nvidia announces H200 GPU with HBM3e memory

Nvidia has announced the H200, a GPU that could play an important role in the future of deep learning and large language models such as OpenAI’s GPT-4. It is the company’s first GPU with HBM3e memory.

The Nvidia H200 is based on the Hopper architecture and replaces the current H100 GPU. The HBM3e memory should provide higher speed and more capacity: Nvidia quotes 141GB of GPU memory with a bandwidth of 4.8TB/s, nearly twice the capacity of its predecessor. According to the chipmaker, the H200 also offers 2.4 times the memory bandwidth of the Nvidia A100.
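As a back-of-envelope check on those ratios (the H100 SXM and A100 80GB figures below are commonly published specs, not taken from the H200 announcement):

```python
# Comparing the announced H200 specs against assumed published
# figures for the H100 SXM and A100 80GB.
h200_mem_gb, h200_bw_tbs = 141, 4.8
h100_mem_gb, h100_bw_tbs = 80, 3.35  # H100 SXM (assumed)
a100_mem_gb, a100_bw_tbs = 80, 2.0   # A100 80GB (assumed)

print(f"Capacity vs H100:  {h200_mem_gb / h100_mem_gb:.2f}x")   # ~1.76x
print(f"Bandwidth vs H100: {h200_bw_tbs / h100_bw_tbs:.2f}x")   # ~1.43x
print(f"Bandwidth vs A100: {h200_bw_tbs / a100_bw_tbs:.2f}x")   # 2.40x
```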

According to Nvidia, the HGX H200 doubles the inference speed of Llama 2, a large language model with 70 billion parameters, compared to the H100. On GPT-3, which has 175 billion parameters, inference is more than 1.5 times as fast as on its predecessor.
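One way to see why the 141GB capacity matters for models of this size: at 16-bit precision each parameter takes two bytes, so a model’s weights alone occupy roughly twice its parameter count in bytes. A minimal sketch of that arithmetic (the precision assumption is illustrative, not an Nvidia figure):

```python
# Rough estimate of memory needed just to hold model weights at
# 16-bit precision (2 bytes per parameter); the KV cache and
# activations come on top of this. Illustrative assumption only.
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9  # decimal GB

print(f"Llama 2 70B weights: ~{weight_memory_gb(70):.0f} GB")   # ~140 GB
print(f"GPT-3 175B weights:  ~{weight_memory_gb(175):.0f} GB")  # ~350 GB
```

By this estimate the 70-billion-parameter weights just about fit in a single H200’s 141GB, while a 175-billion-parameter model would still need multiple GPUs or lower-precision weights.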

The H200 will be released in the second quarter of 2024. The GPU is also part of the GH200 Grace Hopper ‘superchip’ that was announced in August. The H200 should enable researchers to perform complex AI tasks by processing terabytes of data at a rapid pace.
