Nvidia presents Tesla T4 accelerator with Turing GPU
Nvidia has announced the Tesla T4 accelerator, which is equipped with a Turing GPU with Tensor cores and 16GB GDDR6 memory. The card is intended for use in data centers where deep learning is used.
The Tesla T4 is a PCIe x16 add-in card equipped with a Turing GPU that has 2560 CUDA cores and 320 Tensor cores. Nvidia makes no mention of RT cores, so it appears to be a modified GPU in which the ray-tracing cores are absent. The Tesla T4 accelerator has a memory bandwidth of 320GB/s and consumes 75 watts.
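The quoted 320GB/s figure can be sanity-checked with simple arithmetic. As a sketch: it is consistent with, for example, a 256-bit memory bus running GDDR6 at 10Gbit/s per pin; note that the bus width and per-pin data rate are assumptions for illustration, not figures Nvidia has stated.

```python
# Hypothetical back-of-the-envelope check of the 320GB/s bandwidth claim.
# Bus width and per-pin data rate are assumed values, not from Nvidia.
bus_width_bits = 256      # assumed GDDR6 bus width
data_rate_gbit_s = 10     # assumed data rate per pin, in Gbit/s

# bandwidth = (bus width in bytes) * (data rate per pin)
bandwidth_gb_s = (bus_width_bits / 8) * data_rate_gbit_s
print(bandwidth_gb_s)  # 320.0
```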
Nvidia gives few details about the GPU, but judging by the number of CUDA cores it appears to be a smaller Turing variant, such as the TU106 used in the RTX 2070 video card. That GPU has 2304 CUDA cores and 288 Tensor cores, but also 36 RT cores. The Quadro RTX cards have at least 3072 CUDA cores and are also equipped with RT cores.
According to Nvidia, the card is suitable for running machine-learning models. The GPU maker is also introducing TensorRT 5, an inference optimizer and runtime engine with support for the Tensor cores of the Turing GPU. Nvidia brands the combination as the TensorRT Hyperscale Platform.
In the announcement, the company says that the Google Cloud Platform will soon support the Tesla T4, and that server manufacturers such as Cisco UCS, Dell EMC, Fujitsu, HPE and IBM Cognitive Systems also say they will support the TensorRT Hyperscale Platform.
Nvidia has not disclosed a price for the Tesla T4 accelerator. It is the successor to the Tesla P4, which was announced in 2016 and can currently be found in the Pricewatch for around 2500 euros.