Nvidia announces Jetson TX1 development kit based on Tegra X1 SoC


Nvidia has released a module for machine learning systems: the Jetson TX1. The 50x87mm module is built around a GPU based on the Maxwell architecture. The company also announced two new high-performance cards in its Tesla series.

The Jetson TX1 has been developed to run artificial neural networks for computer vision, machine learning and navigation, among other things. To that end, the module offers 1 teraflops of computing power thanks to a GPU with 256 CUDA cores based on the Maxwell architecture. The CPU side of the module comes from an SoC with cores based on the ARM Cortex-A57 architecture.
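As a rough sanity check, the quoted 1 teraflops figure can be reconstructed from the core count. The sketch below assumes FP16 throughput with Maxwell's two-wide FP16 packing and a clock of roughly 1GHz; the clock speed is not stated in the article, so treat the numbers as illustrative.

```cuda
// Back-of-the-envelope check of the quoted 1 teraflops for the Jetson TX1.
// Assumptions (not from the article): FP16 math, 2 FLOPs per fused
// multiply-add, 2 FP16 values per lane, ~1 GHz GPU clock.
#include <cstdio>

int main() {
    const double cuda_cores    = 256;  // Maxwell GPU in the Tegra X1
    const double flops_per_fma = 2;    // one fused multiply-add = 2 FLOPs
    const double fp16_per_lane = 2;    // FP16x2 packing on Maxwell
    const double clock_ghz     = 1.0;  // assumed clock speed
    double tflops = cuda_cores * flops_per_fma * fp16_per_lane * clock_ghz / 1000.0;
    printf("Estimated peak: %.2f TFLOPS (FP16)\n", tflops);  // ~1.02 TFLOPS
    return 0;
}
```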

With the GPU, the TX1 can encode and decode 4K video and process camera input at up to 1,400 megapixels per second. The module carries 4GB of LPDDR4 memory alongside 16GB of eMMC storage. Connectivity is provided via Wi-Fi, Bluetooth or a Gigabit Ethernet interface. The whole runs on Linux for Tegra.

The module is mainly intended for integration into robotic systems, such as partly autonomous drones. Integration into other types of robots is an obvious use as well, Nvidia writes in a post on its site. The Jetson TX1 SDK for visual computing includes several CUDA libraries and frameworks. There is also support for OpenGL 4.5, OpenGL ES 3.1 and Vulkan, and for CUDA 7.0, which allows the GPU to be used as a general-purpose processor.
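To illustrate what using the GPU as a general-purpose processor via CUDA looks like in practice, here is a minimal vector-add sketch. It is not taken from Nvidia's SDK; the kernel and the use of unified memory are just a simple example of the kind of CUDA code the module can run.

```cuda
// Minimal GPGPU sketch: add two vectors on the GPU with CUDA.
// Illustrative only; the Jetson SDK ships higher-level CUDA libraries
// and frameworks on top of this kind of code.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vector_add(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *out;
    // Unified memory keeps the example short.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vector_add<<<(n + 255) / 256, 256>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %.1f\n", out[0]);  // expected: 3.0
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```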

The Jetson TX1 Developer Kit will be available on Thursday for $599, with an educational version set to cost $299. The module itself will be available in early 2016 for $299 apiece, with a minimum order of 1,000 units.

In addition to the TX1, Nvidia also announced two new Tesla accelerators intended for, among other things, artificial intelligence calculations: the Tesla M4 and M40. The Tesla M4 draws relatively little power, between 50 and 75 watts, and is designed with data centers in mind. The card has 1024 CUDA cores, 4GB of GDDR5 memory, 88GB/s of memory bandwidth and peaks at 2.2 teraflops.

The Tesla M40, on the other hand, is a lot less energy efficient and draws up to 250 watts, with a peak processing power of 7 teraflops. The accelerator achieves this with 3072 CUDA cores and 12GB of GDDR5 memory offering 288GB/s of bandwidth. Both cards are based on the Maxwell architecture. The Tesla M40 will be available at the end of 2015, the M4 at the beginning of 2016. No prices are known for either accelerator.
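The quoted peaks for both cards line up with the usual cores-times-clock arithmetic. The sketch below assumes FP32 math and boost clocks of roughly 1.07GHz for the M4 and 1.14GHz for the M40; neither clock is given in the article.

```cuda
// Rough cross-check of the quoted peaks: peak = cores x 2 FLOPs per FMA x clock.
// The clock speeds below are assumptions, not figures from the article.
#include <cstdio>

static double peak_tflops(double cores, double clock_ghz) {
    return cores * 2.0 * clock_ghz / 1000.0;  // 2 FLOPs per fused multiply-add
}

int main() {
    printf("Tesla M4  (~1.07 GHz assumed): %.1f TFLOPS\n", peak_tflops(1024, 1.07));  // ~2.2
    printf("Tesla M40 (~1.14 GHz assumed): %.1f TFLOPS\n", peak_tflops(3072, 1.14));  // ~7.0
    return 0;
}
```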

Nvidia Tesla M4 and Tesla M40
