Microsoft wants to accelerate deep learning with FPGAs in Brainwave project


Microsoft has announced a project called Brainwave at the Hot Chips conference. The company claims it can perform deep learning inference with very low latency on FPGAs, for tasks such as search and video analysis.

Microsoft describes Brainwave as a deep learning acceleration platform. Thanks to the low latency, the company says it can deliver ‘real-time artificial intelligence’: incoming requests are processed as soon as they arrive, without batching them together first. To achieve this, Microsoft leverages the large number of FPGAs already deployed in its data centers. The company announced last year that it wanted to use the reprogrammable chips for deep learning.
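The latency argument against batching can be made concrete with a small back-of-the-envelope sketch. The function names and the timing numbers below are purely illustrative assumptions, not anything Microsoft has published:

```python
# Illustrative sketch (assumed numbers, not Microsoft's figures): why
# batching hurts latency. With batching, the first request in a batch
# waits until the batch fills; with batch size 1, compute starts at once.

def batched_latency(arrival_gap_ms, batch_size, compute_ms):
    """Latency of the first request in a batch: it waits for the
    remaining (batch_size - 1) arrivals before compute can start."""
    return (batch_size - 1) * arrival_gap_ms + compute_ms

def unbatched_latency(compute_ms):
    """Batch size 1: each request is processed as soon as it arrives."""
    return compute_ms

# Hypothetical workload: requests arrive 5 ms apart, one inference takes 2 ms.
print(batched_latency(arrival_gap_ms=5, batch_size=8, compute_ms=2))  # 37
print(unbatched_latency(compute_ms=2))                                # 2
```

With these assumed numbers, filling a batch of eight multiplies the first request’s latency almost twentyfold, which is why a ‘real-time’ service avoids batching.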

Microsoft wants to increase the speed of the FPGAs in several ways. The Register, which was present at the presentation, writes that the machine learning models are stored in the FPGA’s on-chip memory, so that the working memory can serve as a buffer for incoming and outgoing data. In addition, the design of the FPGA has been optimized to process a constant stream of data. Finally, the chips can be chained together to form a sort of pipeline for commands.
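The chaining of chips described above can be sketched in miniature. This is a conceptual model only, assuming each chip holds one slice of the model in its own memory and streams its output to the next stage; it is not Brainwave’s actual design:

```python
# Conceptual sketch (assumption, not Brainwave's implementation): several
# chips chained into a pipeline, each holding one resident model slice
# and passing its output to the next stage in the chain.

from functools import reduce

def make_stage(weight):
    # Each "chip" applies its resident model slice to incoming data.
    return lambda x: x * weight

# Three chips in a chain, each with a hypothetical weight slice.
pipeline = [make_stage(w) for w in (2, 3, 4)]

def run_pipeline(stages, request):
    # Data streams through the stages in order, like a request flowing
    # through the chained FPGAs.
    return reduce(lambda data, stage: stage(data), stages, request)

print(run_pipeline(pipeline, 1))  # 24
```

The attraction of such a pipeline is that once it is full, every stage works on a different request at the same time, so throughput scales with the number of chips while each individual request still sees low latency.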

According to Microsoft, its approach is more flexible than hard-coded deep learning accelerators, which can deliver higher performance but require the data types and operators to be fixed in advance. Microsoft says Brainwave currently supports its own Cognitive Toolkit and Google’s machine learning library TensorFlow, with support for more frameworks to be added in the future.

Intel reports that Microsoft has chosen its 14nm Stratix 10 FPGAs for the Brainwave project. The Redmond-based company wants to offer Brainwave to Azure customers and plans to release more details about this later.

Source: Microsoft Presentation
