Last month, Microsoft said it was integrating FPGAs with Azure and Bing, and it is continuing to turn its attention to the technology. Brainwave is the next step in that evolution. The company says its benchmarking shows the platform can sustain 39.5 teraflops without the need for batching on a large gated recurrent unit (GRU) model. In its early tests, Microsoft ran the system on Intel Stratix 10 FPGAs. Microsoft has often talked up the potential of FPGAs for machine learning and parallel computing; the fact that the chips are efficient and easily reprogrammable makes them good candidates. To that end, Microsoft has synthesized soft DNN processing units (DPUs) onto its FPGAs. The company expects the move to boost its research and near-real-time processing capabilities: “Project Brainwave achieves a major leap forward in both performance and flexibility for cloud-based serving of deep learning models. We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency.”
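For readers unfamiliar with the model class in question, here is a minimal sketch of a GRU network defined in TensorFlow, with purely hypothetical layer sizes (the actual model Microsoft benchmarked has not been published). It also illustrates what “without the need for batching” means in practice: each incoming request is scored on its own, with a batch size of one, rather than being queued until a large batch has accumulated.

```python
# Illustrative sketch only: a small GRU model standing in for the "large gated
# recurrent unit" class Microsoft says it benchmarked. All sizes are hypothetical.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 256)),            # variable-length sequences of 256-dim features
    tf.keras.layers.GRU(1024),                    # recurrent layer; returns final hidden state
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Real-time serving without batching: the request is processed alone (batch size 1)
# as soon as it arrives, instead of waiting for other requests to fill a batch.
single_request = np.random.rand(1, 50, 256).astype("float32")
prediction = model(single_request)
print(prediction.shape)  # (1, 10)
```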

Availability

Microsoft says it is developing Brainwave for use on the Azure cloud platform. As with its previous FPGA integration, the company says users will see direct results from the new system, including on Bing. Brainwave promises “record-breaking” performance and “industry-leading” speeds for deep learning and AI. The system currently works with Google’s TensorFlow and Microsoft’s Cognitive Toolkit (CNTK). Microsoft has not offered a specific date for Brainwave’s launch.
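As a rough illustration of what framework compatibility means at this level, the sketch below exports a TensorFlow model in the standard SavedModel format, the kind of framework-level artifact a cloud serving backend typically starts from. How Brainwave itself ingests TensorFlow or CNTK models is not described in the announcement, so this is a generic example, not Brainwave’s toolchain.

```python
# A minimal sketch, assuming the hypothetical GRU model above: export it as a
# TensorFlow SavedModel, a standard artifact that serving runtimes can load.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 256)),
    tf.keras.layers.GRU(1024),
    tf.keras.layers.Dense(10, activation="softmax"),
])

tf.saved_model.save(model, "/tmp/gru_model")       # export path is arbitrary
reloaded = tf.saved_model.load("/tmp/gru_model")   # a serving runtime would load this artifact
```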
