At the ongoing Facebook F8 conference, Facebook and Qualcomm have announced a collaboration to support the optimization of Caffe2 and the Snapdragon Neural Processing Engine (NPE) software framework.
Caffe is a deep learning framework developed by Berkeley AI Research and community contributors. It is released under the BSD 2-Clause license, which allows commercial use, modification, and distribution of the software as long as the copyright notice is included. Caffe2 is the evolution of the open source Caffe framework, and it allows for more flexibility in organizing computation.
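To give a sense of what working with Caffe2 looks like, here is a minimal sketch using the Caffe2 Python API. The blob names ("data", "w", "b", "y") and the toy dimensions are purely illustrative: it defines a single fully connected operator, feeds NumPy arrays into the workspace, and runs the net once.

```python
import numpy as np
from caffe2.python import model_helper, workspace

# Define a tiny net with one fully connected (FC) operator.
m = model_helper.ModelHelper(name="toy_net")
m.net.FC(["data", "w", "b"], "y")

# Feed the input and parameters into the global workspace as NumPy arrays.
workspace.FeedBlob("data", np.random.rand(1, 4).astype(np.float32))
workspace.FeedBlob("w", np.random.rand(2, 4).astype(np.float32))  # weights: 2 outputs x 4 inputs
workspace.FeedBlob("b", np.random.rand(2).astype(np.float32))     # bias: 2 outputs

# Run parameter initialization, create the net, then execute it.
workspace.RunNetOnce(m.param_init_net)
workspace.CreateNet(m.net)
workspace.RunNet(m.name)

print(workspace.FetchBlob("y"))  # output of the FC operator, shape (1, 2)
```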
Facebook and Qualcomm’s collaboration centers on the growing reliance on machine learning across many tasks. Conventionally, machine learning has meant data-intensive workloads running on server farms and supercomputers, turning large data sets into useful results. Qualcomm aims to make machine learning mobile by bringing it to our personal devices, so that apps can run inference locally and frequently, each in their own way.
On-device machine learning is made possible by the Qualcomm Snapdragon NPE, which does the heavy lifting needed to run neural networks efficiently on Snapdragon devices. This leaves developers with more time and resources to focus on creating user experiences.
With Caffe2’s modern computation graph design, minimalist modularity, and multi-platform support, developers will have greater freedom to build deep learning applications such as computer vision, natural language processing, augmented reality, event prediction and more!
As Qualcomm notes, one of the benefits of the Snapdragon NPE is that a developer can target individual heterogeneous compute cores depending on the power and performance demands of their application and deliver optimal performance accordingly. Qualcomm claims that the Snapdragon 835 delivers up to 5x better performance when processing Caffe2 workloads on the embedded Adreno 540 GPU as compared to the CPU. The Hexagon Vector eXtensions (HVX) in the Hexagon DSP offer even greater performance and energy efficiency.
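Since the NPE SDK had not shipped at the time of the announcement, the Python-style sketch below is purely hypothetical; none of the names (`load_model`, `Runtime`, the file path) are real API. It only illustrates the idea of choosing which compute core a converted model should run on.

```python
# Hypothetical sketch only -- the Snapdragon NPE SDK was not yet public,
# so these names do not correspond to any real API.
from enum import Enum

class Runtime(Enum):
    CPU = "cpu"  # most compatible fallback
    GPU = "gpu"  # e.g. Adreno 540: Qualcomm quotes up to 5x over CPU for Caffe2 workloads
    DSP = "dsp"  # Hexagon DSP with HVX: best performance per watt, per Qualcomm

def load_model(model_path: str, runtime: Runtime):
    """Pretend loader: pick the compute core the network should execute on."""
    print(f"Loading {model_path} onto the {runtime.value.upper()}")
    # ... a real SDK would build and return an executable network here ...

# An app with a tight power budget might prefer the DSP,
# while a latency-sensitive AR feature might prefer the GPU.
load_model("converted_model.bin", Runtime.GPU)
```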
The NPE includes runtime software, libraries, APIs, offline model conversion tools, debugging and benchmarking tools, sample code, and documentation. It will be available this summer to the broader developer community.
What are your thoughts on Facebook and Qualcomm’s collaboration towards on-mobile machine learning? What use cases do you foresee? Let us know in the comments below!