TensorFlow is a symbolic math software library for dataflow programming across a range of tasks, and it is most commonly used for machine learning applications such as neural networks. The library was originally developed by the Google Brain team for internal use, but the company released it under the Apache 2.0 open-source license in late 2015. Doing anything meaningful with it required resource-heavy hardware, so in May of last year the company launched TensorFlow Lite for on-device machine learning, and it has just added GPU support on Android.
We’re getting to the point where AI, machine learning, and neural networks are virtually everywhere in our daily lives. These terms, especially AI, tend to be used quite broadly, but many of these systems need a way to learn, and many use TensorFlow to do just that. That's why being able to run TensorFlow Lite on mobile devices is something a lot of application developers have wanted. The problem was that running these machine learning models on mobile devices remained very resource-intensive even after converting them to fixed-point models.
Developers have been asking Google to add GPU support to TensorFlow Lite, and it’s now available as a developer preview. The new feature gives developers the option to speed up inference on the original floating-point models (instead of converting them to fixed-point models), which removes the extra complexity and potential accuracy loss of quantization. The developer preview uses OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices. If TensorFlow Lite doesn’t detect support for either of these, or encounters parts of a model that the GPU backend can't handle, it simply falls back to CPU inference for those parts.
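As an illustration, opting into the GPU backend on Android looks roughly like the following sketch. It assumes the app bundles a model file and the preview's `tensorflow-lite-gpu` dependency; the model-loading helper and variable names here are placeholders, and API details may shift between preview releases.

```java
import java.nio.ByteBuffer;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

public class GpuInferenceSketch {
    // Runs one inference pass with the GPU delegate attached.
    // modelBuffer is assumed to hold a .tflite model loaded elsewhere.
    public static void runWithGpu(ByteBuffer modelBuffer,
                                  float[][] input, float[][] output) {
        // Create the GPU delegate and register it with the interpreter options;
        // unsupported ops automatically fall back to the CPU.
        GpuDelegate delegate = new GpuDelegate();
        Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);

        Interpreter interpreter = new Interpreter(modelBuffer, options);
        try {
            interpreter.run(input, output);
        } finally {
            // Both the interpreter and the delegate hold native resources.
            interpreter.close();
            delegate.close();
        }
    }
}
```

If the device lacks OpenGL ES 3.1 support, the delegate can't accelerate the model, so in practice apps would guard this path and fall back to a plain `Interpreter` without the delegate.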
Google says it will continue to add ops and improve the GPU backend overall, with a full open-source release planned for later this year.
Via: jamesonatfritz
Source 1: Google
Source 2: Google