Google taps Movidius to bring machine learning to the handset


Movidius, a vision processing partner in Google’s Project Tango modular handset platform, is expanding its relationship with the search giant into artificial intelligence, aiming to bring machine learning out of massive data-center servers and into the handset.

The Irish company’s Myriad processors are used in Tango and support 3D and HD streaming at very low power levels, enabling new vision and motion-tracking applications. That has allowed developers to create user experiences that include augmented reality and 3D spatial awareness. With the fruits of the latest collaboration, those smartphone and wearable experiences could become significantly more intelligent.

As CEO Remi El-Ouazzane told EETimes, the companies are working to deploy Google’s neural networking engine on a Movidius computer vision platform, so that it can run locally on a device, even when the device is not connected to the Internet. Google will buy the smaller firm’s vision SoCs and license its software development environment.

Deep learning models will be extracted from Google’s data centers and run on the mobile devices. Movidius’s vision processor will be able to use these to “detect, identify, classify and recognize objects, and generate highly accurate data, even when objects are in occlusion,” said El-Ouazzane. Google and its machine intelligence group in Seattle will develop commercial applications based around this deep learning platform, an effort which will challenge IBM’s Watson initiative in some areas.
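The pattern described here is training in the data center and shipping only the finished model to the device for local inference. Neither company’s actual toolchain is public in this article, so the following is a minimal, purely illustrative Python sketch of that idea: the model file name, feature dimensions, and the linear classifier itself are all made-up placeholders, not Google’s or Movidius’s software.

```python
import numpy as np

# --- In the data center: train a model and export its weights ---
# (A tiny fabricated linear classifier stands in for a real deep network.)
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 3))   # 64 image features -> 3 object classes
bias = np.zeros(3)
np.savez("model.npz", weights=weights, bias=bias)  # hypothetical export file

# --- On the device: load the exported model and classify locally ---
# Once model.npz ships with the device, no network connection is needed.
model = np.load("model.npz")

def classify(features: np.ndarray) -> int:
    """Return the index of the most likely class for one feature vector."""
    scores = features @ model["weights"] + model["bias"]
    return int(np.argmax(scores))

# Example: classify a single (fabricated) 64-dimensional feature vector.
print(classify(rng.standard_normal(64)))
```

The design point the article is making is the split of responsibilities: the expensive training stays in the cloud, while the lightweight forward pass runs on the handset’s vision processor.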

Read the source article at Rethink Wireless