Soon, smartphones will not only take snapshots and videos but also recognize the objects pictured. Google is working with a Silicon Valley chip designer to let mobile devices do that kind of heavy computing internally rather than relying on remote data centers.
Google, Alphabet Inc.’s best-known unit, and semiconductor startup Movidius announced a collaboration on Wednesday aimed at bringing the technology known as deep learning to handsets.
The search giant has already demonstrated image-recognition capabilities with Google Photos, a mobile app and image hosting service that stores and analyzes images uploaded from smartphones. Users can search through the photos by typing in the names of objects, like flowers or houses or mountains, or use a photo of a face to find others that contain it. But uploading images takes time, and searching through them from a phone depends on a wireless connection that may not always be available.
With built-in image recognition, smartphones could identify objects in real time for a variety of applications, like identifying people to authorize transactions, aiding blind people and translating signs.