Apple’s strong privacy stance is costing it the artificial intelligence (AI) race – but onboard AI chips can help it come back
by Ahmed Khalil, Lux Research
Apple is widely regarded as being well behind other technology powerhouses, such as Google, Amazon, Facebook, and NVIDIA, when it comes to artificial intelligence (AI). This struggle is due in part to the company’s inability to attract top machine learning talent, who opt instead for players that are more vocal about their AI efforts. These more vocal players have open-sourced their proprietary deep learning libraries in order to excite developers about their AI work: Google has released TensorFlow, Amazon has released DSSTNE, NVIDIA has released cuDNN, and Facebook has released a set of extension modules for Torch. Apple has realized this and has made an active effort to revamp its AI initiative over the past year through a series of acquisitions of startups in the space, including Perceptio, VocalIQ, Emotient, Turi, and, most recently, Tuplejump.
Additionally, in an effort to appease developers and mobilize its public AI efforts, Apple has introduced new machine learning tools for developers. For instance, the company has opted to keep Tuplejump’s FiloDB project open source. FiloDB, which was Apple’s primary interest in the startup, is a distributed columnar database with machine learning capabilities that helps analyze complex streaming data. Apple also recently released an API for Siri that allows developers to integrate Siri’s natural language processing (NLP) capabilities into third-party applications. The company has also released Basic Neural Network Subroutines (BNNS), a collection of functions that let developers implement and run neural networks in their applications. The inherent shortcoming of BNNS, and the reason the term “basic” fits, is that Apple does not provide a way to train neural networks; developers can only run networks that have already been trained elsewhere. This is almost certainly due to the massive computational cost of training neural networks.
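To see why an inference-only library can be far lighter than a full training framework, consider the following sketch in plain Python (illustrative only, not Apple’s actual BNNS API, and the weights are made up): running a pre-trained network is just a sequence of matrix multiplies and activations over fixed parameters, with no gradients, backpropagation, or optimizer state.

```python
def relu(x):
    # Rectified linear activation, applied element-wise.
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # Fully connected layer: one weight row per output unit.
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# Hypothetical "already-trained" parameters for a tiny
# 2-input, 2-hidden-unit, 1-output network.
W1 = [[0.5, -0.2], [0.1, 0.8]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.2]

def predict(x):
    # Inference is a pure forward pass over fixed weights.
    h = relu(dense(x, W1, b1))
    return dense(h, W2, b2)

print(predict([1.0, 2.0]))
```

Training, by contrast, requires computing and storing gradients for every parameter over many passes through a large dataset, which is exactly the workload Apple leaves to server-class hardware.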
But privacy concerns are perhaps an even more pressing obstacle preventing Apple’s AI efforts from flourishing. The company prides itself on protecting user privacy with end-to-end encryption and has stated that it aims to keep as much computation involving private information as possible on personal devices rather than in the cloud. To this end, the company has begun investigating differential privacy, which would allow it to glean insights from large datasets without compromising the data of any single user. However, research into this approach is relatively young, and there are obvious advantages to having user-specific information at hand, as Google and Facebook readily demonstrate. One company that might help these efforts along is Snips, a startup that assembles digital profiles for users while keeping personal information as private as possible; all processing of personal information is performed either locally on the device or using privacy-preserving techniques, such as the aforementioned differential privacy.
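The core idea behind differential privacy can be shown with its standard building block, the Laplace mechanism: calibrated noise is added to an aggregate query so that no single user’s record meaningfully changes the answer. The sketch below is a generic textbook illustration, not Apple’s implementation, and the dataset and epsilon value are invented for the example.

```python
import math
import random

random.seed(0)  # for a reproducible demo

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse CDF.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    # A count query has sensitivity 1 (adding or removing one user
    # changes it by at most 1), so noise drawn at scale 1/epsilon
    # yields epsilon-differential privacy for the released count.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: which of 1,000 users exhibit some behavior.
users = [{"uses_emoji": i % 3 == 0} for i in range(1000)]
print(private_count(users, lambda u: u["uses_emoji"], epsilon=0.5))
```

The released count is close to the true value of 334 but noisy enough that an observer cannot infer whether any particular user was in the affirmative group; smaller epsilon means stronger privacy at the cost of noisier aggregates.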
Moreover, Apple’s desire to enable deep learning on devices without having to handle private information on its servers leads to a natural consideration of using AI accelerators to perform deep learning computations on embedded systems. Deep learning on embedded systems is an area that has been gaining much traction among technology giants. IBM recently announced that its “TrueNorth brain-inspired computer chip can efficiently implement inference with deep networks that approach state-of-the-art accuracy on several vision and speech datasets.” A number of other players are investigating the idea as well. For example, Intel recently acquired Movidius, a startup known for developing low-power vision processing units (VPUs) equipped with a deep learning software framework. Movidius is notable for its involvement in Google’s Project Tango, in which it was responsible for enabling smartphones to perform image-recognition tasks using deep learning directly on the device.
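A common trick that makes deep learning feasible on such low-power embedded hardware is weight quantization. The sketch below (a generic illustration, not any specific vendor’s toolchain, with made-up weights) shows the basic idea: mapping 32-bit floating-point weights to signed 8-bit integers cuts memory and bandwidth roughly fourfold, at a small and bounded cost in precision.

```python
def quantize(weights):
    # Symmetric linear quantization: map floats to signed 8-bit
    # integers in [-127, 127] using a single per-tensor scale.
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for computation.
    return [v * scale for v in q]

weights = [0.82, -0.45, 0.07, -1.20, 0.33]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half the quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(max_err, 4))
```

Accelerators like VPUs exploit this further by performing the arithmetic itself in low-precision integer units, which draw far less power than floating-point hardware, exactly the constraint that matters on a phone or an in-car computer.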
If Apple is intent on maintaining its strict policy of user privacy while also propelling its deep learning efforts forward, the logical solution seems to be the acquisition of a startup developing processors that allow deep learning computations to be performed locally on compute-constrained devices. Given the company’s string of acquisitions in the space during the past year, this is almost a certainty as it advances its AI initiative in the coming year. This is especially true since Apple has shown interest in entering the race to develop a self-driving car, and deep learning on embedded systems would go hand in hand with those efforts; NVIDIA has already deployed its Drive PX 2 platform, and Intel is most likely looking to leverage the newly acquired Movidius to develop its own self-driving platform. Startups to watch include KnuEdge, TeraDeep, and Pilot AI Labs, all of which are working on implementing deep learning on embedded systems. A number of research institutions, such as the University of Rochester and UCLA, are also developing solutions in this area that may be of interest.