Editor's note: This is Part 4 of a series of 5 articles that AI Trends is publishing based on our recent webinar on the same topic, presented by Mark Bunger, VP Research, Lux Research. If you missed the webinar, go to www.aitrends.com/webinar. The webinar series is being presented as part of AI World's webinar, publishing and research coverage of the enterprise AI marketplace. Part 1 appears here, Part 2 appears here, and Part 3 appears here.
Another thing Google is doing is buying a lot of chips: vision processing unit (VPU) chips, specialized for visual processing, from a company called Movidius.
Editor's note: Intel has since acquired Movidius.
One of the things they're working on is image recognition for smartphone-based AI with "Tango," which you can think of as a connector between vision and GPS. In other words, it's like a camera that is capable of not only seeing images but understanding their three-dimensional structure. If you read what Movidius says about its own platform, it lets your neural network run in embedded environments such as smart cameras, drones, virtual reality headsets and robots. They're moving this processing power to the network edge, and they've even developed a modular deep learning accelerator that fits on a USB stick. When things fit on USB sticks, they're obviously getting pretty close to the edge.
Google is not alone; a lot of other companies are working in this space as well, and NVIDIA is one. NVIDIA has been working on graphics processing units (GPUs) for a long time, in gaming and other visual applications. They're a leader in that domain, but now they're applying that expertise and experience to the types of visual analytics that drones, robots and cars need to do in real time, on the fly.
NVIDIA's Jetson platform is about the size of a credit card. It runs software built on "CUDA" – NVIDIA's parallel computing platform, on which its deep learning libraries sit – packed onto these small boards so that, ideally, they can be deployed in very small applications, like vehicles.
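The workload these boards accelerate is, at its core, enormous numbers of multiply-accumulate operations. As an illustration only (this is not NVIDIA's code, just a plain-Python sketch of the computation), here is the forward pass of a single fully connected neural-network layer; a GPU speeds this up by running each output neuron's multiply-adds in parallel:

```python
# Illustrative sketch: the dense-layer forward pass y = relu(W.x + b),
# the kind of arithmetic that GPU platforms parallelize. Plain Python
# for clarity; real deployments use optimized GPU kernels.

def dense_forward(weights, bias, x):
    """One fully connected layer with a ReLU activation.

    weights: list of rows, one per output neuron
    bias:    one bias value per output neuron
    x:       input vector
    """
    out = []
    for row, b in zip(weights, bias):
        # Multiply-accumulate: this inner loop is what hardware parallelizes.
        s = sum(w * xi for w, xi in zip(row, x)) + b
        out.append(max(0.0, s))  # ReLU: negative activations become zero
    return out

# Tiny example: 2 inputs feeding 3 hidden units
W = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b = [0.0, 0.1, -0.05]
print(dense_forward(W, b, [1.0, 2.0]))
```

On an embedded board, thousands of these layers' worth of multiply-adds must run per camera frame, which is why dedicated silicon rather than a general-purpose CPU matters at the edge.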
I saw an example of such an application at Maker Faire in San Francisco a few months ago. They had a robotic car, which I believe was developed by MIT: a very small vehicle running on a treadmill, with a video screen in front of it showing a dynamic image of a road on which obstacles would pop up. The vehicle was making its own turning and braking decisions, similar to a driving-simulator game. It was a pretty impressive demonstration for what amounted to, basically, a very fancy toy.
Another company in this space is Qualcomm, which makes many of the chips that go into phones and the like. Qualcomm's Zeroth is a cognitive computing platform, with hardware and software, with dedicated chips, and also running on processors like the Snapdragon 820. They're targeting the same types of applications we've discussed: smartphones, cameras, vehicles, drones, security, and virtual and augmented reality.
It's my view that Qualcomm is also looking at Zeroth through the lens of user experience, something that is really important in AI. Going back to the original benchmark of AI, the "Turing Test," it's the machine's ability to convince a human that it is also human. Now, we don't need to believe that our smartphones are in some way human, but we do want things like Siri to behave in a fairly human-like way. They're certainly trying to emulate that, and when they fall short, it makes for a bad user experience. Similarly, with identifying objects and avoiding collisions, we want our machines, in a lot of cases, to look, act and think a lot like humans. Much of that is based on how these platforms interact with us personally: through voice, through perception and through reasoning. They need to convince us that they are basically as smart as we are or, for example, we're not going to let them drive our cars for us.
Those are many of the big companies in the space today, but there's a lot of other effort taking place that I haven't mentioned, perhaps because it's more academic or a little earlier-stage. There's SpiNNaker, there's IBM's TrueNorth, and many more we could examine, but let's move on to some of the start-ups to watch.
Here's a short list, but one that's growing quickly. I think if we check back in a few months we'll see more of these companies as they get spun out of academic labs.
- KRTKL, pronounced "critical," is making a platform called "snickerdoodle" for robotics that interfaces with motors, sensors, Wi-Fi and Bluetooth, for example.
- KnuEdge is a company led by the former head of NASA. They've raised about $100 million to date and have 100 employees.
- MIT is developing a chip called Eyeriss. Again, it does deep learning for speech recognition, face detection and object identification, many of the visual tasks we've been describing throughout this presentation.
- Horizon Robotics is another company, in this case focused on vehicle safety and in-vehicle software. They're led by the former head of Baidu's Institute of Deep Learning. It's a Chinese company with very, very big ambitions for where it wants to go.
- Nervana is a company we recently interviewed, so you can check them out through our podcasts. They're working on neuromorphic chips, and because those have a long development time, they're also looking at software-based applications in the meantime.
- Finally, there's a really small company called Terra Deep that we've also looked at. They run on conventional hardware and, in fact, that's the point: they want to be able to retrofit AI applications onto, say, an old laptop with a webcam, giving it some ability to do things like detect objects.
So as I mentioned, there's a lot going on in this space, and where it goes from here matters not only for the companies themselves but for those of us who will be using these devices. Our self-driving cars, our personal drones, or even just our smartphones and the security they can provide us are all important developments here.
So to go back to the past: around the same time the movie "Colossus: The Forbin Project" came out, this ad appeared. It's an "electronic computer brain" you could buy from the back of a comic book for $5, probably the equivalent of $50 today. It purports to do a lot of the things we're still hoping our AIs will do for us. It talks about predicting your future, balancing your checkbook, and a lot of other things I'm really not sure this little plastic device could do. In any case, our dream of electronic computer brains lives on, whether they're big or small.
by Mark Bunger, VP of Research, Lux Research