AI Gets Edgy: How new chips and code are pushing artificial intelligence (AI) from the centralized cloud out to network nodes – Part 5


Editor’s note: This is the final part of a series of five articles that AI Trends is publishing based on our recent webinar on the same topic, presented by Mark Bunger, VP Research, Lux Research. If you missed the webinar, go to www.aitrends.com/webinar.  The webinar series is being presented as part of AI World’s webinar, publishing and research coverage of the enterprise AI marketplace.  Part 1 appears here, Part 2 appears here, Part 3 appears here, and Part 4 appears here.

Q:  “Who will win: startups or incumbents?”

Mark:  “That’s a really tough question, and I always tend to favor startups in a lot of spaces just because they tend to be more agile, more flexible and so on. So while I do think it’s still a little bit too early to call, in this case I am actually leaning more towards the incumbents. They are very aware of what the future could look like if they don’t have a dominant position in this space, and the fact that, in Intel’s case, they’re spending almost $17 billion on just one company means that even if startups get to a pretty big scale where they could be bought for $17 billion, the incumbents have so much money, and this is so critical to their survival, that I think even the startups that do get to scale and do start making advances will pretty quickly get snapped up by those incumbents.”

Q:  “What do you see as the first applications that will use this technology?”

Mark:  “A lot of the…in fact, almost all of the applications we’re seeing on mobile devices are related to visual tasks. The first ones seem to be around facial recognition, which is interesting because we think every face is unique and it’s hard to trick people with a picture of somebody. We’re pretty good at recognizing facial features and remembering them, and we think of it as a pretty complex task. But because the typical facial recognition use case gives you a good, head-on view of somebody’s face, computers can actually do a lot of that recognition even more accurately than we can. For one, they can make much more precise measurements of facial features, like the distance between your eyes. And they can scan a much, much larger data set more quickly than we can. We can recognize people we’ve seen before, but we can’t scan, say, a yearbook of everybody on the planet and quickly pick out the people who look most like the person we’re staring straight at.
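To make the scanning step above concrete, here is a minimal, purely illustrative sketch: each face is reduced to a vector of measurements (or a learned embedding), and the computer searches a large gallery for the nearest match. The array sizes and values below are hypothetical, not taken from any real system.

```python
import numpy as np

# Illustrative face-matching sketch: a "yearbook" of 100,000 face vectors,
# each 128 numbers long (hypothetical sizes), and one head-on photo to match.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100_000, 128))                    # the yearbook
probe = gallery[42_317] + rng.normal(scale=0.05, size=128)   # a noisy view of one entry

# Euclidean distance from the probe to every face in the gallery, all at once,
# then pick the closest one.
distances = np.linalg.norm(gallery - probe, axis=1)
best_match = int(np.argmin(distances))

print(best_match, float(distances[best_match]))  # -> 42317 and a small distance
```

The point of the sketch is only the scale: a computer compares one face against a hundred thousand in a fraction of a second, which no human can do.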

I think that facial recognition is probably one of the earliest applications because, again, it just fits a lot of the use cases that we see. The second area, I would say, is probably collision avoidance: autonomous driving, autonomous drones, and other ‘Internet of Things in motion’ types of applications, because there you don’t have to be as specific about what you’re seeing. You just have to understand the geometry pretty well. A lot of things are going to hinder adoption of things like self-driving vehicles; the two collisions I mentioned earlier are just examples. There are a lot of sometimes funny stories about what cars think they’re seeing. I’ve heard about cars coming up to a steep off-ramp and thinking it was a wall and slamming on the brakes: you’re going down the highway, trying to get off, and the car thinks that incline is actually a wall and it needs to stop immediately. So I think we’re in a part of the technology evolution where we’re going to hear a lot of weird and funny stories. Sometimes they’re going to be sad and dangerous, and if the AI deployments we’ve talked about can’t keep those mistakes from being life-threatening, then we won’t have those technologies; they’ll be delayed a long time. So I think people are very keen to get those problems solved most quickly.”

Q:  “What is happening on the policies in AI?”

Mark:  “On the policies? I’m not sure exactly what the question means regarding policies, but I’m going to guess it covers two things. For example, with self-driving cars, there are policies about autopilot or automated-driving modes and when they can and can’t be used. And there are other people thinking about policy in the sense of “Is AI an existential threat to humans? Is it the Colossus story, where it takes over the weaponry, gets tired of seeing humans, and destroys us?” That’s a recurring theme in a lot of fiction, and hence a recurring theme in a lot of people’s fears about the future that drive policy.

In the former case, the policy has been very scattershot. You’ve got some rules at the U.S. federal level, some in individual U.S. states, and others around the world, with very, very different rules about where AI can or can’t replace a human and what role the human needs to play: how actively do they need to be monitoring the situation? This obviously applies to self-driving cars, but also to other areas like surgery and operating machinery.

Those policies, I think, are always going to be a little bit behind events; in other words, the events will drive the policies. We’ll try to put policies in place, but they’re so patchwork, and there’s no way to regulate in advance a use case that you haven’t yourself envisioned. People are very creative: they envision use cases and then just go try them. So I think we’re going to see policy trail that.

On the other, broader AI policy, I think that, to be honest, most regulators are, again, just way behind, so we’ll see policies emerge more as principles, like OpenAI, which Elon Musk and others started to make AI development as open as possible, so as many people as possible can look at it, as opposed to the more proprietary systems; basically, the types of things we’ve been discussing today. They want to get more of that into an open forum so that policy can form in a better educated, better informed way.”

Q:  “In the long run, what use cases do you see being on the edge which are today seen as cloud scenarios?”

Mark:  “Yeah, it’s interesting, because if you go back even just a few years, you had a lot of car makers, and telecom providers too, like AT&T, essentially assuming that intelligent transportation systems would all be highly centralized. You’d have cars communicating the conditions they were experiencing in real time back to some central server, which would then optimize traffic or tell the car to slow down because maybe there was a patch of ice or something like that. Those ideas, I think, are almost completely gone. Most people today think that if a car can’t navigate on its own, without having to check in with a server somewhere that essentially acts as the driver, then it’s not viable as a self-driving car or even as an intelligent car.

There are a lot of reasons for this. We’ve also touched on what’s called technology shear, where a fast-moving technology, which the edge technologies tend to be, and a slow-evolving technology, which the centralized ones tend to be, get so far out of alignment that you can’t rely on having a robust connection between the two. So, to get to the answer: I think ultimately everything will go to the edge, and we’ll be shocked at what can be done there. The things that stay really centralized will be a little bit like what’s centralized today, where it’s just more practical to have certain types of data in one location so that everybody can access them in real time. But where data is generated and needed on the edges, I think that’s going to be lots and lots of cases. Again, our smartphones are a very good example. I recall when smartphones first came out and people would say things like “Why would I want to check email on my phone?” And now, obviously, we do so many things on our phones that we never used to do at all. Those are the types of scenarios that I am, and we are, I should say, starting to try to envision for computing on the edge.”

Q:  “How well are the incumbents able to monetize these AI capabilities today and are incumbents funding AI startups?”

Mark:  “Oh, yeah, you definitely see a lot of these companies forming partnerships. In Intel’s case, they bought a company for a lot of money, and in Google’s case, they’re buying chips from one of the startups. So yeah, there are a lot of ways the funding happens, ranging from acquisitions to buying components to partnerships, joint development agreements, licensing, and everything in between. There’s a pretty broad spectrum of how they interact. In fact, that’s probably something we should map. So stay tuned; we’ll map that out in some future research.”

Q:  “Down the line, do you see speech recognition moving to the edge or are there fundamental reasons why it’ll remain on the Cloud?”

Mark:  “No, no. I mean, you actually do see different approaches to this already, like with Apple and Google. One of the reasons, I think, that they have centralized a lot of the processing in the past is to train the system on current tasks, but also to build just a humongous database of human speech. I know Google has done some pretty interesting research on speech patterns that had nothing to do with recognizing speech; just because they had a big data set of real human speech, they could look at other types of things that are more in the realm of linguistics and communication and things like that.”

Q:  “And finally, when will my AI be as smart as my dog?”

Mark:  “Dogs are interesting creatures. In some cases, I wonder when humans will be as smart as dogs. So I know there’s a little bit of humor in the question, but I do want to say something useful and interesting about it too. Dogs, and we humans, are actually really good at a lot of different types of tasks. The types of things dogs are great at, which I think AI will ultimately be able to do but can’t yet today, are things like calculating the trajectory of a Frisbee or a ball. That’s a really complicated task: how can you just throw something and your dog leaps through the air and catches it? There’s so much processing going on. And when we see the Boston Dynamics robots, the creepy dog robots, they’re so far away from being able to do anything like that that I have a hard time imagining it. One of the reasons is that they can’t move that way. So until the actuators, the robotic muscles, if you will, enable the machine to have that type of grace and agility, they’ll never learn to do that task. There’s a lot going on there in terms of just feedback from running over rough ground and things like that.
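To give a feel for why the catch is hard, here is a deliberately simplified sketch of the trajectory problem: it ignores air drag and spin, both of which matter a great deal for a real Frisbee, and all the numbers are hypothetical. Even this stripped-down version has to be solved, continuously and in real time, while the dog is running.

```python
import math

# Simplified ballistic sketch: where does a thrown ball land, ignoring drag
# and spin? Launch speed and angle are made-up example values.
g = 9.81                      # gravity, m/s^2
speed = 15.0                  # launch speed, m/s
angle = math.radians(35.0)    # launch angle above horizontal

vx = speed * math.cos(angle)  # horizontal velocity component
vy = speed * math.sin(angle)  # vertical velocity component

flight_time = 2 * vy / g      # time until the ball returns to launch height
landing_x = vx * flight_time  # horizontal distance traveled
apex = vy ** 2 / (2 * g)      # peak height of the arc

print(f"lands ~{landing_x:.1f} m away after {flight_time:.2f} s, peaking at {apex:.1f} m")
```

A dog solves a far messier version of this, with wind, spin, and its own motion folded in, purely from visual feedback.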

Smelling is another area where dogs are phenomenally high-performing devices. You’ve probably heard about dogs that can smell cancer, and dogs obviously do a lot of forensic work. That’s an area where I think AI could probably catch up a lot sooner. Being able to detect very, very small quantities of certain molecules, and certain mixtures of molecules, very accurately is, I think, a good near-term application.

But then, this is almost what I was saying at the end about the user experience: a lot of our experience of intelligence is how things interact with us. Dogs, in particular, and humans have emotional bonds because we feel like there’s love and understanding, and there’s even humor and silliness and playfulness. Machines obviously can’t do that yet, but there are a lot of companies in an area called affective computing. Not ‘effective’ with an ‘e’; it starts with an ‘a’, and it’s basically emotional computing. I think it will be really interesting to see how the toys they’re working on right now develop into some kind of emotional companions. There are a lot of movies about that stuff too, but I can see it being a nearer-term area than we might think.”

by Mark Bunger, VP Research, Lux Research