Cyclops Approach to AI Self-Driving Cars is Myopic

By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor

I recently spoke at an Autonomous Vehicles event and, about a week or two beforehand, had spoken at another event on Self-Driving Cars. At the first event, there were some fellow speakers arguing that LIDAR is king and that any self-respecting self-driving car should be “going all the way” with LIDAR (see my column covering the LIDAR sensory technology and the controversies about it). Meanwhile, at the second event the keynote speaker pushed the view that cameras need to be the king of self-driving cars and that LIDAR is over-priced, over-hyped, and just not needed.

This would seem to be quite a confusing predicament, to have “experts” claiming that one sensory device is best over another on self-driving cars. The LIDAR-bashing crew showed examples of LIDAR images and challenged the audience to see if it could make any sense out of what the image showed. Audience members were not able to figure out whether a curb existed on the side of the road, nor whether a sketchy image of a figure was actually a pedestrian or maybe something else, like a street post. This seemed to be a very convincing example that LIDAR is no good: it’s washed-up, it has low resolution and ambiguities that make it essentially worthless.

Then, the speaker showed camera-produced street images and asked the audience to see if it could make sense of what was shown. Easily, the audience yelled out that it saw a pedestrian, it saw a curb that was about 4 inches tall, and so on. Wow, this was ample proof that the camera was by far the winner over LIDAR. How could our eyes deceive us? We had seen directly, and without any intervention, that the camera is better than LIDAR. Case closed.

I was taken aback at such an obvious display of argumentative trickery. Imagine that I was trying to convince you that you should drink milk and that it is overwhelmingly superior to drinking orange juice. I then show you a picture of a person who has been drinking milk and tell you they have all the Vitamin D that they need and their bones are as strong as an ox. I then show you a picture of a person who has been drinking orange juice and who appears weak and brittle. Forget about orange juice, I tell you; it won’t give you sufficient Vitamin D, so drink milk instead.

My answer to this is: Cyclops. Yes, you heard me, I said cyclops. Remember the Greek mythological creature that had a single eye in the center of his forehead? I call these efforts to claim that one particular sensory device should be used to the exclusion of another the Cyclops Self-Driving Car, and I assert that these “experts” are employing a false argument based on an assumption of mutual exclusivity. The inherent assumption is that we must, for some reason, choose between using LIDAR or using cameras. Malarkey. There isn’t a valid reason to choose between one or the other. We can have our cake and eat it too. A person can drink milk, getting their Vitamin D, and can drink orange juice, getting their Vitamin C.

Let’s take a closer look at the Cyclops argument being used by these self-driving car developers and take it apart.

First, one of the most important aspects of any self-driving car is its ability to undertake sensor fusion. Sensor fusion involves bringing together the sensory data from all of the sensors on the self-driving car and fusing it into something that makes sense. There is live streaming data coming from the cameras on the car, the radar on the car, the LIDAR on the car, and any other sensors, such as those indicating the speed of the car, the condition of the engine, and so on. Each of these sensory devices has its own viewpoint on the status of the car and its surroundings. Those viewpoints need to be assessed and merged into a holistic perspective. The AI of the car needs to ascertain what each sensor is telling it, and which sensor to believe and which one to discount. For example, if the camera is obscured by dirt and the image is clouded and unusable, the AI needs to then rely upon the radar or some other sensor. If two or more sensors are reporting differing aspects, the AI needs to decide which one is most reliable and most dependable for the particular situation at hand.
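To make the fusion idea a bit more concrete, here is a minimal sketch in Python of one very simple way such a step could down-weight or ignore a degraded sensor. The sensor names, the reading format, and the confidence-weighted average are my own illustrative assumptions, not a description of any actual self-driving car’s software; production systems use far more sophisticated techniques (Kalman filters, probabilistic occupancy grids, and the like), but the underlying point is the same: no single sensor is trusted unconditionally.

```python
from dataclasses import dataclass

# Illustrative sketch of confidence-weighted sensor fusion.
# All names and numbers here are hypothetical examples.

@dataclass
class Reading:
    sensor: str          # e.g. "camera", "radar", "lidar"
    distance_m: float    # estimated distance to the nearest obstacle ahead
    confidence: float    # 0.0 (unusable) to 1.0 (fully trusted)

def fuse_obstacle_distance(readings, min_confidence=0.2):
    """Blend distance estimates, ignoring sensors the system currently
    distrusts (say, a camera obscured by dirt or smeared by rain)."""
    usable = [r for r in readings if r.confidence >= min_confidence]
    if not usable:
        return None  # no trustworthy sensor: the AI must act conservatively
    total_weight = sum(r.confidence for r in usable)
    return sum(r.distance_m * r.confidence for r in usable) / total_weight

if __name__ == "__main__":
    readings = [
        Reading("camera", distance_m=12.0, confidence=0.1),  # smeared lens
        Reading("radar",  distance_m=14.5, confidence=0.9),
        Reading("lidar",  distance_m=14.2, confidence=0.8),
    ]
    print(f"Fused obstacle distance: {fuse_obstacle_distance(readings):.1f} m")
```

In this toy example the smeared camera falls below the confidence threshold and is ignored, while the radar and LIDAR estimates are blended; a cyclops car with only the camera would have had nothing left to fall back on.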

Second, let’s take the stance that we’ll strip the self-driving car of any sensors other than cameras. Guess what, we now have a cyclops. We only have one way to see the world around us. Anything that doesn’t go right with those cameras is going to mean that we are blind to the roadway. Driving on a rainy day might cause those cameras to get all smeary with water. With no other sensors to use, the self-driving car is going to be taking some pretty big chances as it drives the road.

Think about how humans and their senses function. You have your eyes, your ears, your nose, your sense of touch, and your sense of taste.  Suppose I go around telling you that your eyes are the most important of all senses. You can get rid of those other senses, I tell you. But suppose you go into a dark room and there is a growling angry bear in the corner. You can’t see the bear. Seems like your eyes aren’t so good for you at that moment. Maybe you should have kept around your ears, just in case. Of course, we can easily come up with valid reasons to have all of our senses and not want to give up any of them.

Third, some might argue that the basis for reducing the sensory devices on a self-driving car is to prevent the AI from getting confused. By having a single form of sensory device, the AI can interpret the world in just one particular way. For me, that’s not a self-driving car that I want to ride in. As mentioned, it will be vulnerable to the capabilities and trade-offs of that one specific type of sensory device. I would much rather have a multitude of sensory devices. I would hope that the AI system would be robust enough to realize what each sensor provides and how to best interpret it. Using the excuse that the AI might get confused and can’t handle more than one sensory device is, bottom line, an admission that it is lousy AI and should not be driving that self-driving car.

From a cost perspective, I realize that the more sensory devices we pile onto a self-driving car, the more it will cost to buy that self-driving car. There is definitely going to be a trade-off between having limited sensory devices and a lower-cost self-driving car versus a more robust self-driving car loaded with all sorts of sensory devices. At the same time, we need to think about safety. Will a lower-cost self-driving car that has only a few types of sensory devices be safe enough to warrant being on the road and carrying human passengers? I am betting that if such self-driving cars start to become available to consumers, and once they get into a terrible accident or two, we’ll see the regulators jump on this design, as will the courts.

You can bet that there will be lawyers happy to line up and sue those self-driving car makers that opted to use just a single type of sensory device on their vehicles. Did they realize what this would do to heighten the risk of death and destruction? Even if they did not realize it, maybe due to having their heads in the sand, an argument could be made that they should have known. As the maker of the self-driving car, they should have considered the risks associated with a cyclops approach. I would guess that nearly any jury would take a dim view of a self-driving car maker that said it was simply trying to bring self-driving cars to the masses. A lofty goal, but minus the safety aspects you instead have a killing machine available to the masses.

Overall, I think these advocates trying to push for one sensory device over another are misguided. Some of them do so because they honestly believe in the particular sensory device that they have chosen. They have spent years and years perfecting their understanding of that sensory device. That’s great, but it should not mislead them into thinking that a self-driving car is going to be sufficient with just that one type of sensory device. A few of those advocates will also begrudgingly concede that other sensory devices can be there too, but they then try to denigrate those other sensory devices, akin to the LIDAR bashing that I mentioned at the start of this piece.

Anybody who knows anything about the sensory devices being used on self-driving cars already knows that each type has its own strengths and weaknesses. We need to have them all on the self-driving car to ensure that we can get a full sense of what is taking place around the car. Sacrificing the ears to have better eyes, or sacrificing the nose to keep the sense of touch, are mutual exclusions that don’t make sense for self-driving cars. It is a myopic view to think that we need to aim at only one sensory type.

The initial wave of self-driving cars will be essential to whether the public accepts that self-driving cars are ready for prime time. Cutting corners on the sensory devices will unfortunately make self-driving cars more likely to get into bad accidents, and those bad accidents will turn the tide away from self-driving cars. We’ll see the venture capitalists back away from self-driving cars, and the media will switch from parading self-driving cars as heroic to portraying them as villainous death-traps. These sensory device debates are not about adopting VHS versus Betamax, as some might falsely think, but instead about having a proper and full complement of sensory devices to ensure the highest level of safety when AI is driving the car. Let’s put an end to these cyclops self-driving cars, before they hit the roads and start hitting people.

This content is original to AI Trends.