Going Blind: When Sensors Fail on Self-Driving Cars

By Dr. Lance B. Eliot, the AI Insider for AI Trends.

I was in a hurry the other day and jumped into my car to try and rocket across town for an important appointment. When I started the engine, my "idiot lights" dashboard suddenly lit up and indicated low tire pressure. I've seen this before, and from time to time have had a tire that was a few pounds low after having driven up to the Bay Area from Los Angeles. In this case, I was taken aback because the dashboard indicated that all four tires were at low pressure. My first thought was that this was impossible. How could all four tires be low at the same moment in time? Then, after a fleeting thought that maybe someone had slashed all four tires, I got out of the car to take a look at them. They appeared to be intact. I luckily had a tire gauge in my car and used it to measure the amount of air in the tires. They seemed to be properly inflated.

I opted to turn off the engine and start the car again. All four tires still showed as being at low pressure. This was becoming irritating and frustrating, and of course was taking place just when I was in a hurry to get someplace. Murphy's law strikes! I decided that since the tires are run-flats, designed to be driven on even when flat, I would go ahead and slowly ease out of the parking lot and see what happened. I proceeded like a timid driver and made my way inches at a time toward the opening to the street. One by one, the low tire pressure dashboard lights went out, suggesting that my tires were now OK.

I am sure we have all had circumstances whereby a sensor in the car goes bad or is momentarily faulty. We expect this of any mechanical device on our cars. Our headlights sometimes fail and need to be replaced. Our brakes wear down after a while and the brake pads need to be replaced. No car is perfect. No car is maintenance free. Some people are "lucky" and seem to never have anything go wrong on their car. Other people get a "lemon" of a car that seems to be unlucky and always has something going wrong. We generally expect that an older car is going to have more problems and need more maintenance. We generally assume that a cheaper car is going to have more problems and need more maintenance than an expensive car. We also expect that an expensive car will likely have expensive maintenance whenever maintenance is required. These are the laws of nature about the sensors and devices on our cars that can falter or fail.

What about self-driving cars? You don't hear much about sensors going bad on self-driving cars. But that's for a very apparent reason. Self-driving cars right now are like well-cared-for, high-end NASCAR racing cars. Teams of engineers fret about any little blip or blemish on their precious self-driving cars. The sensors on these prototype cars are costly and kept in really good shape. If a sensor happens to become faulty or go bad, an engineer quickly removes the offending item and replaces it with a brand new one. Realizing that the self-driving car makers are spending millions upon millions of dollars to develop and perfect self-driving cars, you can bet that any sensor that goes bad is going to instantly get kicked out and replaced by a shiny new one.

This makes sense when you are trying to develop something new and exciting. Think, though, about what will happen once self-driving cars are actually on the roads and doing their thing each and every day. Eventually, we are going to have everyday self-driving cars that are subject to the same vagaries as our everyday cars today. The brakes are going to wear out, the headlight beams will go out, and the specialized sensors such as the cameras, the LIDAR, and the radar sensors will all ultimately have some kind of failure over their lifetimes. In fact, you could predict that the faults and issues of sensors are going to be even more heightened on self-driving cars because they are chock full of those sensors. There might be a dozen cameras, another dozen radar sensors, one or two LIDAR systems, and so on.

Welcome to a new world of sensor mania in the realm of self-driving cars. For those that make replacement car parts and do automotive maintenance, this actually could be a blessing in disguise. Imagine hundreds of millions of cars, each carrying dozens of sensors, amounting to billions of sensors, all of which will statistically be failing at one time or another. Bonanza! The odds are, too, that these sensors at first won't be attached or embedded into the car in any simply serviceable fashion. More than likely, replacing these sensors is going to require doing all sorts of surgery on the car to get them out and replaced. Furthermore, once you remove and replace a sensor, the testing needed to make sure that the new sensor is working properly will take added labor time. Those dollars are racking up.

Nobody wants to mention these aspects when discussing self-driving cars. Instead, we are told to think about a utopia of self-driving cars whisking us all around town while the humans don't have a care in the world. Have you ever seen a bus that is parked on the side of the road because it had a failure of some kind? Ever been on a subway that slowed down or stopped because of some kind of systems problem or failure? Mass transit systems have these kinds of faults and failures all the time. Our autonomous AI-led self-driving cars are just as susceptible to breakdowns, and as mentioned, even more so, due to the plethora of gadgets and gizmos that enable the car to do its self-driving.

Besides the obviousness of the hardware sensors, we must also consider that these upcoming self-driving cars are going to have boatloads of computer processors on-board, which is what makes the AI aspects possible. Memory in those chips can go bad, the processors themselves can wear out or bust, and various other hardware maladies can occur. So far, I've only emphasized the hardware, but we need to think about the software too. Suppose there is a hidden bug in the self-driving car software (this is a topic I'll be covering in a future column). Some self-driving car makers also are interconnecting their self-driving cars via the Internet, including so-called over-the-air software updates. The hardware that allows these interconnections can go bad, plus the software updates pushed into the self-driving car can get pushed incorrectly or get loaded improperly.
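To make that last point a bit more concrete, here is a minimal sketch in Python (purely illustrative; the function names and the manifest field are my own invention, not any maker's actual updater) of the kind of integrity check an on-board updater ought to run before applying an over-the-air package. A production system would also verify a cryptographic signature, not merely a hash:

```python
import hashlib
from pathlib import Path

def verify_update(package: Path, expected_sha256: str) -> bool:
    """Compare the downloaded package's SHA-256 digest against the
    value published in the update manifest. A mismatch means the
    payload was corrupted in transit or tampered with."""
    digest = hashlib.sha256(package.read_bytes()).hexdigest()
    return digest == expected_sha256

def apply_update(package: Path, expected_sha256: str) -> None:
    if not verify_update(package, expected_sha256):
        # Keep running the current software rather than load a bad push.
        raise RuntimeError("OTA package failed integrity check; update rejected")
    # ...hand off to the installer only after the check passes...
```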

I hope this doesn't burst the self-driving car utopia that some are dreaming about. Realistically, we need to anticipate that stuff will go wrong and stuff will break. Right now, few of the self-driving car makers are developing their systems with sufficient redundancy and back-up capabilities. They are so focused on getting a self-driving car to simply drive the car that they figure once they've got things perfected, they can go back and look at the resiliency aspects. I understand their logic, but at the same time, an added layer of redundancy is better designed in at the start, rather than bolted on as a kludge later.

If a camera on the front right bumper goes bad, the AI should detect it. Images might be blurred or otherwise no longer interpretable. The AI then needs to consider what else to do. Assuming there is a camera up on the hood on the right side, that camera might now need to be considered the "primary" for detecting things in front of the car on the right side, since the camera on the right bumper is out of commission. The radar and LIDAR to the right might now become more vital, making up for the failed camera on the front right bumper. For any instance of a sensor that goes bad, the AI needs to assess what else on the self-driving car can potentially make up for the loss. It is like having someone poke you in one eye, after which you need to become dependent upon the other eye. You might also adjust how you walk and move, since you know that you cannot see out of the eye on that side of your body. The self-driving car might need to do the same, hampering certain kinds of maneuvers that the car would usually make, or even ruling out some maneuvers. Maybe the self-driving car opts to only make left turns and not make any right turns, until the sensor can be replaced.
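As a rough illustration of that failover reasoning, here is a minimal Python sketch, entirely my own construction (the sensor names, zones, and maneuver restrictions are hypothetical, and a real system would be far more elaborate), of promoting a backup sensor to primary and ruling out maneuvers that depend on the lost coverage:

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    zone: str          # coverage zone, e.g. "front-right"
    healthy: bool = True

class SensorManager:
    """Tracks the 'primary' sensor per coverage zone and promotes a
    backup when the primary fails."""

    def __init__(self, sensors):
        self.sensors = sensors

    def primary_for(self, zone):
        # The first healthy sensor covering the zone acts as primary.
        for s in self.sensors:
            if s.zone == zone and s.healthy:
                return s
        return None

    def report_failure(self, name):
        for s in self.sensors:
            if s.name == name:
                s.healthy = False

    def restricted_maneuvers(self, zone):
        # With no healthy coverage of a zone, rule out maneuvers that
        # depend on seeing into that zone.
        if self.primary_for(zone) is None and zone == "front-right":
            return ["right turn", "right lane change"]
        return []

# The bumper camera dies; the hood camera becomes the primary.
mgr = SensorManager([Sensor("bumper_cam_fr", "front-right"),
                     Sensor("hood_cam_fr", "front-right")])
mgr.report_failure("bumper_cam_fr")
print(mgr.primary_for("front-right").name)   # hood_cam_fr
```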

Consider the circumstances of when a sensor might go bad. If the car is in motion, the nature of the failed sensor could lead directly to a severe result. If you are moving at 80 miles per hour and the LIDAR is your only means of seeing ahead, and the LIDAR suddenly drops dead, you've now got a speeding missile that in a few seconds could ram into something. I realize that for the levels of self-driving cars that require a human driver to be ready to take over, you might argue that the human needs to grab the controls in this instance, but as I have repeatedly exhorted in my columns, dropping control of the car into the lap of a human driver is fraught with great peril (they won't have time to decide what to do, and even once they decide, they still need to take physical control).

And what about the utopia of the level 5 true self-driving car, which presumably has no controls at all for humans to drive the car? What happens when an essential sensor goes bad and there is no provision for the human to drive, even if they or the AI wanted them to do so? This is more than a scary movie; this is the real life we are heading toward. A level 5 self-driving car whose crucial sensor goes bad potentially becomes a multi-ton grim reaper itching to harm something or somebody. It's a scary plot for sure.

Suppose the self-driving car is stationary and a crucial sensor goes bad. This might be okay in some cases, assuming that the self-driving car is parked and out of the way of traffic. If instead the self-driving car has come to a halt at a red light, and the sensor suddenly fails, now you have a car blocking traffic. Other traffic might be kind and gently steer around the stopped self-driving car. Or, you might have some other car that drives up and doesn’t notice the stopped self-driving car, and rams into it, harming the occupants.

You might also have a case similar to my low tire pressure story, in which you start the self-driving car engine, it runs through internal diagnostics to make sure the sensors are good, and it then discovers a key sensor that has gone bad. If you are in a self-driving car that is below level 5, you presumably could decide to disengage the capability that involves the sensor and then drive the car yourself. This also brings up a larger question about the features of a self-driving car, namely, how much should the human driver be allowed to override or turn off a self-driving car feature? We are used to being able to decide whether to engage cruise control, and we can readily disengage cruise control whenever we want. Should the same be said of the other more advanced capabilities that will be in our self-driving cars? This is an open question, and we are seeing some self-driving car makers ignore the issue, while others are deciding a priori whether to allow this or not (we'll likely be seeing regulation on this; see my column about the regulatory aspects of self-driving cars).
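To illustrate the power-on diagnostics idea, here is a toy Python sketch (again my own construction; the sensor and feature names are hypothetical) that gates each self-driving feature on whether the sensors it depends on passed their self-test:

```python
def startup_diagnostics(sensor_checks, feature_requirements):
    """Decide which self-driving features may be engaged, given which
    sensors passed their power-on self-test.

    sensor_checks: dict of sensor name -> passed self-test (bool)
    feature_requirements: dict of feature -> set of required sensors
    """
    return {feature: all(sensor_checks.get(s, False) for s in required)
            for feature, required in feature_requirements.items()}

# Hypothetical example: the front camera fails its self-test, so lane
# keeping is withheld while adaptive cruise remains available.
checks = {"front_cam": False, "front_radar": True}
features = {"lane_keeping": {"front_cam"},
            "adaptive_cruise": {"front_radar"}}
print(startup_diagnostics(checks, features))
# {'lane_keeping': False, 'adaptive_cruise': True}
```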

In this discussion, I've pretended that the self-driving car can actually detect that a sensor has gone bad. But suppose that a sensor is still functioning, though only intermittently? My low tire pressure story resembles this intermittent aspect in that the sensors seemed to reboot themselves, though the problem could readily have recurred. The AI needs to be able to ascertain not only whether a sensor has failed entirely, but also whether it might be flaky, and then take appropriate action. The AI might try to reboot the particular sensor, or might opt to only collect data when the sensor seems to be functioning correctly.
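One plausible way to tell "flaky" apart from "dead" is to track the fraction of valid readings over a sliding window. Here is a minimal Python sketch; the window size and thresholds are arbitrary numbers of my own choosing, not industry values:

```python
from collections import deque

class SensorHealthMonitor:
    """Classify a sensor as healthy, intermittent, or failed based on
    the fraction of valid readings within a sliding window."""

    def __init__(self, window=100, intermittent_below=0.95, failed_below=0.20):
        self.readings = deque(maxlen=window)
        self.intermittent_below = intermittent_below
        self.failed_below = failed_below

    def record(self, reading_ok):
        self.readings.append(bool(reading_ok))

    def status(self):
        if not self.readings:
            return "UNKNOWN"
        ok_rate = sum(self.readings) / len(self.readings)
        if ok_rate < self.failed_below:
            return "FAILED"        # take the sensor out of service
        if ok_rate < self.intermittent_below:
            return "INTERMITTENT"  # reboot it, or use its data cautiously
        return "HEALTHY"
```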

More insidious is the sensor that does not appear to be faulty and yet really is. Suppose the AI is getting streams of data from the LIDAR, and so as far as the AI knows, the LIDAR is working properly. Imagine that every two seconds the LIDAR integrates noise into the stream, caused by an anomaly. The AI constructing images from that data might not realize that bogus readings are slipping into the processing. Sensor fusion takes place, and the "bad data" gets mixed into the rest of the data. Ghost or fake images might appear. This might lead the AI to take action such as avoiding an obstacle that is not present. The act of avoiding the obstacle might involve a radical maneuver that endangers the occupants of the self-driving car. All of this perhaps caused by a faulty sensor that was not so obviously faulty that it could easily be detected (there is also the instance of a sensor that has been hacked; see my column about cybersecurity and self-driving cars).
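One common-sense mitigation, shown in the toy Python sketch below (my own illustration, not any particular maker's fusion pipeline), is to require corroboration from multiple independent modalities before committing to a radical avoidance maneuver, so that a single noisy stream cannot act alone:

```python
def corroborated(detections, minimum=2):
    """Treat an obstacle as real only if at least `minimum` independent
    modalities (camera, radar, LIDAR) report it, so one noisy sensor
    cannot trigger an evasive maneuver by itself."""
    return sum(detections.values()) >= minimum

# A ghost object seen only by a noisy LIDAR is not acted upon:
print(corroborated({"lidar": True, "radar": False, "camera": False}))  # False
# The same object confirmed by radar crosses the threshold:
print(corroborated({"lidar": True, "radar": True, "camera": False}))   # True
```

The obvious trade-off is that a real obstacle seen by only one sensor would also be ignored, which is exactly why redundant, overlapping coverage matters in the first place.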

It is time to put serious attention into the redundancy and resiliency of self-driving cars. In my opinion, even a true level 5 self-driving car that does not have redundancy and resiliency is a cheap-trick level 5 car. In one sense, it is a car entirely driven by automation, but it is also a potential death trap waiting to harm or kill humans, because it is not prepared to handle the internal failings of the car itself. A dashboard display that tells you something has gone awry is not going to be sufficient when we humans are so dependent upon the AI and the self-driving car to do the driving.

Anyway, the silver lining is that there will be a boom in the marketplace for replacing all those bad sensors once they fail, and a spike in demand for skilled labor that can do the replacements will arise shortly after self-driving cars are sold widely. The supposedly you-will-be-out-of-a-job car mechanic of the future should not be overly worried that self-driving cars will put them out of business. Instead, with self-driving cars crammed full of specialized equipment, which will surely falter and fail over time, the job prospects for those mechanics are looking pretty good. Time to get my car mechanic's license.

This content is original to AI Trends.