When Accidents Happen to Self-Driving Cars


by Dr. Lance B. Eliot, AI Insider for AI Trends

Sinkholes. We recently had a major sinkhole open up in the middle of a busy street here in Southern California, and it swallowed whole the two cars that unluckily happened to be driving along that street at the time. Imagine being able to tell your friends that your car fell into a sinkhole. Your car didn’t hit a pothole, it didn’t sideswipe a telephone pole, it didn’t get hit by lightning; instead, it fell into a sinkhole. That’s some great bragging rights.

One of the cars that had fallen into the sinkhole had only sunk a brief distance and was caught on top of the first car that fell into the hole. The back portion of this second car protruded out of the sinkhole, rising a few feet above street level. The woman driving the car was able to get out, partially aided by the fire department, which had shown up to rescue the people in both vehicles. She was slightly injured, but otherwise okay from the ordeal. I suspect, though, that she is going to be envisioning the street opening up whenever she drives around town from now on. Anyway, her car was still in gear when she managed to extricate herself from the vehicle. On the newscast of the event, the rear tires of the car were shown spinning vigorously, as though the vehicle thought it was still driving along the street. This actually made for a dangerous situation, since no one knew what the now-abandoned car might do next. Gradually, it lurched forward, sank deeper into the sinkhole, and eventually stopped running.

Why my fascination with cars that got swallowed by the earth? The one car that kept running is an example of what a car might do when in an accident. In other words, some car accidents involve a crash that causes the car to stop functioning. Other accidents might leave the engine still running, which can create a grave hazard for everyone near the scene. It has been the case that some cars in a crash have suddenly moved forward or backward, endangering the driver, passengers, and rescuers. There is also a heightened chance of a fire or maybe even an explosion, since the car is still engaged and fuel is flowing to the engine, along with likely sparks and hot pieces of metal around the accident scene. All in all, a car accident and the surrounding scene can be a very dangerous place.

Let’s consider what will happen when self-driving cars are involved in an accident. Now, some of the ardent proponents of self-driving cars will immediately counter that there is no such thing as a self-driving car getting into an accident. They are of the camp that believes self-driving cars will usher in an idealized world wherein no cars ever get into accidents again. This is plain hogwash. As I argue in my column about the falsehood of zero fatalities related to self-driving cars, there will still be car accidents, in spite of whatever wondrous AI we see embodied in self-driving cars. There are going to be a mix of human-driven cars and self-driving cars for quite a while, and the two are bound to tango with each other. There are also lots of other opportunities for self-driving cars to get into accidents, including if the self-driving car has a severe internal hardware failure, or if the AI of the self-driving car encounters a bug in its software, and so on.

Assume for now that it is quite possible, and actually very probable, that self-driving cars will get into accidents. So what, you ask? The issue is that if the AI system of the self-driving car is still active, what will it do? A human driver in a car accident usually opts to stop trying to drive the car. They typically will try to get out of the damaged car and step away from the mechanical beast. This is not always the case, and of course if the accident is minor, the human driver might decide to drive off from the scene. We also know about hit-and-run circumstances, wherein the human driver hits someone or something and tries to scoot away without anyone else knowing what happened.

An AI-based self-driving car will need to be self-aware enough to know that the car has gotten into an accident. Humans know this pretty quickly: they have felt the blow of the impact, they can see the crushed metal and blood, they are physically hurt or restrained, they can smell burnt metal or spilled gasoline, and so on. There are lots of physical sensory clues for humans. A disembodied AI computer-based system won’t necessarily be able to gauge these same physical clues. Sure, the car will likely have come to a sudden halt, which is a clue that something is amiss. The cameras on the car might have captured the accident, and the AI system can interpret those images accordingly. We could also outfit the car with impact sensors and other devices that realize the car has gotten itself into trouble.
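
To make this a bit more concrete, here is a minimal sketch in Python of how such clues might be fused into a single accident-detection decision. Everything here is an illustrative assumption on my part: the sensor fields, the threshold, and the fusion rule are invented for the example, not any car maker’s actual design.

```python
from dataclasses import dataclass

# Hypothetical sensor snapshot; the field names are illustrative and
# not drawn from any real self-driving car platform.
@dataclass
class SensorSnapshot:
    decel_g: float                  # peak longitudinal deceleration, in g's
    airbag_deployed: bool           # flag from the airbag controller
    impact_sensor_hit: bool         # body-mounted impact sensor tripped
    camera_flagged_collision: bool  # vision system flagged a collision frame

def crash_detected(s: SensorSnapshot, decel_threshold_g: float = 4.0) -> bool:
    # Fuse several independent cues rather than trusting any single one.
    # A hard cue (airbag, impact sensor) counts on its own; softer cues
    # (a deceleration spike, the camera) must agree with each other.
    hard_cue = s.airbag_deployed or s.impact_sensor_hit
    soft_cues = s.decel_g >= decel_threshold_g and s.camera_flagged_collision
    return hard_cue or soft_cues

# An airbag deployment alone is enough to declare an accident.
print(crash_detected(SensorSnapshot(2.1, True, False, False)))  # True
```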

The key is then whether the AI system of the self-driving car knows what to do once it detects that a car accident has indeed happened. The AI system might continue to try driving the car, pushing on the accelerator, even though the car no longer can or should be driven. It is like the example I gave before of the car that fell into the sinkhole and whose tires continued to spin. That was a “dumb” car without any AI smarts. AI developers for self-driving cars need to make sure that the system can detect that an accident has happened, and then take appropriate actions based on the accident. This might include applying the brakes, turning off the engine, and taking other safety precautions.
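
As a rough illustration of what those “appropriate actions” might look like in code, consider this sketch. The VehicleControls interface and its method names are invented for the example; a real self-driving platform would have its own, far more elaborate, actuation APIs.

```python
# Hypothetical control interface; the methods just print what a real
# actuator layer would do.
class VehicleControls:
    def apply_brakes(self):  print("brakes applied and held")
    def shift_to_park(self): print("transmission shifted to park")
    def cut_engine(self):    print("engine/motor power cut")
    def hazards_on(self):    print("hazard lights activated")
    def unlock_doors(self):  print("doors unlocked for rescuers")

def post_crash_safety(controls: VehicleControls) -> None:
    # Order matters: stop any motion first, then remove power,
    # then make the car easier to approach and rescue from.
    controls.apply_brakes()
    controls.shift_to_park()
    controls.cut_engine()
    controls.hazards_on()
    controls.unlock_doors()

post_crash_safety(VehicleControls())
```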

Will the AI of the self-driving car, though, still be able to process information and take actions? Remember that once the accident has occurred, all bets are off as to which parts of the car are still functioning. Maybe the AI system no longer has access to any of the controls of the car. Or maybe the AI system itself is powered by the car, but now the car is no longer running and the battery was ejected during the accident. No power, no AI. All of these variations mean that we don’t know for sure that the self-driving car will be in any shape to take the appropriate safety precautions.

It is also possible that part of the AI system itself is damaged during the accident. Some sensors might be entirely offline. Some sensors might be working, but noisy and incomplete. Some sensors might be “working” yet providing incorrect data because they are no longer functioning as intended. Imagine if a sensor that detects the motion of the car is damaged in such a manner that it falsely reports the car is still driving forward. The AI, if functioning, might be misled into trying to command the controls of the car in an untoward manner. Any passengers or rescuers could be put in danger as a result.
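
One defensive tactic is to cross-check each sensor against independent sources before acting on it. The sketch below, with made-up sensor names and an arbitrary tolerance, shows the idea: if the wheel-speed sensor disagrees with GPS about whether the car is moving, the AI should treat the motion estimate as untrustworthy rather than command the controls based on it.

```python
from typing import Optional

def motion_estimate(wheel_speed_mps: float,
                    gps_speed_mps: float,
                    tolerance_mps: float = 2.0) -> Optional[float]:
    # If two independent speed sources agree, trust their average; if
    # they disagree, return None so the caller flags the fault instead
    # of acting on possibly bogus data (e.g., a damaged wheel sensor
    # falsely reporting forward motion).
    if abs(wheel_speed_mps - gps_speed_mps) <= tolerance_mps:
        return (wheel_speed_mps + gps_speed_mps) / 2.0
    return None

print(motion_estimate(12.0, 11.5))  # sensors agree -> 11.75
print(motion_estimate(12.0, 0.1))   # disagreement -> None
```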

Some believe that fire departments and police should have an electronic backdoor into the AI system of the car, so that upon arriving at a self-driving car accident scene, the humans can communicate directly with the AI system. They can use this communication link to instruct the self-driving car to do things, such as turn off the engine. They can use the link to find out what happened in the accident. The AI might also know how many passengers are in the car, which could help the rescue efforts, since the responders would know how many people need to be rescued. For many important reasons, this backdoor electronic communication makes a lot of sense.
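
Here is a minimal sketch of what such a link might look like, assuming a toy shared-key scheme. The command names, the key provisioning, and the HMAC approach are all my own illustrative assumptions; a real system would need far stronger, per-agency credentials.

```python
import hashlib
import hmac

# Hypothetical shared key provisioned to an emergency agency.
SHARED_KEY = b"issued-to-fire-department-unit-7"

def verify(command: bytes, signature: str) -> bool:
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_responder_command(command: str, signature: str, car: dict) -> str:
    # Reject anything not signed with the agency key, to keep the
    # backdoor from becoming a hacker's front door.
    if not verify(command.encode(), signature):
        return "rejected: bad credentials"
    if command == "ENGINE_OFF":
        car["engine_on"] = False
        return "engine shut down"
    if command == "OCCUPANT_COUNT":
        return f"occupants on board: {car['occupants']}"
    return "unknown command"

car_state = {"engine_on": True, "occupants": 2}
sig = hmac.new(SHARED_KEY, b"OCCUPANT_COUNT", hashlib.sha256).hexdigest()
print(handle_responder_command("OCCUPANT_COUNT", sig, car_state))
```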

As with any of these aspects, there are downsides to an electronic backdoor. Will only the proper officials use the backdoor, or could a nefarious hacker use it to take over the controls of your car? Even if the backdoor is there, maybe the AI system is so damaged that any information it provides is incorrect or misleading. One might also wonder about the privacy aspects of this electronic backdoor. Will humans be comfortable knowing that anything the AI system has recorded could now so easily be scanned by someone else, without a legal search warrant?

Self-driving car makers are considering putting the same kind of black boxes in their self-driving cars as are found in modern airplanes. This hardened black-box casing of crucial systems would not only record information, but also try to protect the AI system so that it could continue to function in an accident. This might not be the entire AI system, but perhaps just a core portion that can do fundamental activities and no more.
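
In software terms, the recording side of such a black box often amounts to a ring buffer that continuously keeps the most recent telemetry, so the seconds leading up to a crash survive. A toy sketch, with invented field names and sampling rates:

```python
import json
import time
from collections import deque

class CrashRecorder:
    # Keep only the most recent records, like an aircraft flight
    # recorder; 600 records at a 10 Hz sampling rate covers the
    # last minute of driving.
    def __init__(self, max_records: int = 600):
        self._buffer = deque(maxlen=max_records)

    def record(self, speed_mps: float, steering_deg: float, brake_pct: float):
        self._buffer.append({
            "t": time.time(),
            "speed_mps": speed_mps,
            "steering_deg": steering_deg,
            "brake_pct": brake_pct,
        })

    def dump(self) -> str:
        # A real unit would write to hardened, fire-resistant storage;
        # here we simply serialize the buffer to JSON.
        return json.dumps(list(self._buffer))

recorder = CrashRecorder()
recorder.record(speed_mps=27.0, steering_deg=-2.5, brake_pct=0.0)
print(recorder.dump())
```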

There are also advanced efforts to make AI systems more resilient, so that if only part of the AI system is still functioning, the other parts recognize this and adjust accordingly. For example, suppose the portion of the AI system that provides steering and mapping gets damaged; other parts of the AI system can either try to operate those aspects as a secondary backup, or take into account that those functions are no longer working and avoid anything that requires them. This adds a lot of complexity to the AI system, but given that the self-driving car involves life-and-death matters for humans, having sufficient complexity to protect humans is worth the added effort.
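
One way to frame this in code is a supervisor that reads each subsystem’s health and picks the safest capability level still supported. The subsystem names and fallback levels below are illustrative assumptions, not any real architecture:

```python
def choose_capability(health: dict) -> str:
    # Degrade gracefully: each failed subsystem rules out behaviors
    # that depend on it, rather than pretending everything still works.
    if all(health.values()):
        return "full autonomy"
    if health["steering"] and health["braking"]:
        return "limp to the shoulder"  # mapping is out: no route planning
    if health["braking"]:
        return "controlled straight-line stop"  # steering is out
    return "shut down and call for help"  # nothing trustworthy remains

subsystem_health = {"steering": True, "braking": True, "mapping": False}
print(choose_capability(subsystem_health))  # -> "limp to the shoulder"
```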

The AI system can even have a component devoted to saving humans when the car gets into a crash. Suppose the AI system is able to release the seat belts automatically, when it chooses to do so. Once a crash has occurred, passengers might be trapped in the car and unable to reach their seatbelt releases. Or the passengers might be unconscious. The AI system, assuming it is working properly in the crash aftermath, could take actions that would help the passengers and aid rescuers. This comes with the downside that the AI system might make the wrong choice, such as releasing a seatbelt that was holding a human upside down in a rolled-over car, who then drops and gets hurt by the seemingly innocent and helpful act intended by the AI system.
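
That rollover caveat suggests any release logic should be gated on the car’s state. A sketch, with made-up thresholds, of what such a guard might look like:

```python
def safe_to_release_seatbelt(roll_deg: float, speed_mps: float,
                             fire_detected: bool) -> bool:
    # Only release when the car has stopped and is roughly upright,
    # so an upside-down occupant isn't dropped on their head; a fire
    # overrides the orientation check, since staying belted is worse.
    stopped = speed_mps < 0.5
    upright = abs(roll_deg) < 30.0
    if fire_detected:
        return stopped
    return stopped and upright

print(safe_to_release_seatbelt(roll_deg=175.0, speed_mps=0.0,
                               fire_detected=False))  # rolled over -> False
```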

Few of the self-driving car makers are paying much attention to what the AI system should do during an accident. They are blissfully unaware of these considerations. They figure that once the car crashes, the AI system is no longer involved in what happens next. There could be some kind of switch that tries to automatically disengage the AI once the car crashes, turning the car into one large, somewhat immovable, multi-ton object. This can be useful in some crashes, and not so useful in others. For example, suppose the crash left the car still drivable, and you wanted to get the car off the road and onto a side street. In which instances should the AI be auto-disconnected, and in which should it be left on to help get the car out of the way or to greater safety?
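
Rather than a blanket kill switch, the decision could itself be conditional. A toy sketch of that judgment call, with inputs I have invented for illustration:

```python
def keep_ai_engaged(drivable: bool, controls_responsive: bool,
                    blocking_traffic: bool) -> bool:
    # Stay engaged only when the car can still move safely and there is
    # a concrete reason to move it (e.g., clearing a live traffic lane);
    # otherwise disengage and become a stationary multi-ton object.
    return drivable and controls_responsive and blocking_traffic

print(keep_ai_engaged(True, True, True))    # move the car to safety
print(keep_ai_engaged(False, True, True))   # disengage: not drivable
```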

AI researchers are looking at machine learning as an aid for figuring out what to do during a car crash. Imagine if you had the “experience” of thousands and thousands of car crashes and so could try to discern what to do during any particular one. This can be especially crucial in the moment the crash begins, since the evasive actions of the self-driving car can potentially produce fewer deaths and injuries. The self-driving car might realize that swerving will lessen the impact to the passengers, or that sharply hitting the brakes might reduce the injuries. This also raises the question of the ethics of the AI, for which I’ve provided a column that addresses those tough kinds of decisions that need to be made.
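
As a cartoon of the idea, imagine a library of prior crash records, each labeled with the evasive action taken, and a nearest-neighbor lookup over the current situation. The records, features, and scores below are fabricated purely for illustration; real research uses far richer models and data.

```python
import math

# Toy records of (speed m/s, gap to obstacle m) -> (action, injury
# score); every value here is invented purely for illustration.
PAST_CRASHES = [
    ((30.0, 10.0), "brake_hard",  0.7),
    ((30.0, 25.0), "swerve_left", 0.3),
    ((15.0,  8.0), "brake_hard",  0.2),
]

def pick_evasive_action(speed_mps: float, gap_m: float) -> str:
    # Nearest neighbor over the two features: reuse whatever action was
    # taken in the most similar recorded situation. A real learner would
    # also weigh the recorded injury outcomes, not just similarity.
    def distance(record):
        (s, g), _, _ = record
        return math.hypot(s - speed_mps, g - gap_m)
    _, action, _ = min(PAST_CRASHES, key=distance)
    return action

print(pick_evasive_action(28.0, 22.0))  # -> "swerve_left"
```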

I am keenly of the camp that says let’s not leave to chance what will happen when a self-driving car gets into an accident. We need to be explicit about what the car and its AI will do. We need to know whether there are redundancies and safeguards built into the AI system and the overall systems of the self-driving car. If we don’t carefully think about this, it will be by “accident” whether people are saved or killed when accidents happen. I would rather that my self-driving car have a purposeful, built-in approach to handling accidents. We know for sure that self-driving cars aren’t going to be accident-free, and so car makers need to make their cars as smart about coping with accidents as they are about driving.