Graceful Degradation System Handling for Self-Driving Cars


By Dr. Lance B. Eliot, the AI Trends Insider

My daughter was driving her car the other day on a steep incline and, after she came to a stop at a red light, all of a sudden the car shut off. No warning. No sputtering sounds. The engine just died. Immediately, all of the dashboard warning lights came on. It was not possible to discern which one might be a true indicator of the ailment because they were all illuminated at once. Of course, this was quite unsettling.

After a brief moment of being taken aback that the car was no longer running, she took the car out of Drive, put it into Park, and attempted to restart the engine. She was anxious for the restart to work, especially since there were other cars behind her, and once the light went green it would be a mess if she wasn’t able to move forward. She anticipated a cacophony of horns and angry yells to get out of the way. Unfortunately, the car didn’t restart at first. She tried again. Still nothing. She tried a third time, and luckily the engine started.

Upon calling me to let me know what had just happened, I recommended that she take the car right away to a nearby auto mechanic to have it inspected. She opted instead to drive around and see if it would repeat. It did not, and so it was shrugged off as a random fluke. In my experience, once a car exhibits any kind of failing, I become highly suspicious of it. I’ve had car mechanics look for an anomaly after I brought in a car that had experienced an ill moment, and even if they found nothing amiss, I insisted they look again. I figure that if a car falters once, that’s the fault of the car, but if the same thing happens twice, that’s on me. In essence, trick me once, okay, but I refuse to be tricked twice.

What does this have to do with self-driving cars?

At the Cybernetic Self-Driving Car Institute, we are developing AI that deals with how to handle a self-driving car that is experiencing some kind of malfunction.

Some auto makers talk about their self-driving cars as though they will never break down. I’ve heard politicians and other pundits say the same thing. Miraculously, self-driving cars are going to run flawlessly. Nothing will ever falter. They will be roadway machines of perfection. What a wonderful world it is going to be. Self-driving cars that drive themselves and never need to be fixed, never succumb to any machinery issues, they just keep on going, like the Energizer bunny.

What a crock!

Cars are cars. A self-driving car is still a mechanical device that is prone to having parts that wear out, parts that go bad, parts that might not have been defect free to start with, and so on. Self-driving cars will age. Aging cars have more breakdowns. Self-driving cars will need repairs. Self-driving cars will need replacement parts. It’s a car. It is not a magical flying carpet.

We have to get our heads out of this Utopian world of self-driving cars that are going to save the planet and so therefore are pure and pristine. Sure, self-driving cars will do a lot of interesting, novel, and useful things. At the same time, they will have the same failings as non-self-driving cars. Tires that go flat. Transmissions that fall apart. Spark plugs that need to be replaced. Engines that need to be rebuilt.

In one sense, you could even make the case that self-driving cars are going to have more troubles and failings than non-self-driving cars. This is logical because a self-driving car is filled with all sorts of high-tech that a non-self-driving car does not need. Into a self-driving car there will be numerous cameras, numerous radar devices, numerous sonar devices, perhaps LIDAR devices, and so on. Guess what happens when you start piling more and more physical devices into something? You have more things that will wear out or break. And, consequently, more things that need to be repaired and replaced.

Furthermore, you need computer processors to run the systems and AI. You need computer memory and various electronic storage devices. These too are going to wear out or break. In some respects, the self-driving car is going to be a dream for car mechanics and car repair shops. After the newness of self-driving cars has worn off, and once they start getting some real mileage on them, we are going to see those self-driving cars head into the repair shop. The cost to repair and replace is going to be high. That’s because you will be repairing and replacing not just the conventional parts of the car, but the high-tech, high-priced components too.

In fact, if you look closely at many of the self-driving car designs, there is not much thought being given to how you can readily remove, replace, or repair the high-tech components. No one thinks about that right now. They are just trying to get self-driving cars onto the roadway. Who cares what it will take to fix them? Nobody does now. It will be years until self-driving cars are pervasive, and anyway those first models will be bought by those with the wealth to afford a shiny new self-driving car. For them, the repair costs won’t be a big concern. All of this is not going to sink into the social consciousness until after self-driving cars are widespread and mid-income to lower-income owners are able to buy them.

Anyway, let’s get back to the key notion here that self-driving cars are going to falter at some point during their driving career. It is undeniable.

What will a self-driving car do when part of it falters? You would hope that the self-driving car would anticipate that things will go awry. Auto makers are not especially building redundancy into the high-tech components (which would drive up the cost of the car), nor are they crafting the systems and AI to be able to cope with malfunctions. If a self-driving car is at a level less than 5, meaning it still relies upon a human driver, the auto makers assume that the human driver will simply take over control of the car.

Though I have heartburn over that assumption, I’ll for the moment skip past the problems of that way of thinking, and instead point out that a Level 5 car had better take into account malfunctions. A Level 5 self-driving car is a car that is driven by the AI and can do anything that a human driver can do. Thus, there is no need for a human driver in a Level 5 self-driving car.

Let’s take the case of my daughter and her car that faltered while on a steep incline. If a Level 5 self-driving car were driving that car, and the engine had died while at a red light, what would have happened next? Right now, the AI of most self-driving cars might detect that the engine had quit. It would then likely do nothing other than alert the occupants that the car has come to a halt. That’s not very helpful, I’d say.

Our AI component for self-driving cars takes into account the myriad of ways that a self-driving car might falter, and then has ways to try and cope with it. For example, in the case of the car engine that suddenly died, the AI first tries to assess what happened, and also whether anything else is amiss on the car. My daughter tried to restart the car, but she probably would not have done so if say there was fire and smoke in the engine compartment. She would have realized that starting the engine would likely have been a bad idea in that circumstance.

Similarly, the AI needs to assess the contextual factors of the situation to try and ascertain what appropriate action to take.
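
As a thought experiment, here is a minimal sketch in Python of the kind of context check the AI might run before attempting a restart. The field names, thresholds, and response labels are illustrative assumptions on my part, not a description of any production system.

```python
from dataclasses import dataclass

@dataclass
class EngineStallContext:
    """Hypothetical snapshot of conditions at the moment the engine dies."""
    smoke_detected: bool
    fire_detected: bool
    coolant_temp_c: float
    restart_attempts: int
    on_incline: bool

MAX_RESTART_ATTEMPTS = 3      # assumed limit before giving up
OVERHEAT_THRESHOLD_C = 120.0  # assumed coolant temperature limit

def decide_stall_response(ctx: EngineStallContext) -> str:
    """Choose a response to a sudden stall based on context, not just the fault code."""
    if ctx.fire_detected or ctx.smoke_detected:
        # Restarting with fire or smoke present would clearly be a bad idea.
        return "do_not_restart_and_evacuate"
    if ctx.coolant_temp_c > OVERHEAT_THRESHOLD_C:
        return "do_not_restart_and_request_assistance"
    if ctx.restart_attempts < MAX_RESTART_ATTEMPTS:
        # Hold the brakes on the incline while cranking, then reattempt.
        return "hold_position_and_attempt_restart"
    # Restart attempts exhausted: warn traffic behind and summon help.
    return "activate_hazards_and_request_assistance"

print(decide_stall_response(EngineStallContext(
    smoke_detected=False, fire_detected=False, coolant_temp_c=95.0,
    restart_attempts=0, on_incline=True)))
```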

We refer to the ability to deal with failings as a form of coping with degradation of the functionality of the vehicle. It is our goal that the AI can achieve a graceful degradation, meaning that it tries to leverage whatever it can to keep the car going, if safe to do so, and tries to avoid actions that get the self-driving car and its occupants into dire circumstances.

The AI has a set of scenarios covering the permutations of limited functionality. There could be problems with the self-driving car that still allow the car to be driven. For example, a car with run-flat tires can still be driven on a flat tire, but it is recommended that you stay below a certain speed, such as 55 miles per hour, and limit yourself to relatively mild driving. The AI goes into a mode that befits the limited functionality presented by the car.
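
As a rough illustration, such a limited-functionality mode could be expressed as a lookup from detected faults to driving constraints, as sketched below in Python. The fault names and numeric limits (including the 55 mph figure used above) are assumptions for the example; when several faults are active at once, the sketch simply takes the most restrictive constraint.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingConstraints:
    max_speed_mph: float
    max_lateral_g: float
    allow_freeway: bool

# Illustrative mapping from a detected fault to a restricted driving mode.
DEGRADED_MODES = {
    "run_flat_tire": DrivingConstraints(max_speed_mph=55, max_lateral_g=0.20, allow_freeway=True),
    "weak_brakes":   DrivingConstraints(max_speed_mph=35, max_lateral_g=0.15, allow_freeway=False),
    "overheating":   DrivingConstraints(max_speed_mph=25, max_lateral_g=0.15, allow_freeway=False),
}

NORMAL_MODE = DrivingConstraints(max_speed_mph=65, max_lateral_g=0.35, allow_freeway=True)

def constraints_for(active_faults: set[str]) -> DrivingConstraints:
    """Combine the constraints of all active faults by taking the most restrictive values."""
    modes = [DEGRADED_MODES[f] for f in active_faults if f in DEGRADED_MODES]
    if not modes:
        return NORMAL_MODE
    return DrivingConstraints(
        max_speed_mph=min(m.max_speed_mph for m in modes),
        max_lateral_g=min(m.max_lateral_g for m in modes),
        allow_freeway=all(m.allow_freeway for m in modes),
    )

print(constraints_for({"run_flat_tire"}))
```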

This also means that the AI has to be able to determine what is working on the car and what is not working. A good self-driving car design must include the ability to check the status of the components of the car. Fortunately, most modern cars already have such capability built into them for the conventional elements of the car. We need to also make sure that the added high-tech elements that are there for the self-driving car capabilities are also being crafted to have self-diagnostic capabilities.
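
A minimal sketch of such a self-diagnostic pass might look like the following. The component names and the simple three-level health scale are assumptions; a real vehicle would draw on OBD-II style diagnostics for the conventional parts and vendor-specific self-tests for the added sensors.

```python
from enum import Enum

class Health(Enum):
    OK = "ok"
    DEGRADED = "degraded"
    FAILED = "failed"

# Hypothetical self-test hooks standing in for real diagnostics.
def check_front_camera() -> Health:
    return Health.DEGRADED   # e.g., field of view has narrowed

def check_front_radar() -> Health:
    return Health.OK

def check_lidar() -> Health:
    return Health.OK

COMPONENT_CHECKS = {
    "front_camera": check_front_camera,
    "front_radar": check_front_radar,
    "lidar": check_lidar,
}

def run_self_diagnostics() -> dict[str, Health]:
    """Poll every registered component and report its current health."""
    return {name: check() for name, check in COMPONENT_CHECKS.items()}

status = run_self_diagnostics()
faulty = [name for name, health in status.items() if health is not Health.OK]
print(status)
print("Needs attention:", faulty)
```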

Let’s focus on the failing or degradation of the add-on high-tech elements for a self-driving car.

Suppose a camera at the front of the vehicle seems to be experiencing a malfunction. The AI needs to try and detect whether the camera is entirely unusable, or maybe it is partially usable. If partially usable, what aspects of the video or pictures captured are reliable and which are not?

It could be that the camera no longer has a wide view and can only provide a narrow view. If so, the AI needs to then ascertain what impact it has on the sensor fusion and the detecting of the real-world driving situation. Maybe the radar now becomes more prominent in trying to detect what is ahead, while the camera becomes secondary in importance.
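
One simple way to express that shift is to re-weight the forward sensors in the fusion step so that the radar carries more of the load when the camera is degraded. The sketch below is purely illustrative: the base weights and penalty factors are assumptions, and a real system would reason over far richer sensor models.

```python
def fusion_weights(camera_health: str, radar_health: str) -> dict[str, float]:
    """Assign relative trust to each forward sensor, normalized to sum to 1."""
    base = {"camera": 0.6, "radar": 0.4}              # assumed nominal weights
    penalty = {"ok": 1.0, "degraded": 0.4, "failed": 0.0}
    raw = {
        "camera": base["camera"] * penalty[camera_health],
        "radar": base["radar"] * penalty[radar_health],
    }
    total = sum(raw.values())
    if total == 0:
        raise RuntimeError("No usable forward sensors; trigger a minimal-risk maneuver")
    return {name: weight / total for name, weight in raw.items()}

# With a degraded camera, the radar becomes the more prominent source.
print(fusion_weights("degraded", "ok"))   # {'camera': 0.375, 'radar': 0.625}
```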

Balancing the capability of one sensor against the other becomes crucial in these situations. The AI must be aware of which sensory device is providing what kind of insight about the driving situation. There is also the possibility that more than one sensory device at a time will falter. Suppose the front bumper of the self-driving car has struck something in the roadway. The right headlight is busted, the right sonar and radar devices placed near the bumper are no longer functioning, and the long-view camera there is now working only intermittently. The car is still drivable, but now the car is somewhat blinded to the roadway and the driving circumstances.

A human could still drive the car. But with a Level 5 car, there is presumably no provision for a human driver, since the car is supposed to be drivable entirely by the AI. Thus, the AI needs to figure out how to deal with this situation. If driving on a freeway, the AI might update its action plan to safely and progressively get the car off the freeway and onto side streets.
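
A rough sketch of that kind of fallback planning is below. The sensor names and maneuver labels are invented for the example; the point is only that the plan becomes more conservative as more of the right side goes blind.

```python
def plan_fallback(lost_sensors: set[str], on_freeway: bool) -> list[str]:
    """Sketch a fallback maneuver plan when the car is partially blinded."""
    plan = ["reduce_speed", "increase_following_distance"]
    if {"right_radar", "right_sonar"} & lost_sensors:
        # Blind on the right: avoid right-hand lane changes except to exit.
        plan.append("avoid_right_lane_changes")
    if on_freeway:
        plan += ["signal_and_exit_at_next_offramp", "continue_on_side_streets"]
    plan.append("route_to_repair_shop_or_safe_stop")
    return plan

print(plan_fallback({"right_radar", "right_sonar", "long_view_camera"}, on_freeway=True))
```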

There are typically two ways to deal with a failing component: make a fail “open” assumption or make a fail “closed” assumption. A fail-open assumption means the system treats the item as on or permissive by default, even when the item can no longer properly report its state. For example, if a building loses power and its doors are controlled electronically, the building system might default to leaving the doors unlocked and open, rather than locked and closed. In the case of, say, a bank vault, it is usually the opposite: if the power goes out, the bank would prefer that the vault doors stay closed and cannot be opened.

The same is the case for the self-driving car. The AI anticipates that under various scenarios some of the high-tech components should be treated as failing open, while others should be treated as failing closed. It all depends on the nature of the component and what it does, along with what kind of redundancy and resiliency is built into it.
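
As an illustration, those fail-open versus fail-closed defaults could be captured in a simple per-component table, as sketched below. The component names and policy assignments are assumptions; in practice they would come out of a formal safety analysis rather than a code snippet.

```python
from enum import Enum

class FailurePolicy(Enum):
    FAIL_OPEN = "treat the component's output as permissive when it cannot report"
    FAIL_CLOSED = "treat the component's output as restrictive when it cannot report"

# Illustrative per-component defaults.
FAILURE_POLICIES = {
    "door_locks": FailurePolicy.FAIL_OPEN,            # let occupants out if power is lost
    "obstacle_detection": FailurePolicy.FAIL_CLOSED,  # assume an obstacle is present when unsure
    "lane_keeping_camera": FailurePolicy.FAIL_CLOSED, # slow down and pull over when unsure
}

def on_component_silence(component: str) -> str:
    """Default to fail-closed for anything not explicitly listed."""
    policy = FAILURE_POLICIES.get(component, FailurePolicy.FAIL_CLOSED)
    return f"{component}: {policy.name} -> {policy.value}"

for component in ("door_locks", "obstacle_detection", "unknown_widget"):
    print(on_component_silence(component))
```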

Auto makers are right now playing a somewhat dangerous game about how they are designing their self-driving cars.  Allow me to explain.

There is something called an Error Budget, well known amongst systems designers, which refers to the notion that there is a balance between the cost of building in reliability and resiliency into a system and the pace of innovation. Generally, the more you put into the reliability and resiliency, the more it tends to retard the pace of innovation. Since the impetus to get to a self-driving car is right now all about getting there first, the pace of innovation has the highest attention and drive.

Only once self-driving cars are commonly on the road will it become apparent that the cost of reliability and resiliency was forgone. One can only hope that the pace of innovation was not so frantic that the self-driving cars are useless when it comes to dealing with malfunctions. We also need to deal with the rather unsettling idea that the AI itself might malfunction. This is why our Lab has been developing AI self-awareness, trying to be able to detect and take action if the AI of the self-driving car has gone amiss. It can happen, and it will happen, since the AI is being reshaped while the car is being driven (it is using machine learning and so is continually changing).
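
As a very rough illustration of that idea, a supervisory watchdog could sanity-check the driving AI's outputs and their timing, and force a minimal-risk stop when they go out of bounds. The thresholds and the safe-stop hook below are assumptions, not a statement of how any actual self-driving stack monitors itself.

```python
import time

class PlannerWatchdog:
    """Supervisor that watches the driving AI's outputs and triggers a safe stop
    if they arrive late or look implausible."""

    def __init__(self, max_staleness_s: float = 0.2, max_steering_deg: float = 30.0):
        self.max_staleness_s = max_staleness_s
        self.max_steering_deg = max_steering_deg
        self.last_output_time = time.monotonic()

    def check(self, steering_deg: float, speed_mph: float, speed_limit_mph: float) -> bool:
        now = time.monotonic()
        stale = (now - self.last_output_time) > self.max_staleness_s
        implausible = (abs(steering_deg) > self.max_steering_deg
                       or speed_mph > speed_limit_mph + 10)
        self.last_output_time = now
        if stale or implausible:
            self.safe_stop()
            return False
        return True

    def safe_stop(self) -> None:
        # Placeholder for handing control to a simpler, verified fallback controller.
        print("Watchdog: planner output missing or implausible; initiating minimal-risk stop")

watchdog = PlannerWatchdog()
watchdog.check(steering_deg=45.0, speed_mph=40.0, speed_limit_mph=45.0)  # triggers safe_stop
```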

Graceful degradation needs to apply to all facets of a self-driving car. This includes the conventional parts of the car, the high-tech components needed for a self-driving car, and the AI wizardry that is driving the self-driving car. Let’s build graceful degradation into it now, and not wait until later on, once self-driving cars have faltered on the roadway and gotten themselves and their occupants into dire situations.

This content is originally posted on AI Trends.