DUI Drunk Driving by Self-Driving Cars: Prevention is the Cure


By Dr. Lance B. Eliot, the AI Insider

During Memorial Day weekend, drunk drivers come out in droves. I was on the road after attending a beach BBQ and saw, on Pacific Coast Highway (PCH), a car weaving back and forth across the lanes of traffic ahead of me. The driver was also sporadically speeding up and then slowing down. Though I could not see the actual driver, his driving behavior was a giveaway that he was likely drunk, or as some prefer to say, "alcohol-impaired." I opted to remain a sizable distance behind him. Meanwhile, other cars around me weren't willing to stay behind the weaving car and decided to drive past him. As they did so, he nearly swayed into them. It was a very dangerous situation, playing out at speeds of 40 to 65 miles per hour.

In the United States, roughly 30 people die every day in car crashes involving drunk drivers; the statistic is sometimes framed as a drunk driving death occurring about every hour. I've covered some of these facets in my column about accident driving stats and self-driving cars. Most of the mass media articles about drunk driving that also touch upon self-driving cars claim that the advent of self-driving cars will presumably do away with any drunk driving related deaths. I debunk this in my other piece. In this piece, I'd like to home in on a particular facet that no one seems willing to talk about, namely when a self-driving car drives like a drunk driver.

I realize you might be taken aback by that statement. How can a self-driving car ever drive like a drunk driver?  The self-driving car isn’t consuming large quantities of alcohol. It can’t go over to the nearby liquor store and get a fifth of scotch.  On the surface, my claim that a self-driving car might drive like a drunk driver seems fallacious. We know that a computer is not going to drink alcohol, and so it seems impossible to have a self-driving car wherein the AI is drunk. Well, get ready to have your eyes opened.

At our Cybernetics Self-Driving Cars Lab, we have been exploring what happens when a self-driving car drives in a drunken-like manner.

I am not suggesting that the AI gets drunk, but rather that the AI can act in a manner that we associate with drunk driving. For example, in my story about driving down PCH, I mentioned that the car ahead of me was weaving across lanes and sporadically speeding up and slowing down. I could not see the actual human driver, nor could I give the driver a roadside sobriety test. All I knew and could detect was that the car was acting in illegal or at least unsafe ways, and I deduced that the driver was likely drunk. The driver might have been suffering a heart attack and not actually been drunk, or maybe he was swatting at a bee inside the car that was trying to sting him. I didn't know for sure that he was drunk. I knew for sure that he was driving in a drunken-like manner.

The same can be said of a self-driving car. There are circumstances under which a self-driving car might drive in a manner that we would infer implies drunk driving. I know that some advocates of self-driving cars will get very angry with me about this and will fiercely insist that no self-respecting self-driving car would ever drive in an amiss manner. In their utopian world, all self-driving cars drive perfectly all of the time. What a crock. Even worse, this kind of claim by so-called self-driving car experts is misleading the public and regulators. It is a ticking time bomb. Ultimately we will realize that self-driving cars can drive like drunk drivers, and once someone gets hurt by a "drunk driving" AI self-driving car, there will be heck to pay as the public turns sour on self-driving cars and regulators are forced into putting in place Draconian laws that will disrupt and suppress progress on self-driving cars.

Why would a self-driving car drive in a drunken manner? There are several ways in which this could readily arise. Let’s take a look at the most common ways that this can happen.

Faulty Sensors.

A self-driving car relies upon the sensors that are mounted on and in the car to be able to sense the world around it. These sensors include cameras, radar, LIDAR (see my column on LIDAR), and other capabilities.  Suppose that a sensor becomes faulty. The sensor fusion by the AI might be misled into believing that there is a car next to it that is trying to come into its lane, and so the self-driving car suddenly changes lanes, even though the other car is not really there (it is considered a “ghost” concocted by the faulty sensor).

This kind of lane changing and speeding up and slowing down could be undertaken by a self-driving car that is being fed faulty data by its sensors. The AI believes it is doing the right thing and protecting the car and its occupants. Meanwhile, if we were watching the self-driving car, we would think it was drunk driving. We would have no ready way of knowing that the faulty sensors were confusing the AI. This is analogous to the drunk driver whom we don't know for sure is drunk and whose wanton behavior leads us to infer that he must be.
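To make the "ghost" scenario concrete, here is a minimal sketch of one mitigation, requiring that two independent sensors corroborate a detection before the driving logic reacts to it. All function names and the tolerance value are hypothetical, not any particular vendor's implementation:

```python
# Minimal sketch (hypothetical names and thresholds): require that a LIDAR
# detection corroborate each radar detection before the planner reacts.
# A single faulty sensor can then report a "ghost" car without triggering
# a sudden lane change.

def corroborated_detections(radar_hits, lidar_hits, max_gap_m=1.5):
    """Keep only radar detections that a LIDAR detection confirms.

    Each detection is an (x, y) position in meters relative to the car.
    """
    confirmed = []
    for rx, ry in radar_hits:
        for lx, ly in lidar_hits:
            if abs(rx - lx) <= max_gap_m and abs(ry - ly) <= max_gap_m:
                confirmed.append((rx, ry))
                break
    return confirmed

# A faulty radar reports a car alongside us; the LIDAR sees nothing there.
radar = [(3.0, 0.5)]   # "ghost" in the adjacent lane
lidar = []             # no corroborating return
print(corroborated_detections(radar, lidar))  # [] -> no evasive lane change
```

Without some corroboration rule of this kind, the planner has little choice but to trust whatever a lone sensor reports.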

Fusion Issues.

The sensory data coming into the self-driving car is assembled and analyzed via a process often known as sensor fusion. The sensor fusion process consists of piecing together the various sensory data coming from the multiple sensors, and then trying to craft a single comprehensive view of the world around the self-driving car. This requires merging together the radar data coming from several radar devices dispersed around the car, merging together camera images and video streams coming from cameras mounted all around the car, merging together LIDAR data collected in 360-degree sweeps, and so on.

The software doing the sensor fusion can have bugs in it. These bugs might mislead the system into believing that the outside world is different from reality, and this distorted view is then fed into the AI that has to decide how to drive the car. If the fusion is telling the rest of the AI that there is debris in the roadway up ahead, perhaps falsely due to a mistake in the fusion algorithm, the AI is going to swerve the car to avoid the non-existent debris. From an outside perspective, all we would see is the self-driving car making an unnecessary, radical swerve, and we would be perplexed since there was no apparent reason to do so. We'd think it was drunk driving.
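As one hypothetical illustration of how small a fusion bug can be, consider a unit-conversion mistake. The sketch below is invented for this piece; the function and values are not from any real system:

```python
# Hypothetical sketch of a unit-conversion bug in sensor fusion. The camera
# pipeline already reports range in meters, but the fusion code "converts"
# it from feet, shrinking the distance roughly 3x. Debris actually 60 m
# ahead fuses to ~18 m, and the planner swerves for a distant non-threat.

FEET_PER_METER = 3.28084

def fuse_obstacle_range(radar_range_m, camera_range_m):
    # BUG: the camera range is already in meters; this division is spurious.
    camera_range = camera_range_m / FEET_PER_METER
    return min(radar_range_m, camera_range)

print(fuse_obstacle_range(60.0, 60.0))  # ~18.3 -> "debris" looks imminent
```

One wrong line in the merging logic, and the AI's entire picture of the road is skewed while every sensor is working perfectly.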

Machine Learning False Learnings.

The AI of a self-driving car often learns how to drive via machine learning. The machine learning is based on tons of data fed into the system. Machine learning can be so complex that we don't know for sure what the system "knows," nor why it knows what it knows. In a sense, it is like a black box. The behavior of the system is what tells us whether the machine learning is doing a good job or not.

Suppose that the machine learning found a pattern in traffic data suggesting that whenever a red colored car ahead is going more than 80 miles per hour, that car is likely to make a rapid lane change into the adjacent lane. Based on this pattern, the system might be triggered, upon detecting a red colored car that meets the criteria, to take the "safe" action of preemptively changing lanes to avoid having that red colored car merge into it. For those of us observing the self-driving car, we'd have no idea why it suddenly opted to change lanes. We might think it was drunk driving.
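Here is a toy sketch of that spurious-correlation scenario. The feature names, risk numbers, and threshold are all invented stand-ins for whatever a trained model would actually output:

```python
# Illustrative sketch (invented names and values): a learned policy picked
# up a spurious correlation -- red cars above 80 mph "tend to" cut in --
# and now preemptively changes lanes whenever the pattern matches. To an
# outside observer, the lane change looks unmotivated.

def learned_cut_in_risk(lead_car):
    """Stand-in for a trained model's output probability."""
    risk = 0.1
    if lead_car["color"] == "red" and lead_car["speed_mph"] > 80:
        risk = 0.9  # spurious pattern the training data happened to contain
    return risk

def plan_maneuver(lead_car, threshold=0.7):
    if learned_cut_in_risk(lead_car) > threshold:
        return "change_lanes_preemptively"
    return "hold_lane"

print(plan_maneuver({"color": "red", "speed_mph": 85}))   # change_lanes_preemptively
print(plan_maneuver({"color": "blue", "speed_mph": 85}))  # hold_lane
```

The learned rule is internally consistent, yet its trigger is invisible and inexplicable to anyone watching from the next lane over.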

AI Algorithmic Probabilities and Uncertainties.

Any true self-driving car must contend with probabilities and uncertainties. The real world of driving is not a one-hundred percent guaranteed situation. Will that pedestrian step off the sidewalk and into the path of the self-driving car? Assign a probability to it, and the self-driving car will react accordingly. Will that big truck to my right fail to realize I am in its blind spot and try to change lanes into me? Assign a probability to it. There are lots and lots of probabilities and uncertainties involved.

When dealing with probabilities and uncertainties, the self-driving car and its AI are going to take actions based on various thresholds. If the AI believes that the pedestrian is going to step off the sidewalk, it will take evasive action, such as instructing the self-driving car to come to a sudden halt. Suppose the pedestrian does not actually attempt to dart into the street. All that we would see is the self-driving car inexplicably coming to a halt. Bizarre, we might think. Drunken driving, we might ascribe.
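A minimal sketch of such threshold-based decision making follows; the threshold value is an invented tuning parameter, not a standard:

```python
# Minimal sketch of threshold-based decisions under uncertainty (the
# threshold is hypothetical). If the estimated probability that a
# pedestrian steps off the curb crosses it, the car brakes hard. When the
# pedestrian then stays put, observers see only an inexplicable stop.

BRAKE_THRESHOLD = 0.6  # hypothetical tuning value

def react_to_pedestrian(p_steps_into_road):
    if p_steps_into_road >= BRAKE_THRESHOLD:
        return "emergency_stop"
    return "continue"

print(react_to_pedestrian(0.65))  # emergency_stop -- pedestrian never moved
print(react_to_pedestrian(0.30))  # continue
```

Set the threshold low and the car stops "drunkenly" for phantom threats; set it high and it fails to stop for real ones. There is no setting that makes the uncertainty go away.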

Computer Processors and Memory Issues.

The AI that is driving the self-driving car must rely upon lots of computer processors and lots of computer memory to perform all of its calculations and efforts. These processors and their memory are hardware components that always have some chance of going faulty or failing. Think about your home PC that often runs out of memory and needs a reboot. I am not saying that the processors and memory of an automotive system are the same per se, but merely pointing out that they are hardware and will eventually break down.

If the computer processors or memory go bad, they can impair the AI software. If the AI software is impaired, it might issue commands to the car's automotive controls that were never intended. The next thing you know, the self-driving car is making seemingly strange turns and actions that we can't readily explain. Drunk driver.
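One common defensive pattern is a plausibility check between the software and the actuators. The sketch below simulates a single bit flip corrupting a steering command and a sanity gate rejecting it; the limits and names are assumptions for illustration:

```python
# Hypothetical sketch: a hardware fault (here, a simulated single bit flip)
# corrupts a steering command, and a plausibility check rejects the
# impossible value before it reaches the actuators.

MAX_STEER_DEG = 35.0  # assumed physical limit of the steering rack

def bit_flip(value_int, bit):
    """Simulate a single-event upset flipping one bit of an integer."""
    return value_int ^ (1 << bit)

def plausible_steer(command_deg, last_deg, max_step_deg=5.0):
    """Reject commands outside physical limits or with implausible jumps."""
    if abs(command_deg) > MAX_STEER_DEG:
        return last_deg  # hold the last known-good command
    if abs(command_deg - last_deg) > max_step_deg:
        return last_deg
    return command_deg

intended = 2                        # 2 degrees to the right
corrupted = bit_flip(intended, 6)   # bit flip -> 66 degrees
print(plausible_steer(corrupted, last_deg=1.5))  # 1.5 -> fault contained
```

Without a gate of this kind, a momentary hardware glitch translates directly into a "drunken" jerk of the wheel.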

Internet or External Communications.

Most self-driving cars rely upon external communications, such as the Internet, to convey aspects of how they are driving to some kind of centralized system. The centralized system collects this data and uses it to do fleet-wide, galactic-style machine learning, the results of which can be shared back to the individual cars and whatever individualized machine learning they are doing.

Imagine that the external communications feed some kind of instruction into your self-driving car, and the car opts to believe that it should take evasive action. The instruction is erroneous, but the car doesn't realize it. For example, the centralized system reports that there is a massive pile-up of cars ahead, so the car should get off the freeway right away to avoid it. If we were watching the self-driving car and saw it dart to a freeway exit, we might not know why and would wonder whether it was exhibiting drunk driving behavior.
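One defensive posture is to treat remote advisories as hints to verify rather than commands to obey. Here is a hedged sketch with an invented message format:

```python
# Hedged sketch (invented message format): treat a centralized "pile-up
# ahead, exit now" advisory as a hint to verify against onboard perception,
# not a command to obey blindly.

def should_take_exit(advisory, onboard_sees_congestion, advisory_trusted):
    """Act on a remote advisory only if it is authenticated AND at least
    partially corroborated by the car's own sensors or map data."""
    if not advisory_trusted:
        return False  # unauthenticated message: ignore
    if advisory["type"] == "pileup_ahead" and onboard_sees_congestion:
        return True
    return False  # uncorroborated claim: keep course, flag for review

msg = {"type": "pileup_ahead", "miles_ahead": 2}
print(should_take_exit(msg, onboard_sees_congestion=False, advisory_trusted=True))
# False -> no sudden dart to the exit on an erroneous report
```

The trade-off is real: corroboration slows the car's reaction to genuine warnings, but blind obedience means a bad message upstream becomes bad driving downstream.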

The aforementioned aspects are all realistic ways in which a self-driving car could be considered to be acting like a drunk driver.  The types of actions that we might see include these:

• Swerving across lanes needlessly
• Straddling a lane without apparent cause
• Taking wide turns rather than proper tight turns
• Driving onto the wrong side of the road
• Driving onto the shoulder of the road
• Driving in an emergency lane
• Driving too slowly for the roadway situation
• Driving too fast for the roadway situation
• Nearly hitting another car
• Cutting off another car
• Nearly hitting a pedestrian, bicyclist, or motorcyclist
• Following too closely behind the car ahead
• Stopping when it seems unnecessary
• Rolling past stop signs
• Running a red light
• Other

Any and all of these are actions that a self-driving car might take. The self-driving car might take these actions by intent, meaning that it thought the action was warranted given the existing driving conditions, or it might take them by mistakenly invoking a routine that should not have been invoked. For example, the AI might have a routine or algorithm for purposely driving on the wrong side of the road, which is warranted in certain situations, as you as a human driver have undoubtedly encountered. That specific routine could be intentionally or unintentionally invoked by the AI, and the next thing you know the self-driving car has gone into the opposing lanes of traffic.

What are we to do about self-driving cars that seem to be drunk driving?

First, we need to make self-driving cars as safe as possible so that they won't do the drunken driving. As I have mentioned in my prior columns, this involves ensuring that the sensors have sufficient redundancy and are resilient in the real world. It requires testing the AI systems to make sure buggy behavior does not crop up. It requires layers of safety systems that check and re-check the actions of the AI, the self-driving car, and its machine learning, double-checking to ensure that there is a bona fide reason for the movements made by the system, as sketched below. And so on.
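To illustrate that layered checking, here is a minimal sketch of a last-line safety gate that vetoes maneuvers lacking a bona fide justification. All the rule names and values are hypothetical:

```python
# Illustrative sketch of a last-line safety layer (all names hypothetical):
# every maneuver the planner proposes is re-checked against simple,
# independently computed rules before it is executed.

def safety_gate(maneuver, world):
    """Veto maneuvers that lack a bona fide justification."""
    if maneuver == "swerve_left" and not world["obstacle_in_lane"]:
        return "hold_lane"          # no reason to swerve
    if maneuver == "emergency_stop" and world["clear_ahead_m"] > 100:
        return "gradual_slow"       # soften an unjustified panic stop
    return maneuver                 # otherwise, trust the planner

world = {"obstacle_in_lane": False, "clear_ahead_m": 150}
print(safety_gate("swerve_left", world))     # hold_lane
print(safety_gate("emergency_stop", world))  # gradual_slow
```

The gate is deliberately simple so that its own correctness can be argued independently of the complex planner it is checking.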

Second, we have to acknowledge that drunken behavior can occur by self-driving cars. There are way too many self-driving car makers that are using the head-in-the-sand approach and pretending that this can never happen. As I have mentioned in this piece, avoiding it is not the answer. In the end, it will happen and it could be the death knell for self-driving cars.

Third, we need to consider the role of the human driver in the self-driving car. I know that most self-driving car makers insist that if the self-driving car were to exhibit drunken driving, the human driver is responsible for the driving of the car and needs to take over the controls. This is the stance for levels 0 to 4 of self-driving cars (see my article about the Richter scale for self-driving car levels), but even there it is a misleading and dangerous claim. How is the human driver to know that the self-driving car is making a mistake? Maybe swerving or stopping makes sense in a given situation. Even if the human driver realizes that a drunk driving act is occurring, will they be able to react in sufficient time to avoid a calamity?

For level 5 self-driving cars, the viewpoint is that there isn't any means for the human driver to take over control (an idea I have bashed in my other columns), and so the human occupants must just hope and pray that the drunken driving self-driving car does not injure or kill them. Without any means to override the self-driving car and its AI, the humans must blindly put their faith in what the self-driving car is doing. This utopian view of self-driving cars is often promulgated by pundits of self-driving cars. It's a scary belief, and one that they are deluding themselves and the rest of the world into holding.

Fourth, we have to decide whether or not we want to allow some kind of external control over our self-driving cars. There are some who believe that once we have pervasive V2V (vehicle-to-vehicle communications), we will have self-driving cars that operate as a collective. They communicate with each other and regulate each other. Presumably, in this realm, if a self-driving car detected that another self-driving car was driving drunkenly, it could warn that self-driving car and/or even override what it is doing. Do we want that to happen? It has both promise and peril. Suppose the other self-driving car is wrong, and it mistakenly or "drunkenly" tells a non-drunk self-driving car to take a bad action?
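One possible safeguard against a single "drunk" peer is to require consensus before accepting any V2V override. This sketch invents all the protocol details purely to show the voting idea:

```python
# Sketch of one possible guard (all protocol details invented): a V2V
# override is accepted only if a quorum of nearby vehicles independently
# reports the same hazard, so a single "drunk" peer cannot steer us wrong.

def accept_override(reports, quorum=3):
    """Accept an override action only if >= quorum vehicles agree on it."""
    counts = {}
    for action in reports:
        counts[action] = counts.get(action, 0) + 1
    best_action, votes = max(counts.items(), key=lambda kv: kv[1])
    return best_action if votes >= quorum else None

# One confused car demands a hard stop; two others see normal traffic.
print(accept_override(["hard_stop", "proceed", "proceed"]))  # None: ignore
print(accept_override(["slow_down"] * 3))                    # slow_down
```

Of course, a quorum only helps if the voters are independent; a systematic flaw shared across the fleet could make the collective confidently wrong.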

Likewise, if we have any kind of centralized control of self-driving cars, it can be a boon or it can be an adverse Big Brother. Some believe that police, for example, should be able to take over the controls of a self-driving car. Imagine that bank robbers are trying to get away in a self-driving car; the police could simply route the car to the local police station. Of course, the government could potentially use that power for other, more nefarious purposes.

Should a drunken driving self-driving car get a DUI ticket? As preposterous as that seems, we do need to consider what will happen when a self-driving car acts in a wanton fashion. We need some means to get that self-driving car fixed, such as when the sensors are faulty or the AI is buggy. The occupants of the self-driving car might be blissfully unaware that their car is acting in this fashion. Hopefully, with the right kinds of on-board detection and safety systems, the self-driving car will alert the occupants that something is amiss. If not, it might be that the police or some authority notifies the occupants that their self-driving car needs to "sober up" and get fixed. In any case, the point here is that we need to be realistic about the fact that self-driving cars will have the potential for acting in a manner that we would perceive as drunk driving. Let's take precautions to anticipate this outcome. Drive safely out there.

This content is original to AI Trends.