Use Cases Of Wrong Way Driving and Needed AI Coping Strategies For Autonomous Cars


By Lance Eliot, the AI Trends Insider

I sheepishly admit that I have been a wrong way driver. There have been occasions when I drove the wrong way, luckily without any undesirable outcome, and I certainly regret having mistakenly gone astray. It has happened several times inside parking garages or parking lots. I'd dare say that many have made the same error and were likely as confounded by poor signage and convoluted paths as I was. Fortunately, I didn't go up a down alley, nor did I go down an up alley. I just ended up going against the alignment of cars in parking spots and quickly realized that I must be going in the wrong direction.

When you suddenly realize that you are heading in the wrong direction, it can be relatively disorienting.

How did I get mixed up? Did I miss seeing a sign that warned about going in this direction? The next thought that you have is what to do about the situation.

Should you continue forward, even though there is now a chance that you'll come head-to-head with a car that is going in the correct direction? Or should you back up, which at least has your car pointed in the proper direction, though a lengthy effort of backing up carries its own dangers.

You can also potentially stop the car wherever you happen to be. At least a stopped car would hopefully be less likely to spark a car accident than one that is in motion and going the wrong way. I'm not suggesting that being stopped is necessarily safe, and it could still put you and other cars in danger. Even if you do come to a stop, you obviously cannot just sit there until the cows come home and will ultimately need to decide what to do about the situation.

For most people, I'd bet that they are usually quick to consider turning around. In essence, as soon as practical, try to get your car turned around and headed in the proper direction. You might do so by coming to a stop first, and then progressively make a U-turn by going back-and-forth, assuming that the space for turning around is tight. If there is abundant room to turn around, the matter becomes quite simple and involves making a U-shaped arc in as swift a movement as you can.

It always seems that just as you start to turn the car around, another car will come toward you. They then wait for you to make your turnaround. You can usually feel the eyes of the other driver boring into you as you "waste" their time while turning around. The other driver probably thinks that you are quite a clod to have gotten yourself into such a predicament. I even had one driver honk their horn at me during one such turnaround. I failed to understand the value of honking, since I obviously already knew that I was going in the wrong direction and was trying to rectify the circumstance. Maybe the driver was honking in appreciation for my valiant efforts of turning around (I realize that is the glass-is-half-full perspective of the universe).

Fortunately, I’ve not personally gone the wrong way on a freeway, nor on a highway or a regular street.

Deadly Serious Cases Of Wrong Way Driving

I’ve certainly known of such wrong way instances that were committed by others.

Just about a month or so ago, a wrong way driver at 2:00 a.m. got onto two of the major freeways here in Southern California, the I-5 and the I-110, and proceeded to drive at speeds of 60 to 70 miles per hour.

The crazy driver sideswiped some other vehicles during the ordeal. The police were brave and actually chased after the driver.

It is one thing to be a police officer chasing a speeding car that is going in the proper direction, which already involves a lot of danger, but imagine the heightened risks of chasing after a driver that is going the wrong way at high speeds. The late hour was fortunate since there wasn't much traffic on the freeways, and the driver ultimately was caught (they were DUI, plus driving a stolen car).

I’ve personally confronted situations involving a wrong way driver coming at me.

One of the scariest and most vivid such memories involved a vacation trip to Hawaii with my family. We had rented a car on Maui and were driving around to see the beauty of the island. Along one of the major highways, Haleakala Highway, a grass median separated the westward side from the eastward side of the road. The median was banked and the other side of the road was several feet higher, seemingly providing protective coverage against anyone veering into the other side. There wasn't any fence or structural barrier dividing the two directions.

The kids were having a great time in the back of the car and relished our being in Hawaii. As I attentively watched the road up ahead, all of a sudden I saw a car from the upper banked roadway erratically veer across the grassy median and enter my stretch of road, coming straight at me, barreling along at around 50 to 60 miles per hour. Since I was going the same rate of speed, we were quite rapidly approaching each other, completely head-to-head.

This is one game of chicken you never want to be involved in.

It was one of those moments in life where time seems to nearly stand still. It was happening so fast that I wasn't even mentally able to digest it fully. My instinct was to try to avoid the car by veering onto the grassy median myself, figuring maybe that was the safest place to be. I could have veered to my right into the slow lane of the highway, but I thought I'd still be a target of the wayward driver. I guessed that the nutty driver might opt to switch into the other lane, perhaps desperately trying to avoid a head-on collision, and so the grassy median might be clear. I doubted that we would both meet in the grassy median and was guessing that the other driver would stay on the highway, even if going in the wrong direction.

Just as I was about to make a "panic" swerve up onto the grassy median, in a split second of amazement, I observed the other driver doing the same. I therefore decided to stay in my lane and veer toward the slow lane, aiming to provide as much space as possible between me and the other driver. Sure enough, we zipped past each other with just a few feet to spare. He was on the grassy median and then proceeded further upward and returned to his proper lanes.

The whole matter transpired in a few seconds and I almost doubted my own sanity that it even happened at all. There wasn’t any other traffic nearby and so there weren’t any other third-party witnesses. The other driver had utterly threatened my life and the lives of my family. Meanwhile, the kids in the backseat were oblivious to the ordeal and had kept laughing and singing throughout those highly tense brow-sweating moments.

I’ll never know what was in the mind of the other driver. Why did they come down onto my stretch of the highway? What made them opt to go back onto the grassy median, rather than somehow trying to stay going on the highway in the wrong direction? Did this all happen by “accident” in that the driver somehow just messed-up, or was this some kind of intentional act for “fun” or “sport” that the driver had in mind? It only took a few seconds for the entire sequence to reveal itself, and yet to this day I remember it as though it took hours to occur and forever will be one of the scariest driving moments of my life.

Wrong Way On A Runway

There was one other notable wrong way "incident" that I was directly involved in, though it turns out that I was not in imminent peril; my luck held true and nothing untoward happened. This one is rather incredible and certainly beyond the norm.

Years ago, I was doing research on the cognitive capabilities of air traffic controllers as part of a research grant focusing on the Human Computer Interface (HCI), also sometimes referred to as Human Machine Interaction (HMI). The questions being explored involved how the air traffic controllers made use of their radar systems for tracking air traffic. How much did the air traffic controller need to keep in their mind? To what degree did the radar scopes aid or hinder their ability to route air traffic? What kinds of improvements could be made in the radar systems and the interface so that it would enhance the abilities of the air traffic controllers?

At first, I had air traffic controllers come to our research lab at the university and take various cognitive tests. It was impressive how much of a 3D mental model they could create in their minds, unaided by any system at all. I would tell an air traffic controller that a plane was entering their air space at such-and-such speed, going in such-and-such direction at such-and-such height. I would continue to add more such flights into the airspace, all imaginary, and wanted to see how many they could mentally handle. The twist was that the air traffic controller had to imagine where the planes were as time ticked along. Say it is now five seconds since those planes each entered the air space; I'd ask them where each plane was and whether there was any danger of planes colliding.

Eventually, I realized that it would be advantageous to go observe the air traffic controllers in action. I got permission to go watch the air traffic controllers at LAX (Los Angeles International Airport), considered one of the busiest airports in the United States. These air traffic controllers were considered the top echelon of air traffic controllers, often having worked their way up from other much smaller airports that had much less air traffic and complexity.

I wanted to contrast the top air traffic controllers with those that were still working their way up the controller ladder. So, I got permission to visit a relatively small airport and observe the air traffic controllers there. A fellow researcher and I drove out to the airport together. It was a very foggy night and when we arrived at the airport the fog cloaked most of the airport. We arrived at the airport gate and the security guard told us we could drive directly out to the airport tower. He cautioned us to make sure that we obey all traffic signs and drive at a slow speed. This seemed prudent to us and we agreed to do so. My fellow researcher was driving the car at the time.

Well, before I say what happened next, allow me to offer my personal “excuse” about what was taking place so that you won’t judge me too harshly. It was so foggy that you could hardly see your hand in front of your face. We drove along at some snail-like speed of maybe 3-5 miles per hour and kept our eyes peeled. We had rolled down the windows of the car in hopes of being better able to see through the fog. The headlights were bouncing their light off the fog particles and we really could not see much of what was ahead of us.

While crawling along, we began to see a colored light embedded in the roadway just a few feet up ahead of us. We could also see some painted lines on the roadway.

Turns out, we were driving on a runway!

That's a rather stunning wrong way story, I believe. How many people do you know who have driven their cars onto a runway? When we realized that we were on a runway, you can imagine that the blood drained from our faces and we both looked at each other in shock. The fog was so thick that we hadn't realized we had meandered onto a runway, and we also had no idea which direction would get us cleanly off the runway. It turns out too that it was considered an "active" runway that planes could take off from or land upon. Fortunately, the thick fog had temporarily closed off any flights from landing or taking off.

Of course, I'm alive today to tell the story, and we were able to eventually find the road that led to the airport tower. For a few moments though, we had an encounter of a frightful nature and agreed not to tell anyone about it at the time. Our personal code of a "statute of limitations" on speaking of the matter has run its course, and so I am able to tell the story now. I chalk the whole experience up to the brazenness of youth.

Our Collective Fascination With Wrong Way Driving

One last quick aspect about driving the wrong way. As a society, we seem to have a fascination with wrong way driving. There are numerous movies and TV shows that depict driving the wrong way. It seems that nearly any cop related movie or spy related movie that is a blockbuster has to have its own car chase that involves going the wrong way. One of my favorite such scenes occurred in the movie Ronin, encompassing an elaborately staged and lengthy sequence of going the wrong way on freeways and in tunnels, etc.

In terms of why people drive the wrong way, here are some reasons:

  • Drunk driving
  • Confused driver
  • Inattentive driver
  • Shortcut driver
  • Thrill-seeker driver
  • Etc.

There has been extensive research about how to design off-ramps and on-ramps to try and prevent confused or inattentive drivers from going the wrong way. It can be relatively easy to get confused when driving in an area that you are unfamiliar with and inadvertently go up an off-ramp. Going down an on-ramp is usually a less likely circumstance since the car driver would need to make some sizable contortions to get their car positioned to do so.

Going the wrong way on a one-way street would be another common means of wrong way driving. I knew one fellow student in college who used to take a one-way street the wrong way in order to get to campus faster.

He loudly complained that the right way was more convoluted and added at least ten to fifteen minutes to his driving commute. According to him, the one-way was rarely used by other drivers, and so he felt comfortable going the wrong way on it. In this case, he was convinced that there was nothing wrong with his shortcut and the "problem" was that the city had improperly designated the street as a one-way in the wrong direction. As far as I know, he lucked out and never got into a car accident on that one-way. He was proud of the fact that he drove that wrong way for several years and never once got a ticket (well, he never got caught).

The point being that there are some cases whereby a driver goes the wrong way intentionally. My fellow student did so as a shortcut, though I always suspected that maybe he was a bit of a thrill-seeker and got a kick out of going the wrong way. His efforts were completely illegal. He endangered not only himself, but anyone else in his car during his trickery, and could have endangered any cars that were driving the right way on that one-way street.

When I was with my family in Hawaii, we had another “wrong way” circumstance arise, though it was thankfully much less eventful than my head-to-head situation.

We were heading up to a remote waterfall and we had to take a winding road that made its way through a thick jungle. I had rented a jeep, just in case the road became difficult to drive on. There was one road that was a one-way up to the waterfall, and a second road that was a one-way down from the waterfall (each being one lane only).

The rental agent handed me the keys to the jeep and then offered a word of advice. She told me that portions of the winding road had been washed out by recent storms. As such, there would be areas where I would have to drive on the other road, the one that went in the opposite direction. I was a bit dismayed at this bit of news. I clarified that she was telling me to drive illegally by going the wrong way. She shrugged it off and said that everyone knew about it and it was usable and practical advice.

Statistics About Wrong Way Driving Related Deaths

There is a mixture of circumstances: some drivers go the wrong way by mistake, while in other situations a driver intentionally goes the wrong way. Those intentionally going the wrong way might do so under-the-table and without any authority, while in other instances it is conceivable that a driver might be purposely instructed to go the wrong way.

According to statistics by the NHTSA (National Highway Traffic Safety Administration), there are about 350 or so deaths per year in the United States due to wrong way driving. Any such number of deaths is regrettable, though admittedly it is a relatively smaller number than from other kinds of driving mistakes (there are about 35,000 car related deaths per year in the U.S.). There don't seem to be any reliable numbers about how many wrong way instances occur per year, since such instances usually go unreported unless a death is involved.

The total number of miles driven in the United States is estimated at around 3.2 trillion miles per year. One would guess that driving the wrong way happens daily and amounts to perhaps some notable percentage of that enormous number of driving miles.
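To put those figures in rough perspective, here is a quick back-of-the-envelope calculation (a sketch only, using the approximate statistics cited above):

```python
# Back-of-the-envelope use of the approximate statistics cited above.
wrong_way_deaths = 350          # approx. annual U.S. wrong-way driving deaths
total_car_deaths = 35_000       # approx. annual U.S. car-related deaths
annual_miles_driven = 3.2e12    # approx. annual U.S. vehicle miles traveled

share_of_deaths = wrong_way_deaths / total_car_deaths                      # about 1%
deaths_per_billion_miles = wrong_way_deaths / (annual_miles_driven / 1e9)

print(f"{share_of_deaths:.0%} of car-related deaths; "
      f"{deaths_per_billion_miles:.2f} wrong-way deaths per billion miles")
# 1% of car-related deaths; 0.11 wrong-way deaths per billion miles
```

In other words, wrong way driving accounts for roughly one percent of annual car-related deaths, a small slice of an enormous base of driving miles.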

Fortunately, it would seem that the number of actual accidents due to wrong way driving is quite small, but this is likely due to the wrong way driver quickly getting themselves out of their predicament and also the reaction of right-way drivers to help avoid a collision. In essence, it might not be happenstance that wrong way driving doesn't produce more problems. It seems more likely due to the human behavior of trying to avert problems when a wrong way instance occurs.

AI Autonomous Cars And The Matter Of Wrong Way Driving

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving driverless autonomous cars. There are two key aspects to be considered related to the wrong way driving matter, namely how to avoid having the AI self-driving car go the wrong way, and secondly what to do if the AI self-driving car encounters a wrong way driver. Plus, a bonus topic, namely what about having an autonomous car go the wrong way, on purpose, if needed (which, for some, seems outright wrong, since they subscribe to a belief that driverless cars should never "break the law").

Allow me to elaborate.

I’d first like to clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let's focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
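As a rough illustration of how these steps chain together, the cycle above might be sketched as follows. This is a minimal sketch only; every name in it (WorldModel, interpret, sensor_fusion, plan_actions, driving_cycle) is a hypothetical stand-in of my own, not any auto maker's actual architecture:

```python
# Highly simplified, hypothetical sketch of the AI driving-task cycle.
# All names are illustrative stand-ins, not a real system's API.

class WorldModel:
    """Holds the AI's current virtual picture of the surroundings."""
    def __init__(self):
        self.objects = []
    def update(self, fused_detections):
        self.objects = fused_detections

def interpret(raw_readings):
    # Turn raw (source, value) sensor readings into labeled detections.
    return [{"source": src, "detection": val} for src, val in raw_readings]

def sensor_fusion(detections):
    # Reconcile overlapping detections; here we simply deduplicate by value.
    seen, fused = set(), []
    for d in detections:
        if d["detection"] not in seen:
            seen.add(d["detection"])
            fused.append(d)
    return fused

def plan_actions(world_model):
    # Decide what to do next based on the virtual world model.
    return "slow_down" if world_model.objects else "proceed"

def driving_cycle(raw_readings, world_model):
    detections = interpret(raw_readings)   # sensor data collection & interpretation
    fused = sensor_fusion(detections)      # sensor fusion
    world_model.update(fused)              # virtual world model updating
    return plan_actions(world_model)       # AI action planning -> command issuance

model = WorldModel()
command = driving_cycle([("camera", "pedestrian"), ("lidar", "pedestrian")], model)
print(command)  # slow_down
```

The key point the sketch conveys is the ordering: interpretation feeds fusion, fusion feeds the world model, and only then does action planning issue a command to the car controls.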

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Use Case: Autonomous Car Goes The Wrong Way Unintentionally

Returning to the topic of driving the wrong way, let’s first consider the possibility of an AI self-driving car that happens to go the wrong way.

Some pundits insist that there will never be the case of an AI self-driving car that goes the wrong way. These pundits seem to think that an AI self-driving car is some kind of perfection machine that will never make any mistakes. I suppose in some kind of utopian world this might be the case, or perhaps for a TV or movie plot it might be the case, but in the real world there are going to be mistakes made by AI self-driving cars.

You might be shocked to think that an AI self-driving car could somehow go the wrong way. How could this happen, you might be asking. It seems incredible perhaps to imagine that it could happen.

The reality is that it could readily happen.

Suppose an AI self-driving car, dutifully using its sensors and scanning for street signs, fails to detect a street sign indicating that the path ahead is a wrong way direction. There are lots of reasons this could occur. Maybe the street sign is not there at all because it has fallen down, or vandals took it down a while ago. Or it might be that the street sign is obscured by a tree branch, or is so banged up and covered in graffiti that the AI system cannot recognize what the sign is. Maybe the sign can be only partially seen and does not present itself sufficiently to get a match by the Machine Learning (ML) that was trained to spot such signs. Perhaps it is heavily raining and the sign cannot be detected, or it is snowing and a layer of snow obscures the signs. And so on.

I assure you, there are lots of plausible and probable reasons that the AI might not detect a street sign that warns that the self-driving car is about to head the wrong way.

For my article about AI and street signs detection, see:

For my article about street scenes detection, see:

For my article about defensive driving for AI, see:

You might be thinking that it doesn't matter whether the AI is able to detect a sign, since it would certainly have a GPS and map of the area and would realize that the road ahead would involve going the wrong way.

Though it is certainly handy for the AI to have a map of an area and a GPS capability, you cannot assume that a map will always be available, and the GPS won't necessarily provide any warning of a wrong way up ahead. Currently, the focus for the auto makers and tech firms involves developing elaborate maps of localized areas and then having their trials of the AI self-driving cars take place in a geofenced area.

Once we have widespread AI self-driving cars, I don't think we should be basing their emergence on having mapped every square inch of the world in which they are driving. There are many that are trying to do so, but I am saying that a true Level 5 self-driving car should not be dependent upon having a prior map of wherever it is going. I assert that humans drive in places where they have no map at all beforehand, and yet they are still able to sufficiently drive a car. That's the target of a Level 5, in my opinion, namely being able to drive a car in the manner that a human can drive a car.

For my article about robotic navigation without maps, see:

For more about the cartographic efforts taking place, see my article:

For the importance of LIDAR and maps, see my article:

In short, I am claiming that there are going to be circumstances in which an AI self-driving car is going to end up going the wrong way. This would happen due to the AI not being able to discern the roadway situation and not having a prior map that would otherwise forewarn that a wrong way is up ahead.

You might still fight me about this notion, but I’ll add another twist to see if I can convince you of the possibility of an AI self-driving car getting caught up going the wrong way. Remember earlier that I mentioned I have gone the wrong way in various parking structures and parking lots? I’d be willing to bet that the same kind of wrong way heading could happen to an AI self-driving car.

I doubt that parking structures and parking lots will be mapped to the degree that our freeways, streets, and highways are. As such, the AI self-driving car, when encountering a parking lot or parking structure, might well end up failing to spot signs about the proper direction and could get itself mired in going the wrong way.

A techie might respond by saying that the parking structure or parking lot could opt to have some form of electronic communication that would provide directions to the AI self-driving car. I agree that we might well see such electronics being added into all kinds of structures or buildings into which an AI self-driving car might drive. But I wouldn't bet on it always being available; furthermore, even if it happens, the odds are that it will take place slowly over time, and meanwhile there will be structures that do not have such a communications setup.

I’ll offer one other comment about this notion of going the wrong way. Are you willing to bet that there will never be a situation involving an AI self-driving car that finds itself going the wrong way? I ask because if the AI self-driving car is not ready to tackle such a predicament, and it is because you are so sure that it will never happen, well, I’d not like to be in or near that AI self-driving car that has gotten itself into such a fix and then is unaware of it or does not know what to do about it.

The auto makers and tech firms are so busy trying to get AI self-driving cars to simply drive the right way on roads that they generally have considered this aspect of dealing with driving the wrong way to be an edge problem. An edge problem is one that is not considered at the core of what you are trying to solve. We're not so convinced that this should be considered an edge case per se, and it might well happen more often than you would think.

For how some AI developers put their heads-in-the-sand on such matters, see my article:

For the nature of edge or corner cases in AI self-driving cars, see my article:

AI Coping With Going The Wrong Way

A proficient AI self-driving car needs to be able to detect that it has gotten itself into a wrong way driving situation.

The detection can potentially occur via the sensory input and interpretations of the AI. Once you are immersed in a wrong way driving situation, there are often telltale clues that something is amiss. As mentioned earlier in my story, the wrong way in a parking lot can potentially be detected by realizing that the parked cars are oriented away from your direction of travel. Another clue might be roadway signs for which the AI self-driving car sees only the backside, or signs with arrows that point in a direction other than that of the AI self-driving car.

The AI might also detect cars that are coming straight toward the self-driving car, similar to my example earlier of the game of chicken that I had with a wrong way driver. There might be other surroundings related aspects such as the movement of pedestrians, the motion of bicyclists, and other environmental aspects that can be used to detect a wrong way situation.

Sensor fusion is crucial at this juncture since it is often difficult to ascertain via one indicator alone that the AI self-driving car is going the wrong way. It might be a multitude of indications coming from a multitude of sensors, all of which need to be combined and considered during the sensor fusion portion of the AI driving task.
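To make the fusion idea concrete, here is a minimal sketch of combining several weak wrong-way indicators into a single confidence score. The indicator names, the weights, and the 0.6 alert threshold are illustrative assumptions on my part, not values from any production system:

```python
# Hypothetical sketch: fusing several weak wrong-way indicators into one
# confidence score. Indicator names and weights are illustrative only.

INDICATOR_WEIGHTS = {
    "parked_cars_face_us": 0.35,   # parked cars oriented against our travel direction
    "sign_backs_visible": 0.30,    # only the backsides of roadway signs are visible
    "arrows_oppose_us": 0.20,      # posted or painted arrows point against us
    "oncoming_in_our_lane": 0.50,  # traffic heading straight at the vehicle
}

def wrong_way_confidence(active_indicators):
    """Sum the weights of the currently detected indicators, capped at 1.0."""
    score = sum(INDICATOR_WEIGHTS.get(name, 0.0) for name in active_indicators)
    return min(score, 1.0)

# One indicator alone stays below an assumed alert threshold of 0.6 ...
print(round(wrong_way_confidence(["sign_backs_visible"]), 2))  # 0.3
# ... but several corroborating indicators push the fused score past it.
print(round(wrong_way_confidence(
    ["parked_cars_face_us", "sign_backs_visible", "arrows_oppose_us"]), 2))  # 0.85
```

The design point is that no single clue is decisive; it is the corroboration across sensors that justifies concluding the car is going the wrong way.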

It could be too that the AI self-driving car might get alerted via electronic communications. There could be other AI self-driving cars nearby, and those self-driving cars might have detected that your AI self-driving car is going the wrong way. They could potentially communicate via V2V (vehicle-to-vehicle communication) and let the AI of your self-driving car know that it is heading in the wrong direction. There might also be V2I (vehicle-to-infrastructure) communications that could alert the AI of the self-driving car.
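As a sketch of how such an alert might be handled, consider the following. The message fields, the 150-degree opposition test, and the function names are hypothetical; real V2V stacks define their own message formats, and the AI would treat the alert as one more indicator to corroborate rather than blindly trust:

```python
# Hypothetical sketch of handling a V2V "wrong way" alert. The message
# fields and thresholds are illustrative assumptions, not a real V2V format.

def handle_v2v_alert(alert, own_heading_deg, lane_heading_deg):
    """Accept the alert only when our own heading corroborates it."""
    if alert.get("type") != "wrong_way_warning":
        return False
    # Smallest angle between our heading and the lane's legal heading.
    diff = abs((own_heading_deg - lane_heading_deg + 180) % 360 - 180)
    return diff > 150  # roughly opposing the lane direction

alert = {"type": "wrong_way_warning", "sender": "vehicle_42"}
print(handle_v2v_alert(alert, own_heading_deg=90, lane_heading_deg=270))   # True
print(handle_v2v_alert(alert, own_heading_deg=270, lane_heading_deg=270))  # False
```

Note the corroboration step: the alert alone does not trigger action unless the vehicle's own heading actually opposes the lane's legal direction.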

If the AI somehow becomes aware of the matter, it then needs to update the virtual world model and prepare an action plan of what to do. This is where the AI might then opt to do the same actions that a human driver might do in such a situation, including coming to a stop, or perhaps moving ahead slowly, or maybe trying to execute a U-turn, etc. The AI needs to determine what is the prudent and safe approach to get itself out of the predicament.
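The action planning step might be sketched along these lines. The maneuver names, distance thresholds, and decision rules are illustrative assumptions only, mirroring the human options discussed earlier (coming to a stop, a swift U-turn arc, a back-and-forth turn, or backing up):

```python
# Hypothetical sketch of picking a recovery maneuver once the AI realizes
# it is going the wrong way. All names and thresholds are illustrative.

def choose_recovery_maneuver(oncoming_traffic, room_to_turn_m, vehicle_length_m=5.0):
    """Pick a prudent action, mirroring what a careful human driver might do."""
    if oncoming_traffic:
        return "pull_over_and_stop"   # a stopped car is less chancy than a head-on
    if room_to_turn_m >= 2 * vehicle_length_m:
        return "u_turn"               # abundant room: turn around in one swift arc
    if room_to_turn_m >= vehicle_length_m:
        return "multi_point_turn"     # tight space: progressive back-and-forth turn
    return "reverse_slowly"           # no room to turn at all: back up carefully

print(choose_recovery_maneuver(oncoming_traffic=False, room_to_turn_m=12.0))  # u_turn
```

A real planner would weigh far more factors (traffic density, visibility, road geometry), but the sketch captures the prioritization: avoid the head-on risk first, then get turned around as safely as the space allows.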

One other consideration in this matter involves the role of a human occupant that might be inside the AI self-driving car. So far, we’ve assumed that the AI is doing the driving task and doing so without any interaction with humans. I’ve predicted that the AI of self-driving cars will be interacting extensively with human occupants, doing so for conversational purposes and at times for aspects related to the driving.

I am not suggesting that the human occupants will be guiding the AI as to the driving task. That's not what should be happening in a true Level 5 self-driving car. It is up to the AI to drive the car. But this does not mean that the AI cannot interact with the human occupants and thus perhaps alter or shape the driving based on that interaction. If you were being driven by a human chauffeur, you would likely interact with the person, and yet the chauffeur is still the driver and has direct and sole access to the driving controls.

It could be that a human occupant might notice that the AI self-driving car has gone the wrong way. In which case, what should the human occupant do? Presumably, the human occupant could engage the AI in a dialogue and indicate that the AI has gone the wrong way. This would likely be an urgent discussion. The AI cannot, though, blindly assume that the human occupant is correct, in the same sense that a human chauffeur would not necessarily blindly believe or abide by whatever a passenger in the car might say.

For human and AI conversational aspects, see my article:

For my article about in-car voice commands, see:

For my article about explanations and AI self-driving cars, see:

Use Case: Autonomous Car Going The Wrong Way Purposely

I’d like to provide an additional variation on the rationale for the wrong way driving of an AI self-driving car. There might be situations wherein the AI self-driving car is purposely supposed to go the wrong way. Remember my personal example that I mentioned about driving a jeep in Hawaii to get up to a waterfall? In that case, I was told by an authority figure that there might be portions of the road that would involve my having to go the wrong way.

There have been other situations involving my being told to drive the wrong way. A car accident had blocked part of a major coast highway and the police were directing traffic to go the wrong way on a diversion street. They were forcing traffic to go up a one-way road in the wrong direction. I admit I hesitated a bit before abiding by the police officer’s instruction, but I figured the police knew what they were doing. In this case, the police had made sure that this was a safe path to take.

I mention this aspect because suppose that an AI developer has decided that an AI self-driving car should never go the wrong way. This might be done under the naïve belief that since it is dangerous and wrong for an AI self-driving car to go the wrong way, it should be restricted from ever doing so. Yet there are going to be circumstances that require an AI self-driving car to drive “illegally” by doing something such as going the wrong way. This is yet another reason why the AI needs to be prepared to do so.

For my article about AI self-driving cars having to do illegal driving acts, see:

Now that we’ve covered the aspects of a wrong way driving AI self-driving car, let’s shift our attention to the situation of an AI self-driving car that is confronted by a wrong way driver.

Use Case: Wrong Way Driving Cars

I think you can probably agree with me that an AI self-driving car is quite likely to ultimately encounter a wrong way driver. As I’ve mentioned, there is going to be a mix of human driven cars and AI self-driving cars for many years to come. The odds of a human driver going the wrong way towards an AI self-driving car seem reasonably high. It could happen because the human driver has made a mistake, or it could be that the human driver is drunk or otherwise driving under the influence, or maybe the human driver is trying to get away from the police, and so on.

What should the AI do?

It needs to first detect that the wrong way car is headed towards it. Once this detection has occurred, the virtual world model needs to get updated. The AI action planner then can consider various scenarios of how the wrong way situation might play out. This is similar to my harrowing story of being in Hawaii and facing a wrong way driver.

The AI system might decide that it is best to continue forward “as is” or it might decide to take an evasive action. This all depends upon the situation at hand. Is there other nearby traffic? How soon will a collision happen? Which approaches seem to offer the best chances for survival? Etc.
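A key input to that decision is how much time remains before a head-on impact. Here's a minimal sketch of a time-to-collision estimate for a wrong-way vehicle closing head-on; the function and its use as a trigger threshold are my own illustrative assumptions:

```python
# Sketch: time-to-collision with a wrong-way vehicle approaching head-on.
# In a head-on geometry the two speeds add to give the closing speed.
def time_to_collision(gap_m, own_speed_mps, oncoming_speed_mps):
    """Seconds until impact if neither vehicle changes course."""
    closing_speed = own_speed_mps + oncoming_speed_mps
    if closing_speed <= 0.0:
        return float("inf")  # not actually closing
    return gap_m / closing_speed

# Example: 100 m gap, both cars at 25 m/s (~56 mph) -> 2 seconds.
ttc = time_to_collision(100.0, 25.0, 25.0)
```

With only a couple of seconds of margin, the scenario evaluation has to be fast, which is one reason the AI action planner needs pre-analyzed playbooks for wrong-way encounters rather than reasoning from scratch.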

For my article about driving styles, see:

For the aspect that the other driver might be exhibiting road rage, see:

For the aspects of maneuverability of AI self-driving cars, see:

For the importance of probabilities in AI, see:

The AI might also be able to confer with other nearby AI self-driving cars. Again, via V2V, it could be that the AI of your self-driving car might either become aware of the wrong way driver by being warned by some other AI self-driving car, or it could be that several AI self-driving cars might band together, momentarily, in an effort to deal with the wrong way driver situation.

There are also the ethical aspects involved in the matter of the AI trying to determine what action to take.

As per the famous Trolley problem, an AI system in such a situation might need to make a “decision” that involves trying to minimize loss of life, and yet it is somewhat ambiguous as to how the AI is supposed to do so. Should the AI opt to swerve into the next lane to avoid the head-on collision with the wrong way driver? By swerving into the next lane, the AI self-driving car might collide with another car that is heading in the correct direction. The innocents in that car might get killed, due both to the wrong way driver and to the choices made by the AI in contending with the wrong way driver.

It might be that any action taken by the AI, even taking no particular action, might end-up with an unavoidable crash. What is the basis for making such a decision?
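One commonly discussed framing is to pick the maneuver with the lowest expected harm. The sketch below shows the mechanics of that choice only; the probabilities and harm estimates are placeholder numbers, and producing them (and deciding whether expected harm is even the right criterion) is the hard, ethically fraught part:

```python
# Sketch of a minimize-expected-harm choice among candidate maneuvers.
# The outcome probabilities and harm scores are illustrative placeholders;
# estimating them, and justifying the criterion itself, is the real problem.
def least_harm_maneuver(options):
    """options maps a maneuver name to a list of (probability, harm)
    outcome pairs; returns the maneuver with lowest expected harm."""
    def expected_harm(outcomes):
        return sum(p * h for p, h in outcomes)
    return min(options, key=lambda name: expected_harm(options[name]))

candidates = {
    "continue": [(0.9, 10.0)],               # likely head-on collision
    "swerve":   [(0.3, 8.0), (0.7, 0.0)],    # risk hitting the next lane
    "brake":    [(0.6, 6.0), (0.4, 0.0)],    # reduce severity, still risky
}
choice = least_harm_maneuver(candidates)
```

Note that even this tidy-looking arithmetic embeds contested value judgments, which is exactly the Trolley-problem dilemma the paragraph above raises.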

For more about the Trolley problem and ethical dilemmas for self-driving cars, see my article:

For AI self-driving cars working together, see my article about swarms:

For my article about IoT and self-driving cars, see:

Some final thoughts about the wrong way driver situations.

Suppose we do eventually have only AI self-driving cars and there are no human driven cars. What then? Well, I still contend that there is a possibility of having an AI self-driving car that can end-up going the wrong way. Thus, the AI systems of self-driving cars should have a provision for dealing with such situations.

I would guess though that by the time we would have all and only AI self-driving cars on our roadways, the odds are that we’d have lots of IoT (Internet of Things) and quite sophisticated V2V, V2I, and even V2P (vehicle-to-pedestrian) electronic communications. As such, the odds of an AI self-driving car going the wrong way would be substantially reduced.

Furthermore, the AI self-driving cars could likely by then coordinate sufficiently with each other in a manner that a wrong way AI car poses no particular concern per se. In essence, the AI of the wrong way driving self-driving car would coordinate with the other AI self-driving cars and in a somewhat seamless fashion get itself out of the predicament. The other AI self-driving cars could actively help to rectify the matter, perhaps slowing down to allow time for the wrong way AI to get itself righted or taking other proactive actions to assist.


I’ll end with this somewhat mind-boggling thought. In the movies and on TV there are those car chases whereby the spy or crook goes the wrong way, and miraculously lives, doing so by magically avoiding the oncoming cars. This is something unlikely to be realistic in today’s world of human drivers. If you drove the wrong way on a busy freeway or highway, I’d dare say that someone is going to get hurt.

In a world of only AI self-driving cars, I suppose you could say that it would be feasible to go the wrong way. Assuming that you have generally perfect V2V and the AI systems of the self-driving cars are all working in concert with each other, you could in theory purposely go the wrong way, even on a busy road, and do so without incurring any collisions.

Some might even say that this might be a means to more efficiently use our roadways. You could allow AI self-driving cars to use the same roads for both directions and let them figure out how to make it happen. The Golden Gate Bridge adjusts some of the lanes during the peak traffic times to allow for traffic to go one direction and then later in the day shift to the other direction. Nonetheless, there is still only one direction allowed at a time. Imagine a situation whereby we allowed self-driving cars to go in any direction on our roads and go straight head-to-head with other self-driving cars.

Even if this seems theoretically possible, I’d suggest that if you were a passenger in such a self-driving car, you’d have a tough time watching this occur. Then again, will we be so accustomed to believing in the AI of the self-driving cars by then that we would also readily accept them playing this game of chicken?

Hard to imagine.

For the foreseeable future, wrong way will be wrong way, and wrong way won’t be right way.

That’s my prediction.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.