Pseudo-Paralysis of AI Autonomous Cars

Cars lined up to pick up students after school face a challenging situation once the bell goes off. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]

I was in the woods with my family one day and it was getting towards nightfall.

We were up on a mountain that was only reachable via an aerial tram.

We had been forewarned several times by the tram operator as we came up for our outdoor romp that we had to get back to the tramway by sunset or we would be stranded up there for the night. It was just a simple day hike for us, and so we had not packed any camping gear. I realize this sounds like the story of someone who later regrets not having brought their overnight gear, but in this case, we genuinely were determined not to be there at night, I swear it.

Anyway, we started to hike back toward the tramway with plenty of time to spare and could gauge how much sunlight we had left. The kids were enjoying the trip and there was a smattering of silvery snow on the ground. It was cold enough to have lightweight winter jackets on, but not so cold that you could see your breath. That being said, the temperature was dropping rapidly as darkness neared. We gradually picked up the pace and opted to move along more briskly, rather than stopping to look at every majestic tree and fallen pine cone.

Just as the tramway came into our sight, which was like reaching the pot of gold at the end of the rainbow, we also saw something else that was at the opposite extreme of delight, a wolf. Turns out that a full-grown wolf had edged out of the woods onto the path that led to the tramway.

He was facing us.

We were facing him.

I had no provision to be able to fight off the wolf, since I merely had the clothes on my back and nothing more. I was leading the way on this path and so saw the wolf before the rest of the family did.

I signaled to my family to come to a stop. They at first thought I was kidding around, but they could see the seriousness and sternness of my facial expression. I whispered at them to quit goofing around and just stand still. The kids were very young, and so they were both frightened and yet also “excited” that something unusual was happening (I suppose if I was really brave, I would have wrestled with the wolf, right there in front of the kids, what a mountain man I would have been!).

Anyway, I was trying not to take my eyes off the wolf. Some say that you should stare down a wild animal, others say don’t make direct eye contact. It’s also said that it can be contextually based, as to the nature of the animal and the circumstances involved.

I knew this much, I wanted to know where the wolf was.

Would it dart towards us? Would it meander? Would it quietly go back into the woods? Were there more wolves and this was just one of them? Was there an entire wolf pack surrounding us and this was the first one to show itself? If I yelled, would it scare off the wolf? If I yelled, would it instead cause the wolf to attack? Why would a wolf come this close to the tram station? Was it a domesticated kind of wolf that was used to being around people? Etc.

The rest of the family was watching me and watching the wolf. We were all standing still, including the wolf. It was some kind of momentary standoff. The kids were squirming but generally as still as young children can be. I was concerned that even trying to talk about the wolf and the situation might somehow spark the wolf into action. We all remained silent. In the woods. On a mountain. Nearing sunset. With no one else left around.

I didn’t see anyone yet at the tram station. One thought was that if we all just stayed frozen in position, maybe the tram was on its way up for the last haul of the day, and when it arrived the wolf would dart away. Even if the tram arrived and we were all still stationary, I figured we were close enough to the tram station that we might be able to get the attention of the tram operator. Hopefully, the tram operator was prepared for and used to having wolves in this area, and would know what to do.

You could say I was paralyzed.

Of course, I wasn’t paralyzed in the physical sense of having broken arms or legs that wouldn’t work.

I was fine physically.

We all were. I’d dare say the wolf looked to be in good shape too. We were paralyzed in the sense that none of us was moving, and none of us was yet willing to make a move. It was a situation in which we were frozen in place, without any yet-identifiable viable move to undertake.

Nor was I paralyzed in fear. Sometimes you lose your wits and become paralyzed. In this case, I had my wits, I had my fitness.

I’d like to think that the wolf was also looking us over and mulling over the same kinds of thoughts we were having.

Are those humans going to attack? Do they have food? Are they themselves food worthwhile to try and obtain? Will other humans come to their aid? Do they have a gun or other weapon? Are there more humans hidden in the woods? I realize that the wolf maybe wasn’t playing a game of chess with us, but in some manner, even if simplistic, it sure seemed like it too was trying to size up the situation and determine what to do next.

I refer to this as being “paralyzed.”

If you are uncomfortable that I use the word paralysis, which I realize many believe should only be used when you are truly physically debilitated, I can use instead the word pseudo-paralysis if that’s more palatable to you.

Suppose we do this, for the rest of this discussion, whenever you see me use the word paralysis, substitute instead the word pseudo-paralysis.

Hope that’s OK with you all.

In a moment, you’ll grasp why I’ve discussed the topic of paralysis and led you to a juncture of considering paralysis as a circumstance involving coming to a halt, being faced with seemingly difficult choices of what to do next, and remaining in a stopped position for some length of time.

I’ll quickly finish the story since I am assuming you are on the edge of your seat.

I didn’t want us to back up, since I thought it might cause the wolf to think we were weak, and by retreating maybe it would come after us.

I didn’t want to go forward because I thought it would be perceived as an attacking threat.

I didn’t want to go sideways which would have led us into the woods and I figured that being among the trees would be more to the advantage of a cunning wolf than us day trip humans.

Seemed like quite a stalemate.

Fortunately, the wolf apparently grew tired of the standoff, and it wandered back into the woods.

We moved quickly over to the tram station and with great relief got onto the tram once it arrived.

AI Autonomous Cars And Paralysis

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. This also includes considering scenarios in which the self-driving car might find itself becoming pseudo-paralyzed due to a predicament or particular situation.

First, let me clarify that I am not referring to a circumstance involving the self-driving car having a malfunction.

Similar to my story, I am referring to a situation for which the AI has to make a decision about which way to go, and there doesn’t seem to be a viable choice at hand. This is different than having a physical ailment of some kind. Just as I was physically able to move around while facing the wolf, I was “paralyzed” with respect to the situation and what action to take. For the moment, taking no action seemed prudent.

There will be circumstances in which an AI self-driving car freezes up due to some kind of potential malfunction, which I’ve covered extensively in my article about the Freezing Robot Problem:

Herein, let’s assume that the AI self-driving car is fully able and can go forwards, backwards, turn, and the like.

You might be wondering what kind of situation could arise, then, that would cause a functioning AI self-driving car to become pseudo-paralyzed.

Example Of Self-Driving Car Pseudo-Paralysis

One of the most famous examples involves the early days of AI self-driving cars and their actions when coming to a four-way stop.

An AI self-driving car arrived at a four-way stop sign just as other cars did. The other cars were driven by humans. Even though the proper approach would normally be that whichever car arrives first then goes forward first, the other human-driven cars weren’t necessarily abiding by this. It’s a dog-eat-dog world, and I’m sure you’ve had other drivers opt to force themselves forward and abridge your “right” to go ahead of them.

The AI self-driving car kept waiting for the other cars to come to a full and proper halt.

Those other cars kept doing the infamous rolling stop. Each time that the AI self-driving car perceived that maybe it could start to go, one of the other cars moved forward, which then caused the AI to bring the self-driving car to a halt. You might have seen a teenage novice driver get themselves into a similar bind. They sit at the stop sign, politely waiting for their turn, which never seems to arrive.

You could say that this is a form of paralysis.

Admittedly, the AI self-driving car was fully able to drive forward. It could even go in reverse. It was a fully functioning car.

The predicament or circumstance was that it was trying to abide by the laws of driving, and it was trying to avoid a potential accident with any other car.

Under that set of circumstances, it became pseudo-paralyzed.

Perhaps you can see now how my story about being in the woods and spotting the wolf relates to this – I was fully able to move, but the situation seemed to preclude doing so.

School Driving And Paralysis

There’s another example of an AI self-driving car paralysis that was recently reported about the real-world trials being undertaken by Waymo in Phoenix, Arizona.

Reportedly, one of their AI self-driving cars drove to the school of a family that was participating in the trial runs and waited for the school children to be released from the school.

You’ve maybe done this or seen this before, wherein cars sit waiting for the bell to ring and the school children to come flying out of the classrooms, and the kids pile into the waiting cars.

If you’ve not had an opportunity to be a human driver in the school setting of this kind, I assure you that it can be one of the most memorable times of your driving career (well, maybe not fondly memorable!).

I used to endure the same situation when I was picking up my children from school.

When the cars first arrive at the school, prior to the bell ringing, it is relatively quiet and everyone jockeys to find a place to temporarily park. Some leave their motor running; some turn off the car. Some read a book while waiting; some watch the school intently. Some actually get out of their cars, as though it is a taxi line at the airport, and converse with fellow parents waiting likewise to pick up their children.

That first part of the effort is relatively easy.

The main aspect is that you need to be careful about where you park, and that you don’t cut off someone else or disturb what has become a kind of daily ritual, with everyone seemingly knowing the “rules” about where to park and wait. It can be an unavoidable death sentence to anyone that decides to squeeze their car in front of everyone else that has already been waiting for the last twenty minutes or so. I’m sure the person would be dragged out of their car and beaten senseless.

Well, the real excitement happens when the kids burst out of the classrooms. Everyone starts their car engine as though it is the start of an Indy car race. The kids weave in and out of the parked cars to get to their parent’s car. Some kids take their time and end up blocking other cars. Some kids run to their designated car but meanwhile get confused and maybe bounce off someone else’s car. The parents try to maneuver their cars closer toward their children. It becomes a free-for-all. Measured chaos, or worse.

Well, apparently, an AI self-driving car from Waymo found itself in such a situation.

The AI self-driving car reportedly became pseudo-paralyzed.

Whichever way it might go, there were nearby objects. Other cars were blocking it. Children were blocking it. Probably other parents were walking around trying to help the children, and they were blocking it too. No means to move. Notice that the AI self-driving car was fully functioning, and it could have driven in any desired direction, but the situation precluded doing so.

If the AI self-driving car had tried to move forward, it might have hit someone or something.

If it tried to back-up, it might have hit someone or something.

If it turned to the left or turned to the right, it might have hit someone or something. All told, it was a kind of stalemate.

Just like with the wolf, it became a wait and see what will happen in the environment that might allow for breaking out of the stalemate.

You might be saying that the AI was just trying to be cautious.

It could have run over the children or parents; it could have rammed into the other cars. Let’s concede that indeed it could have moved if it intended to do so.

Fortunately, the AI was apparently well-programmed enough that it realized those were not seemingly viable options in this case. The need to avoid hitting these surrounding objects had kept the self-driving car from moving.

One current criticism of AI self-driving cars is that they are perhaps overly cautious.

They are actually skittish, which can be a limiting factor when driving a car.

If you’ve seen a teenage novice driver trying to drive in a busy mall parking lot, you might know what I mean by skittish. The novice won’t drive down a parking lane because there are people walking to and fro. There are cars backing up. There are cars waiting in the parking lane and it becomes dicey to squeeze around them.

Skittish Autonomous Cars

Do we want our AI self-driving cars to be skittish?

This can be a “safe” way to drive, one might argue, but it also means that there will be lots of real-world driving situations that will inhibit the self-driving car and it will become possibly paralyzed. Imagine the frustration of other human drivers at the skittishly driven car – they honk their horn, and can be blocked by the paralyzed car and unable themselves to move along. Pedestrians can be confused too. Is that self-driving car going to move or not move?

There are some that even have been playing tricks on AI self-driving cars.

You can get some of the AI self-driving cars to come to a halt simply by standing at the curb and waving your arms frantically as the car gets close to you while it is driving down the street. The AI self-driving car will likely slow down, and in some cases even come to a halt. This is partially because the AI developers have opted to establish a kind of protective virtual bubble around the self-driving car. If there is anything that nears the bubble or comes into the bubble, there’s a chance that the self-driving car will hit it, so the safest bet by the AI programmers seems to be to have the self-driving car slow down or come to a stop.

This is considered an essential deployment of the “first, do no harm” principle of the AI being developed by most of the automakers and tech firms.

Driving the car is essential, but harming people or destroying things is a big no-no. Thus, make the protective virtual bubble as large and encompassing as you can. Don’t scrimp on the magnitude of the bubble. Make the bubble big so as to reduce the risk of causing injury or death as much as you can.
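As a minimal sketch of how such a protective virtual bubble might be checked in code, consider the following. This is purely illustrative, not any automaker’s actual implementation; the function names, radius, and speed values are my own assumptions:

```python
import math

def bubble_intruders(ego_pos, obstacles, bubble_radius_m=2.0):
    """Return the obstacles that have intruded into the virtual
    protective bubble around the vehicle (positions in meters)."""
    intruders = []
    for obs in obstacles:
        dist = math.hypot(obs[0] - ego_pos[0], obs[1] - ego_pos[1])
        if dist <= bubble_radius_m:
            intruders.append(obs)
    return intruders

def speed_command(ego_pos, obstacles, cruise_mps=13.0, bubble_radius_m=2.0):
    """Conservative 'first, do no harm' policy: slow down as anything
    enters the bubble, and stop fully once it gets too close."""
    intruders = bubble_intruders(ego_pos, obstacles, bubble_radius_m)
    if not intruders:
        return cruise_mps  # clear bubble: keep cruising
    nearest = min(math.hypot(o[0] - ego_pos[0], o[1] - ego_pos[1])
                  for o in intruders)
    if nearest <= bubble_radius_m / 2:
        return 0.0  # intruder deep inside the bubble: full stop
    # otherwise scale speed down with proximity
    return cruise_mps * (nearest / bubble_radius_m)
```

Notice how a large `bubble_radius_m` makes the car “safer” but also makes it far easier for a pedestrian at the curb to bring it to a halt, which is exactly the skittishness trade-off discussed here.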

Humans don’t typically drive this way.

Humans seem to have refined their driving practices to take things to a much closer margin. I realize you might say that’s why there are car accidents and people get run over by cars. True. But, on balance, there seems to be a “societal dance” that by-and-large has been established of driving our cars within an inch of others, and meanwhile most of the time there aren’t injuries and deaths.

I recently went to a baseball game and parked in a very busy parking lot. The entire time in the parking lot, while driving around to find a parking spot, people were not only super close to my car, many people at times touched my car (transgressions!). When I finally found an open spot, I pulled into it, and was within a scant inch or so of the cars on either side.

Most of the AI self-driving cars would become “paralyzed” with that kind of closeness.

There’s going to be a delicate ratcheting up of the risk aspects to allow for closer movement. Human occupants in an AI self-driving car aren’t going to be satisfied that their AI self-driving car has come to a halt and is going to wait, say, thirty minutes for everyone else in a parking lot to get into or out of their cars and clear out the lot before the AI will instruct the self-driving car to move again. We’re going to expect that the AI can drive like a human can, which means being able to navigate these kinds of situations.

See my article about the foibles of human drivers and the AI self-driving car practices:

See my article about defensive driving for AI self-driving car practices:

Scenario Analysis

In the case of the AI self-driving car among the school children, what should the AI have done?

Let’s first consider the four-way stop sign scenario.

In that situation, the AI self-driving car likely should have played chicken with the other human-driven cars and opted to move forward, showcasing that it wanted to move along. The other human-driven cars would inevitably have backed down and allowed the AI self-driving car to go ahead. It was because of the omission of any clear-cut indication that the AI self-driving car was going to “aggressively” make its move that the other human-driven cars figured they would just outdo or outrun it.

Some would say that if there’s a politeness meter related to the AI, it’s time to move the needle towards the impolite side of things. Human drivers can be quite impolite. They get used to other drivers being the same way. Therefore, if they see a polite driver, they figure the driver is a sheep. It is worthwhile to be the fox and treat the sheep like sheep, so the impolite driver figures. Right now, AI self-driving cars are perceived as the meek sheep. Easy to exploit.
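One simple way to picture such a “politeness meter” is as a bounded yield counter: the car politely yields a limited number of times, then commits to edging forward. This is a toy sketch under my own assumptions, not how any deployed system actually decides:

```python
def stop_sign_action(others_rolling, yields_so_far, max_polite_yields=2):
    """Break the four-way-stop standoff: yield a bounded number of
    times to rolling-stop drivers, then commit to edging forward
    to clearly signal intent."""
    if others_rolling and yields_so_far < max_polite_yields:
        return "YIELD"
    return "EDGE_FORWARD"
```

With `max_polite_yields=0` the car behaves like an impolite human driver; with a very large value it reproduces the endlessly waiting, pseudo-paralyzed behavior described above.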

Does this imply that the AI self-driving car should run amok?

Should it barrel down a street?

Should it try to take possession of the roadway and make it clear that it is the king of the traffic?

No, I don’t think anyone is suggesting this, at least not now.

Also, let’s be frank, it’s harder to go the impolite route when right now all eyes are on AI self-driving cars and how they are driving.

The moment an AI self-driving car bumps or harms a human, or scrapes against another car, this is going to be magnified a thousand fold as a reason why AI self-driving cars are not to be trusted.

Suppose a human driver was on probation for having driven badly, and they were then on notice that any tiny misdeed would get their license revoked. Many AI developers are worried that the same thing is going to happen with the initial emergence of AI self-driving cars.

Let’s revisit the school children and picking up the kids at school. What do the parents do when they want to drive out of the morass of cars and kids? They usually edge forward, which is a signal to the other cars and the kids to get out of the way. This generally seems to work. It’s almost like being amongst a herd and you kind of make your own pathway while in the middle of the herd.

Rather than being paralyzed, these human drivers “push” their way out of the situation. Sure, some of them are momentarily “paralyzed” but they are overtly making their way through the crowded scene. This is somewhat akin to the practice suggested to alleviate the four-way stop paralysis too.
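The edging-forward “push” that those parents use could be sketched as a simple creep controller: inch ahead whenever even a small gap opens, rather than waiting for the whole scene to clear. The speeds and gap thresholds below are assumed values for illustration only:

```python
def creep_step(clearance_ahead_m, creep_speed_mps=0.5, min_gap_m=0.75):
    """Edge forward slowly whenever a minimal gap opens ahead,
    signaling intent to the surrounding crowd; otherwise hold still."""
    if clearance_ahead_m >= min_gap_m:
        return creep_speed_mps  # inch forward to claim the gap
    return 0.0                  # blocked: wait for a gap

def escape_crowd(gap_readings):
    """Accumulate distance gained over repeated one-second creep
    steps, given a sequence of clearance readings."""
    return sum(creep_step(g) for g in gap_readings)
```

The key contrast with the big-bubble policy is that the creep controller keeps making incremental progress through the herd instead of declaring the whole situation impassable.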

Time Factor Is Crucial

This brings up the importance of the time factor when referring to this pseudo-paralysis.

How much time has to be spent sitting still to declare a paralysis?

This is a hard thing to quantify across all circumstances and situations. If I’m in my car waiting at a red light, I’ll need to do so for the time it takes the light to turn green. Are my car and I paralyzed? I don’t think so. I’d suggest we would all agree this is not quite the circumstance we’re referring to when we discuss the paralyzed self-driving car.
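One way to make the time factor concrete is a monitor that flags pseudo-paralysis only after the car has been stationary longer than a context-dependent threshold, so an ordinary red light would not trip it but minutes stuck in a parking lot would. The class name and threshold here are hypothetical:

```python
import time

class ParalysisMonitor:
    """Flag pseudo-paralysis when the vehicle has been stationary
    longer than a threshold; any movement resets the timer."""

    def __init__(self, threshold_s=45.0, clock=time.monotonic):
        self.threshold_s = threshold_s
        self.clock = clock          # injectable for testing
        self.stopped_since = None   # when the current stop began

    def update(self, speed_mps):
        """Call periodically with current speed; returns True once
        the stationary period exceeds the threshold."""
        now = self.clock()
        if speed_mps > 0.1:
            self.stopped_since = None  # moving: clear the stop timer
            return False
        if self.stopped_since is None:
            self.stopped_since = now   # stop just began
        return (now - self.stopped_since) >= self.threshold_s
```

In practice the threshold would itself need to vary by context (red light versus parking lot versus open highway), which is part of why this is hard to quantify across all circumstances.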

Returning to the school children situation, suppose the AI self-driving car opts to be more aggressive. In doing so, it might bump into a child, or bump into another car. Obviously, that’s not desirable. You could say that the human parents could have done the same thing: they could have bumped into a child or bumped into a car. Fortunately, most of the time, they don’t. This is the kind of delicate maneuvering that a true AI self-driving car should strive to achieve.

See my article debunking the zero fatalities indications:

See my framework about AI self-driving cars:

Let’s consider other scenarios that might lead to paralysis of an AI self-driving car, and consider what to do about it.

The AI self-driving car is driving along an open highway at 55 miles per hour. A group of motorcyclists, doing 80 miles per hour, gradually catches up to it. Upon reaching the AI self-driving car, they all slow down to 55 miles per hour. They completely surround the AI self-driving car. What should the AI self-driving car do?

You’ve maybe seen YouTube videos of groups of motorcyclists that have done this to human drivers. Presumably, you would just keep driving and try to avoid a confrontation. Suppose though that the motorcyclists start to slow their speed. The AI self-driving car will presumably need to slow down, or else it will hit the motorcyclists ahead of it, and it cannot change lanes because the motorcyclists are there too. Now what?

If you say that the AI self-driving car should slow down, it then takes us to the next step: imagine that the motorcyclists gradually come to a halt. They could essentially get the AI self-driving car to come to a halt, doing so on an open highway. Is that safe? Would you, the human occupants inside the AI self-driving car, want that to happen? Maybe you feel the motorcyclists are trying to threaten you, and they are readily exploiting the AI’s caution to make it happen.

For my article about robojacking of AI self-driving cars, see:

Here’s another similar kind of scenario.

You are in an AI self-driving car.

Unluckily for you, you’ve wandered into an area that has a riot erupting.

The AI self-driving car has come to a halt, paralyzed, because there are rioters completely surrounding the self-driving car. The rioters bang on the self-driving car and are aiming to get in and harm you. What should the AI self-driving car do?

For more about ethical dilemmas and AI self-driving cars, see my article:

Avoidance Often Not Feasible

Some would say that the AI self-driving car should not allow itself to get into such a situation.

That’s not much of a helpful answer.

Sure, if there’s an obvious situation that you can avoid, it would be handy if the AI could possibly predict a situation and avoid it.

In the case of the school children, it’s reportedly been indicated that the AI developers advised that the AI self-driving car not go into the muddled area to pick up the children, and instead find a less crowded area to park and wait. Though this seems perhaps sensible, I’d suggest it has downsides, such as causing the children to walk further to get to the car, increasing their chances of getting hit or of some other calamity occurring. Also, notably, it was not a solution devised by the AI, but instead relied upon the AI developers to suggest or devise.

The point being that having a skittish AI self-driving car that has to avoid situations that can lead to paralysis is certainly something to keep in mind, but it doesn’t seem to fully address the problem.

Also, we’d prefer that the AI is able to “reason” about what to do, rather than hoping or betting that the AI developers can find a workaround. In the real-world, the AI self-driving car has to do what a human driver might do, and not necessarily be able to “phone a friend” to get out of a jam.

Coping By Sharing

That being said, it is vital too that whenever AI self-driving cars find themselves in a paralyzing situation, the experience can be shared with other AI self-driving cars.

Most of the automakers and tech firms have set up a cloud-based system to allow for data collection and machine learning for their line of self-driving cars, via OTA (Over-The-Air) capabilities. Having these particular kinds of experiences shared into the cloud can be handy as a means of getting other AI self-driving cars to avoid like situations, or at least have some possibilities of what to do when such a circumstance arises.

For my article about OTA, see:

For my article about common sense reasoning and AI self-driving cars, see:

Another form of sharing among AI self-driving cars involves V2V (vehicle to vehicle communications).

This would be handy when an AI self-driving car has discovered a paralyzing situation, and it might forewarn other nearby AI self-driving cars about it. Besides perhaps staying away from the situation so as to avoid getting into a paralyzing predicament, it might also be possible that multiple AI self-driving cars might come to each other’s aid, and find a means to jointly get out of the situation. This could make use of swarm intelligence.
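To make the V2V forewarning idea concrete, here is a toy sketch of a paralysis alert message and a check of whether it touches another car’s planned route. Real V2V stacks use standardized binary message sets rather than JSON, and every field name and threshold here is my own invention for illustration:

```python
import json

def make_paralysis_alert(vehicle_id, lat, lon, cause, stuck_for_s):
    """Hypothetical V2V broadcast: a paralyzed car forewarns nearby
    cars about the location and nature of its predicament."""
    return json.dumps({
        "type": "PARALYSIS_ALERT",
        "vehicle": vehicle_id,
        "location": {"lat": lat, "lon": lon},
        "cause": cause,
        "stuck_for_s": stuck_for_s,
    })

def route_affected(alert_json, route_points, avoid_radius_deg=0.001):
    """Return True if any waypoint on this car's route lies near the
    alerted location, suggesting it should reroute or prepare."""
    loc = json.loads(alert_json)["location"]
    return any(abs(lat - loc["lat"]) < avoid_radius_deg and
               abs(lon - loc["lon"]) < avoid_radius_deg
               for lat, lon in route_points)
```

A receiving car could either steer clear of the paralyzing spot or, in a swarm-intelligence setting, coordinate with the stuck car to help open a path.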

For my article about swarm intelligence and AI self-driving cars, see:

There are other coping strategies for the AI self-driving car. It could potentially interact with the human occupants and maybe jointly identify a means to get out of the paralysis situation. This could be good, or it could be bad. If the human occupants offer helpful insights, it could be good. If the human occupants say something like “run them all down,” that’s not a solution the AI system should consider viable.

For my article about natural language processing and interacting with human occupants in AI self-driving cars, see:

Overall Aspects To Deal With

In quick recap:

  • Try to avoid paralyzing situations, if feasible
  • Seek to learn from paralyzing situations, doing so via OTA and cloud-based machine learning
  • Be able to recognize when a paralyzing situation is arising
  • Once in a paralysis, keep considering ways out of it
  • Keep watch of the clock to gauge how long it is lasting
  • Tendency toward impoliteness or aggressiveness as a possible paralysis buster
  • Reduce the bubble size but simultaneously increase the driving capability
  • Potentially confer with other AI self-driving cars via V2V about such situations
  • Other
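Several of the recap items could be tied together as a small state machine: detect an arising paralysis, clock how long it lasts, and pursue an escape maneuver. This is a toy sketch with assumed states and thresholds, not a proposal for an actual driving stack:

```python
from enum import Enum, auto

class State(Enum):
    DRIVING = auto()        # normal operation
    RISK_DETECTED = auto()  # blocked; a paralysis may be arising
    PARALYZED = auto()      # stuck beyond the time threshold
    ESCAPING = auto()       # executing an escape maneuver

def next_state(state, stationary_s, path_blocked, escape_found,
               paralysis_threshold_s=45.0):
    """Advance the paralysis-handling state machine one tick."""
    if state is State.DRIVING and path_blocked:
        return State.RISK_DETECTED
    if state is State.RISK_DETECTED:
        if not path_blocked:
            return State.DRIVING
        if stationary_s >= paralysis_threshold_s:
            return State.PARALYZED
    if state is State.PARALYZED and escape_found:
        return State.ESCAPING
    if state is State.ESCAPING and not path_blocked:
        return State.DRIVING
    return state
```

The “escape found” input is where the harder work lives: the creep maneuver, an assertiveness adjustment, V2V coordination, or conferring with the occupants.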

Currently, most of the automakers and tech firms aren’t giving much consideration to the paralysis predicament.

They tend to consider this to be an “edge” problem (one that is not at the core of the driving task per se). Many AI developers tell me that if the AI self-driving car has to wait until the school children disperse or the baseball parking lot becomes empty, that’s fine as a driving strategy, and meanwhile the human occupants can enjoy themselves in the car during the waiting time. I don’t think this is reasonable, and furthermore it ignores the often adverse consequences of having the self-driving car sit in a paralyzed state.

For my article about edge problems in AI self-driving cars, see:

It’s time to make sure AI self-driving cars are able to cope with potentially paralyzing situations.


There is a famous saying that oftentimes people fail at a task due to analysis paralysis.

They over-analyze a situation and thus get stuck in doing nothing.

You might claim that when I was in the woods and facing the wolf, I was overthinking things and had analysis paralysis. I don’t believe so. I was doing analysis and had ascertained that no action seemed to be the best course of action, for the moment, and remained alert and ready to take action, when action seemed suitable.

In the case of the pseudo-paralysis for AI self-driving cars that I’ve been depicting here, I have not been focusing on instances where the AI self-driving cars get themselves into an analysis infinite loop and suffer analysis paralysis.

Instead, the situation itself is causing the paralysis, as dictated by the desire to avoid injuring others, along with the need to remain alert and ready to make a move whenever suitable.

That’s the kind of paralysis we can overcome with better AI.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.