By Lance Eliot, the AI Trends Insider
There were two human chess masters playing a chess game the other day. It was a timed game, with each player allowed no more than five minutes to make a move on the chessboard. After making a move, a player would press down on the top of a small clock-timer on the table to signal that the move was complete. You’ve probably seen this in various chess tournaments.
At the start of the chess game, they were each slapping the timer as fast as they could get their hands on top of it. The initial moves in a chess game are often lightning fast, since the opening maneuvers are pretty much all well studied and practiced. The same tends to happen toward the endgame of a chess match, when there are usually so few pieces left that the moves are relatively obvious and the players make them quickly.
Mid-game, the moves tend to slow down. This is muddied territory. The chess players opt to study each move very carefully and must contend with the combinatorial explosion involved. By this, I mean that when a player considers moving a piece to a particular square on the chessboard, they try to think ahead to the counter-move by their opponent, then to their counter to that counter-move, and so on. In chess this is referred to as levels of ply. You can try to think ahead one ply, which is not very hard to do, or a multitude of ply, which can be very mentally taxing.
In this particular chess match that I was watching, the chess players were in deep concentration at the mid-game. Sometimes a player would reach out as though ready to make a move, and then pull their arm and hand back to their lap or side. Sometimes a player would put their fingers onto a chess piece and act as though their fingers were glued to it. The player would look at the piece and the surrounding squares, and then finally, after what seemed like a long stretch of focus, would move the piece and gently slap the clock-timer. It was obvious that the two players were evenly matched and each was playing at top form. A battle royale, as it were.
I was transfixed by their play and was watching the chessboard and the players at the same time. They would at times lean forward, lean back, brush their hands through their hair, put their hands on the tops of their heads as though they would explode, and so on. And then the unthinkable happened. One of the players was seemingly concentrating deeply and apparently lost track of the time (even though there was a timer within inches of his reach). Believe it or not, the time ran out on the player. Upon realizing what had just happened, he appeared crestfallen. He had lost. He tried to argue that it was not a fair loss, since he had not actually been checkmated, nor had he conceded the game to the other player.
The rules were the rules, and he was politely informed that he had lost the chess match. He continued to protest. Do you think he was right that it was “unfair” to lose the match simply by running out his clock? Some attendees felt disappointed; they wanted to see the two titan players really go at each other, and a mere failure to move in time seemed an untoward reason to have the match end. Others pointed out that he knew it was timed, and yet he opted to let the time lapse. It was his own fault. Shame on him. No excuses. In fact, if the timer is not really going to be observed as part of the rules, then why even have a timer there at all? Just tell the players they can take as much time as they please. It could become a game of many hours, or maybe many days, weeks, or even decades.
However you feel about the chess game and the timer, one thing is seemingly clear: the chess player froze up. He had a human freezing problem. This is when you fail to do something that was otherwise expected to occur, and you “freeze” in place. Maybe you’ve seen baseball players who freeze up and fail to swing the bat at a pitch. Or maybe you’ve had a moment when you were out hiking and saw a bear, and you froze in position, not sure what to do. It is somewhat common for humans to occasionally freeze when they are expected to take some action or be in motion.
Freezing up can have adverse consequences. We’ve just seen that the chess player lost the game due to freezing. A baseball player might take a called strike on a pitch that otherwise could have been hit out of the ballpark. The bear you saw might come charging at you, and had you been nimbler you might have gotten away from its claws. There are some occasions when freezing might be good, such as with the bear: suppose that by freezing, you led the bear to consider you not a threat or even to fail to notice you. But overall, I think it’s fair to say that most of the time a freeze is probably not a good course of action.
People can freeze due to fright. The sight of a bear might be enough to cause your brain to go haywire and you become frozen in fear.
People can freeze due to having their mind go blank. When you see the bear, it might be that you have no prior experience of how to handle seeing a bear, and so your mind has nothing at the ready to tell your body what to do.
People can freeze even though their minds are apparently quite active. In the case of the chess player, he was so absorbed in the chess match that he simply lost track of time. Presumably, his mind wasn’t blank. Presumably, he wasn’t frozen in fear about what move to make. Instead, he was mentally calculating all sorts of moves and counter-moves, and it preoccupied his mind so much that he lost focus on another matter, namely the importance of the timer. He might not have been reflective enough about his own behavior to catch himself in the act of being totally focused on the chess move.
People can sometimes get confused about time and thus appear to have become frozen. Suppose the chess player thought he had another 30 seconds to go. When the timer went off, it did so sooner than he had expected. He appeared to be frozen, when in fact his mind was calculating the chess moves, and let’s say he was counting time too, but misunderstood or was mistaken about the amount of time he was allotted.
Another possibility is that you cannot make up your mind about something and so you intentionally let the clock run out. Suppose the chess player was considering moving the pawn or the queen. He kept going back-and-forth mentally trying to decide which to move. He was so caught up in this internal mental debate that he could not decide. He might have given up at that point and figured he’d just let the clock tick down, or maybe it got to the end of the allotted time and he decided he couldn’t make a decision so let fate decide. Of course, in this case, fate was already pre-determined in that the rule was that if you fail to move in time then you have lost the match.
Yet another possibility is becoming overwhelmed mentally and either misjudging time or misunderstanding it. Some chess players have a difficult time concentrating and so insist that no one be allowed to cough or sneeze or make any noise during the match. Suppose a chess player is playing, meanwhile people nearby are making noise, meanwhile he is worried about what he’s going to eat for dinner, meanwhile the other player is staring at him and so he is trying to stare back. Etc. His mind could be so filled up with all of this that he forgets about the timer or misjudges it.
For computer people, we might even say that the person mentally got into an infinite loop. In trying to decide between moving the pawn and the queen, maybe the chess player kept looping mentally, over and over. Move the pawn. No, move the queen. No, move the pawn. No, move the queen. On and on this goes. It is similar to a computer program that gets caught in a loop that won’t stop, known as an infinite loop.
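To make the analogy concrete, here is a minimal Python sketch of that pawn-versus-queen oscillation, along with the deadline guard that a chess clock effectively imposes. The move names and the deadline value are purely illustrative:

```python
import time

def choose_move(deadline_s):
    """Oscillate between two candidate moves -- the 'pawn... no,
    queen... no, pawn' pattern. Without the deadline check below,
    this would be a genuine infinite loop."""
    candidate = "pawn"
    start = time.monotonic()
    while True:
        # Each pass flips the preference, so the loop alone never settles.
        candidate = "queen" if candidate == "pawn" else "pawn"
        if time.monotonic() - start > deadline_s:
            # Deadline guard: commit to whatever is current and exit.
            return candidate

print(choose_move(0.01))
```

Without the monotonic-clock check, the `while True` loop would spin forever; the guard forces a commitment once the budget is spent, which is in effect what the chess clock does to a human player.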
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software and also caution fellow tech firms and auto makers to make sure that their AI for self-driving cars does not suffer a freeze.
The Freezing Robot Problem
The kind of freeze we are referring to is commonly called the Freezing Robot Problem (FRP).
Imagine that an AI self-driving car is driving down the street. Up ahead is a street sign that kind of looks like a stop sign. It is bent and mauled. You’ve maybe seen these before — it looks like some hooligan decided to hang on the stop sign and bend it into a pretzel. Now, it could be that it isn’t really a stop sign at all, and it just is some kind of art piece that resembles a stop sign. Or, maybe it is indeed a legally valid stop sign and that in spite of the now bent out-of-shape structure it is really truly a stop which must be obeyed.
In this murky aspect of trying to ascertain if the stop sign is true or not, it is possible to encounter the Freezing Robot Problem. Here’s how.
Let’s assume that the self-driving car is approaching the sign and has 10 seconds until it reaches where the sign is posted. We now have a time limit: a decision needs to be made within 10 seconds. If the decision occurs at 15 seconds, it’s too late; the self-driving car would likely have already driven past the sign, having merely continued forward since the AI issued no command otherwise. If it really is a stop sign, the self-driving car has then broken the law, and possibly endangered other cars and pedestrians.
Some AI self-driving car pundits claim that a self-driving car will never break the law, for which I say hogwash, and suggest you take a look at my article about illegal driving by AI self-driving cars: https://aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/
Even if the decision can be reached within the 10 seconds, let’s pretend that bringing the car to a halt requires at least 2 seconds. In that sense, making a decision to stop at the 10-second mark is too late. The real-world time available is only about 8 seconds, since there must be time left to come to a stop, if that’s the right thing to do.
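The arithmetic here can be captured in a tiny helper. This is just a sketch with parameter names of my own choosing:

```python
def decision_window(time_to_sign_s, stopping_time_s):
    """Time truly available to decide: the time until the car reaches
    the sign, minus the time needed to bring the car to a halt."""
    return max(0.0, time_to_sign_s - stopping_time_s)

print(decision_window(10.0, 2.0))  # → 8.0, matching the example above
```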
For further details about the inner workings of AI self-driving cars and cognition timing, see my popular piece: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
We now have the case of an AI self-driving car which is confronted with trying to determine if there is a stop sign ahead, and the AI has about 8 seconds to do so. There are potential adverse consequences if the AI decides to not stop the car and if the stop sign is truly a stop sign. Of course, you might be thinking, well, just bring the self-driving car to a halt anyway, as a precaution, even if it’s not a stop sign, but if you ponder this for a moment I think you’ll realize that suddenly coming to a halt can have equally undesirable consequences. The car behind you might ram into you. Etc.
AI self-driving cars tend to have five key stages for processing the world around them and taking action. Those five stages are:
- Sensor Data Collection
- Sensor Fusion
- Virtual World Model Updating
- AI Action Plan Updating
- Car Controls Commands Issuance
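In highly simplified form, the five stages can be sketched as a sequential pipeline. Every function name and return value below is a placeholder of my own; real implementations are vastly more involved and vendor-specific:

```python
# Placeholder stages -- each hands its output to the next.
def sensor_data_collection(world):     return {"images": world["camera"]}
def sensor_fusion(sensed):             return {"objects": sensed["images"]}
def update_virtual_world_model(fused): return {"model": fused["objects"]}
def update_ai_action_plan(model):      return {"plan": "continue"}
def issue_car_controls(plan):          return "command: " + plan["plan"]

def drive_cycle(world):
    """One pass through the five stages, each feeding the next."""
    sensed = sensor_data_collection(world)
    fused = sensor_fusion(sensed)
    model = update_virtual_world_model(fused)
    plan = update_ai_action_plan(model)
    return issue_car_controls(plan)

print(drive_cycle({"camera": ["frame-001"]}))  # → command: continue
```

The point of the sketch is the hand-off structure: if any one stage fails to release control to the next in time, the whole cycle stalls.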
For further details about my AI self-driving car framework and the five stages, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
The self-driving car that is trying to figure out the situation with the stop sign has to first contend with detecting the sign. This would primarily involve the visual sensors, such as the cameras on the self-driving car. The images collected by the cameras would be examined by the AI, often using a machine learning model to do so. This could be an artificial neural network that has been trained on thousands and thousands of images of street signs. The images of this particular suspected street sign would be fed into the neural network. The neural network might indicate that it is or is not a stop sign, or might provide an indication with some amount of probability attached to it. In essence, there’s, say, a 65% chance that it is a stop sign. This needs to be balanced against the implication that there’s a 35% chance it is not a stop sign.
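A toy illustration of that balancing act, where the threshold and the verdict strings are hypothetical choices of my own rather than any production logic:

```python
def assess_sign(p_stop_sign, threshold=0.5):
    """Weigh the detector's confidence that it IS a stop sign
    against the complementary chance that it is not."""
    p_not = 1.0 - p_stop_sign
    if p_stop_sign >= threshold:
        verdict = "treat as stop sign"
    else:
        verdict = "treat as not a stop sign"
    return verdict, p_stop_sign, p_not

verdict, p_yes, p_no = assess_sign(0.65)
print(f"{verdict}: {p_yes:.2f} vs {p_no:.2f}")
```

In practice the threshold itself would be a safety-critical design choice, since the costs of a false positive (stopping needlessly) and a false negative (running the sign) are far from symmetric.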
In theory, the sensor data collection stage should run, assess the image in this case, and then hand off to the next stage, sensor fusion. Let’s suppose, though, that for some reason the sensor data collection stage won’t yield to the sensor fusion stage.
Perhaps the sensor data collection and analysis stage starts to process the image of the stop sign, and the neural network gets bogged down trying to figure out what the image consists of. Meanwhile, the clock is ticking. After, let’s say, 10 seconds, the neural network finishes its work, reports that it is a stop sign, and then releases control to the next stage, sensor fusion. Guess what? It’s too late. The AI self-driving car has now proceeded to drive past the sign. Not a good thing.
This is similar to the human freezing problem. The chess player, for whatever reason, did not finish making his move in the time allotted. In this case, the sensor data collection and analysis did the same thing: it did not finish its efforts in time. This is the Freezing Robot Problem in action. Bad stuff indeed.
We can reuse the reasons why the human freezing problem occurs and recast them into the Freezing Robot Problem. Essentially, the same conditions that lead to a human freezing can generally be ascribed to the AI freezing. I don’t want to overly push the analogy, since I want to make clear that today’s AI is not the same as the human mind. Today’s AI is much cruder, by far. I am just saying that we can use the human freezing problem to inform us about the nature of the Freezing Robot Problem.
First, it could be that the AI is not aware of the time constraint and so it fails to abide by the need to get something done in time. In our view, the AI must be always watching the clock. It must always be estimating how much time is allowed. It must be “self-aware” as to how much time it is using, including even when it is estimating the amount of time that can be used.
Next, the AI can freeze due to becoming too absorbed in the matter at hand. In the case of the stop sign, the sensor data collection and analysis became overly absorbed and used up the available time. The overarching AI system needs to tell each component how much time it can use, and then must monitor the component. When a component goes beyond its allotted time, the overarching AI has to have some contingencies for what to do, especially if the result coming from the component is empty or only half-baked.
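One way to sketch such per-component time budgeting in Python is to run the component in a worker thread and enforce a timeout. The component, budget, and fallback below are all hypothetical stand-ins:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def run_with_budget(component, budget_s, fallback):
    """Run one pipeline component under a time budget. If it overruns,
    discard its (possibly half-baked) result and return a contingency."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(component).result(timeout=budget_s)
    except TimeoutError:
        return fallback  # contingency when the component has frozen
    finally:
        pool.shutdown(wait=False)  # do not block on a stuck worker

def slow_image_analysis():
    time.sleep(0.5)  # simulates a neural network bogging down
    return "stop sign"

print(run_with_budget(slow_image_analysis, 0.05,
                      "unknown -- proceed cautiously"))
# → unknown -- proceed cautiously
```

The `shutdown(wait=False)` lets the supervisor move on immediately; the runaway worker is abandoned rather than joined. A real system would also need to cancel or quarantine the stuck component, which is considerably harder than this sketch suggests.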
In terms of people freezing when their minds go blank, the analogy for the Freezing Robot Problem would be the neural network receiving an image unlike anything it has seen before. There is nothing within the network to help decide what the sign is. For this blank posture, there should be some kind of contingency for what the stage will do. If it reverts to some other, more exhaustive means of analyzing the image, that could be another wrong move, since it might use up more time.
For the circumstance of potential confusion leading humans to freeze up, imagine that the sensor data analysis conveys to the sensor fusion that the image is muddled and cannot be ascertained. Meanwhile, suppose the sensor fusion is designed such that if the sensor data analysis is incomplete, it loops back to the sensor data collection stage and tells it to try harder or try again. This could end in confusion as the time runs out.
This same example could also be likened to the infinite loop of choosing between the pawn and the queen on the chessboard. The sensor fusion loops back to the sensor data analysis. The sensor data analysis pushes to the sensor fusion and offers the same analysis it had done originally. The two get caught looping back and forth with each other. The third stage, the virtual world model updating stage, just keeps waiting for those two other stages to decide what’s going on.
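A simple guard against that ping-pong is to cap the number of retries the fusion stage may request. Everything below is an illustrative sketch of my own, not any actual self-driving stack:

```python
MAX_RETRIES = 2  # hypothetical cap on fusion asking for re-analysis

def analyze_image(image):
    """Stand-in for sensor data analysis; here it always reports
    'muddled' (None) to exercise the worst case."""
    return None

def fuse_with_bounded_retries(image):
    """Sensor fusion may ask for re-analysis, but only a bounded number
    of times, so the two stages cannot loop back and forth forever."""
    for attempt in range(1 + MAX_RETRIES):
        result = analyze_image(image)
        if result is not None:
            return result
    return "unresolved"  # hand a definite (if unhappy) answer onward

print(fuse_with_bounded_retries("bent-sign.jpg"))  # → unresolved
```

The key design point is that the waiting third stage always gets some answer within a bounded number of attempts, even if that answer is only "unresolved."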
The AI system can also become potentially overwhelmed. Suppose besides trying to decide about the stop sign, meanwhile a pedestrian darts out into the street. And, another car that’s behind the self-driving car is menacing the self-driving car by getting overly close to it. And, the internal engine sensors are reporting that the engine is overheating. And, one of the computer processors used by the AI has just failed and the AI needs to shift over to a different processor. A lot of things can be happening in real-time simultaneously. This could potentially lead to the AI “freezing up” and not performing some needed action on a timely basis.
There is another possible kind of freeze that could happen with the AI system, one a little different from the human case. With most humans, if someone freezes, you can usually snap them out of it. I suppose there are circumstances wherein a human goes into a coma, and in that case you might say they have a more definitive and long-lasting freeze. Anyway, I’m sure you’ve had your PC freeze up on you and refuse to do anything at all.
Blue Screen of Death Scary for a Self-Driving Car
Imagine that an AI self-driving car system gets itself into a frozen state akin to the Windows blue screen of death. That’s an especially scary proposition for a self-driving car. With the other types of robot freezing, there is at least a chance that the AI quickly overcomes the freeze, but in the case of a more catastrophic freeze, the question arises as to how soon the AI system can do a reboot or otherwise recover.
Furthermore, even if the AI can do some kind of reboot of itself, pretend that the AI self-driving car has been barreling along on the freeway at 70 miles per hour. If the AI system goes “unconscious” for even a few seconds, it can have devastating consequences. This is why the more careful auto makers and tech firms are building into their AI systems a lot of redundancy and resiliency. The act of driving a car is something that has life or death consequences. The AI system needs to be able to work in real-time and deal with whatever bad things might come at it, which also includes the notion that within itself it gets somehow mired or frozen.
See my article about adaptive resiliency and AI self-driving cars: https://aitrends.com/ai-insider/self-adapting-resiliency-for-ai-self-driving-cars/
The other aspect to consider is that if the Freezing Robot Problem does occur, what provision does the AI have to proceed once it gets out of the frozen mode? It’s like when your PC crashes and, after you reboot, Word shows you a recovered document that you were working on. The AI of the self-driving car has to know what the latest status was before the freeze and rapidly figure out what has happened since then. This kind of catch-up needs to happen in real-time, while presumably still properly controlling the car.
One approach involves having a core part of the AI system that is supposedly always on and nearly impossible to have go under. No matter how badly the rest of the AI gets, the core part is intended to still be in action. This could allow the self-driving car to then be slowed down and moved over to the side of the road, or take some other emergency action to try and safely get the self-driving car out of any driving situation that could lead to dire results.
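A bare-bones illustration of such an always-on core is a heartbeat watchdog. The class name, timeout, and emergency command below are invented for illustration:

```python
import time

class SafetyCore:
    """Toy always-on core: if the main AI stops sending heartbeats
    within the timeout, command an emergency slow-down. All names
    and the pull-over command are illustrative only."""
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def heartbeat(self):
        # Called by the main AI on every healthy cycle.
        self.last_beat = time.monotonic()

    def check(self):
        # Called continuously by the (never-off) core itself.
        if time.monotonic() - self.last_beat > self.timeout_s:
            return "EMERGENCY: slow and pull to the roadside"
        return "normal operation"

core = SafetyCore(timeout_s=0.05)
core.heartbeat()
print(core.check())   # → normal operation
time.sleep(0.1)       # the main AI "freezes" and stops beating
print(core.check())   # → EMERGENCY: slow and pull to the roadside
```

In a real vehicle, the core would run on separate, hardened hardware so that whatever took down the main AI cannot take down the watchdog too.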
Currently, these AI cores are often not extensively tested by the auto maker or tech firm. Right now, the goal is to get the AI self-driving car to perform as the driver of the car. Keep the car in the lanes. Make sure to stop at stop signs. And so on. They pretty much assume that when the AI core is invoked, it will work properly to put the self-driving car into a safer place, no matter where the car happens to be at that moment, even though the core is rarely tried in a wide variety of circumstances. Until we get more AI self-driving cars on the roadways, and have circumstances of the AI core getting invoked, we might not know how well these AI cores will really function when needed.
Take a look at the dangers of software neglect and AI self-driving cars: https://aitrends.com/selfdrivingcars/software-neglect-will-impede-ai-self-driving-cars/
A key principle for good AI self-driving car systems is that they must be continually aware of time. They must be watching the clock and determining whether anything that is running has gone amok, has perhaps overrun its allotted time, or otherwise is not dancing to the needs of the overall operation and safety of the self-driving car. The clock must get top priority. This also means that the clock watching cannot itself become a problem. Sometimes you can have a race condition wherein all that happens in a real-time system is the servicing of clock interrupts, and as a result actions get delayed or confounded.
There are going to be no-win situations that the AI system must deal with. There’s the famous trolley problem: the self-driving car must decide whether to run into a tree to avoid hitting a child in the street, even though hitting the tree might kill the occupants of the car. If the self-driving car does not swerve into the tree, it might kill the child standing in the street. The AI components involved in trying to make this kind of ethical decision could take too much time and by default run into the child, or might get into a tussle with each other internally and jam the whole AI system from functioning. This can’t be allowed to happen. Presumably. Hopefully.
See my article about the ethical dilemmas confronting AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
See also my article about responsibility and AI self-driving cars: https://aitrends.com/ai-insider/responsibility-and-ai-self-driving-cars/
One other aspect that needs to be considered is the human driver in a self-driving car that’s less than a level 5. A level 5 self-driving car has no human driver and takes on the entire driving task, while at less than a level 5 the self-driving car essentially co-shares the driving with the human, though the human driver is considered ultimately responsible for the self-driving car. Suppose the AI encounters the Freezing Robot Problem: will the human driver even know that it has happened? Will the AI have sufficient core capability to alert the human driver? Will the human driver have enough time to take over the controls of the self-driving car?
Imagine that you are out camping with a good friend. You are both hiking in the woods. All of a sudden, your friend freezes up. You can see that he’s frozen, but you don’t know why and you don’t know how long he will stay in that frozen state. You start looking around, suspecting that maybe a bear is nearby and your friend has seen it. There could, though, be lots of other reasons why your friend is frozen. As the human driver in an AI self-driving car, you could be in the same circumstance. You don’t know why the AI has frozen, and you don’t know how long it will last. This is ultimately going to happen, and we don’t yet know what the end result will be. Once we have a large volume of self-driving cars on the roadways, ones that are less than level 5, we’re likely to encounter situations that get the AI into the Freezing Robot Problem. Let’s aim to be ready for it, beforehand.
Copyright 2018 Dr. Lance Eliot
This content is originally published on AI Trends.