Predictive Scenario Modeling for Self-Driving Cars: Seeing the Future


By Dr. Lance B. Eliot, the AI Trends Insider

I see the future.

Do you see the future?

Humans seem to have a relatively unique cognitive ability to envision the future. We make predictions about what will happen. Sometimes the predictions are perhaps relatively obvious, such as when you see a coffee mug teetering on your office desk and predict that it will fall off the desk, hit the floor, shatter, and spill the coffee. Pretty much any of us can make that kind of prediction and “foresee” the future. Young children at first aren’t very good at such predictions, and even the coffee example might surprise them when the mug seemingly tumbles off the desk all of a sudden. Over time, though, they learn about existing events and circumstances and how to extend those events and circumstances into the future.

More complex predictions, though, begin to stretch our cognitive abilities. Let’s suppose you are planning a dinner party. You make arrangements to use a beautiful outdoor venue. The barbecue is set up, burgers and hot dogs are purchased, buckets of ice are obtained, party hats are bought, outdoor lights are strung, and you even buy some outdoor speakers to ensure that music will accompany the event.

So far, you are doing all of this based on a plan. The plan is shaped around a future event. You are desirous of having a dinner party at some point in the future. It is not taking place this instant; it is going to occur in the future. Someone outside of all this, watching you go to the store to buy food, get ice, and handle all these other details, might be confused because you don’t seem to be consuming any of the items at the moment you get them. You are hoarding these items. For what purpose?

If you were to thoughtfully review the clues, such as the hot dogs and burgers purchased, the fact that party hats were obtained, and so on, you could likely figure out that this person is planning a dinner party. That seems an apt prediction. Do you know with absolute certainty that’s what is going to take place? No, you don’t. Maybe it is a lunchtime party, or maybe the person is going to give away these items to someone else and there isn’t a single event taking place at all. There could be lots of other scenarios about where this is all heading. Generally, I think we would all agree that the odds seem pretty strong that the effort is aimed toward a future dinner party.

I mentioned earlier that predictions and seeing the future seem to be relatively unique to humans. There have been some research studies suggesting we are not alone on this planet in being able to see the future. A recent study by Swedish researchers Can Kabadayi and Mathias Osvath provides some fascinating insights into ravens. You might find it of interest that the two animals we most believe have some kind of predictive capabilities are apes and ravens. Apes and ravens continue to be popular subjects of various scientific cognition experiments, which hope to determine whether or not those animals really can do predictions and plan for the future.

Why should anyone care if animals can do this? If there are animals that can do so, it helps us to better understand what takes place when we are in the midst of making predictions and seeing the future. Besides studying how it is done in terms of behavior, we could also map their brains and try to ascertain where and how the brain does this. If we can map the brain to find where it does this, we have a heightened chance of mimicking the brain via, say, artificial neural networks, to see if we can get the same kind of behavior to arise.

I suppose a more cynical person might say we want to know more about apes’ and ravens’ abilities to predict the future because we are worried about them one day taking over earth from us humans. Maybe it will be called the planet of the apes and ravens.

Anyway, for the recent study on ravens, here’s what the Swedish researchers had to say: “Human planning is often characterized by decisions about future events that will unfold at other locations. The cognitive skill set that allows for planning outside the current sensory context operates across a range of domains, from planning a dinner party to making retirement plans. Such decisions require a host of cognitive skills, including mental representation of a temporally distant event, the ability to outcompete current sensorial input in favor of an unobservable goal, and understanding which current actions lead to the achievement of the delayed goal.”

I would like to highlight some aspects of what they laid out. Predictions about the future, and planning for that envisioned future, involve fascinating and crucial cognitive skills. You need to have some kind of mental model of a future event, one that is temporally distant. You need to overcome your existing sensory inputs, your eyes and your ears, and not simply process whatever is coming in at that moment, instead letting unobservable goals outcompete immediate ones. We normally expect an animal to simply see or hear what it sees or hears at the moment and take immediate action. A willingness to delay taking action, and to anticipate a future event, is not so easy a task.

You also need to line up current actions as part of the preparation for that future and currently unobservable goal. As the researchers emphasized: “Well-developed self-control is essential to planning because impulsivity keeps one stuck in the immediate context.” If you were to allow yourself to constantly react to immediate stimuli, you would get mired in the present and never be able to prepare for and then cope with the future. If you watch a small child, you’ll see this kind of behavior. Daddy, I want some ice cream. Not now, it will spoil your dinner. I want it now. I want it now! We all know it today as so-called instant gratification. Don’t put off today what you can instantly do; just be in a reactive mode. The problem with always being in a reactive mode is that you are likely to get walloped by the future and not be ready for it.

For the dinner party, suppose that after all that planning, and after doing the setup, it turns out that on the day of the dinner party the weather turns foul and heavy rains pour down. Yikes, one ruined outdoor party. The prediction of the future envisioned nice weather, but the reality turned out to be ugly weather. Could this also have been predicted? Yes, it could have. Again, you might not have known with certainty that the weather would get ugly, but you could have anticipated that it might, and thus taken additional precautions.

What does this have to do with self-driving cars?

I am glad you asked.

At the Cybernetic Self-Driving Car Institute, we are enhancing the ability of AI to do predictive scenario modeling for self-driving cars. This consists of having the AI create various future scenarios based on the current activities and states of the self-driving car, and identify future states that then allow for appropriate planning and the carrying out of the driving journey plan.

The other day, I was on the highway and driving around 65 miles per hour (I might have been going faster, but I refuse to confess here and maybe get a speeding ticket in a Minority Report movie kind of way). Anyway, if you are going 65 mph, you are covering about 95 feet per second. Every second of time, you are traveling nearly 100 feet in distance. The average car length is about 15 feet. This means that if you are allowing, let’s say, two to three car lengths between you and the car ahead of you, you are giving yourself a cushion of about 30 to 45 feet in case you need to come to a sudden stop. The problem is that at a pace of nearly 100 feet per second, and if you include the delay in your reaction time to hit the brakes, which studies suggest is typically on the order of one to two seconds, you’ll most likely ram into the car ahead of you if it has jammed on its brakes. Just wanted to let you know.
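To make that arithmetic concrete, here is a minimal sketch of the gap-versus-reaction-distance calculation described above. The reaction time and car length used are illustrative assumptions on my part, not measured values.

```python
# A minimal sketch of the gap-versus-reaction-distance arithmetic above.
# The reaction time and car length are illustrative assumptions, not measurements.

MPH_TO_FPS = 5280.0 / 3600.0  # 1 mph is roughly 1.47 feet per second

def reaction_distance_ft(speed_mph: float, reaction_time_s: float) -> float:
    """Distance covered before the brakes are even applied."""
    return speed_mph * MPH_TO_FPS * reaction_time_s

speed_mph = 65.0
car_length_ft = 15.0                 # assumed average car length
cushion_ft = 3 * car_length_ft       # a "three car lengths" following gap, about 45 feet
reaction_s = 1.5                     # an assumed perception-reaction time

consumed = reaction_distance_ft(speed_mph, reaction_s)
print(f"At {speed_mph:.0f} mph you cover about {speed_mph * MPH_TO_FPS:.0f} feet per second")
print(f"Reaction alone consumes about {consumed:.0f} feet; the cushion is only {cushion_ft:.0f} feet")
print("Cushion survives the reaction delay?", cushion_ft > consumed)
```

Even with a generous cushion, the reaction delay alone eats roughly 140 feet at highway speed, which is why the two-to-three car length gap is not nearly enough.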

I was watching the traffic around me and trying to be a good driver. As a good driver, you are supposed to be aware of the surrounding traffic and the roadway conditions at all times. You need to be alert and ready to react. If you are trying to watch a video that is playing on the central console of your car, or trying to tap a text message into your smartphone, you are not being alert and ready to react. I am pointing my accusing finger at some of you. Just saying.

I was behind a car. The car was doing the speed of traffic. I could have just regulated my own driving based on the car ahead of me (see my column on the pied piper approach to driving for self-driving cars). I like to think more broadly, so I was looking ahead of the car ahead of me. Let’s call the car directly ahead of me car number 1, or C1. The car ahead of C1 will be C2. And so on. This will make it easier for me to tell my story.

I was keeping a proper distance between me and C1. C1 was keeping at most one to two car lengths from C2. This is insufficient stopping distance. C2 was keeping almost no distance between itself and C3, essentially riding the bumper of C3. This is a classic driving suicide position. C4 was ahead of C3 by about four car lengths. We were all going around 65 miles per hour. The scenario right now is that we have me, C1, C2, C3, C4. We are all logically interconnected for a moment in time because we are on the same road, traveling in the same direction, going at roughly the same speed, and acting like a “pack” of cars, even though none of us knows each other and we have never met before.

It seemed to me that C1 was focusing solely on C2, and C2 was focusing solely on C3, and C3 was focusing solely on C4. I don’t know this to be a fact. It just seemed that way. I was watching all of them to be wary in case anyone in this chain of cars might sputter or do something untoward. I then noticed that C4, the car that is at the head of this pack, began to make a sudden swerve to the right. I could see their car do this, and it caught my immediate attention. Since it was a few car lengths ahead of C3, I instantly watched to see whether C3 was going to make a similar maneuver or not.

I was mentally making a prediction that perhaps there was some roadway debris up ahead. C4 was the first to encounter the roadway debris, due to being at the head of the pack. Its swerve was an early sign that something was amiss, and the debris prediction seemed plausible. If C3 also swerved, it would tend to confirm the scenario that C4 had swerved due to debris. I could then also predict that C2 would ultimately swerve. I could also predict that C1 would likely swerve. I also wondered whether any of this cascading swerving might produce other adverse consequences. Maybe instead of swerving, the driver of, say, C2 might panic and hit the brakes, either instead of swerving or in addition to swerving.

The fact that C4 swerved to the right was another clue. Perhaps the debris, assuming it was a debris-related situation, was at the far left of the lane. This would likely have the driver of C4 opt to swerve to the right, avoiding the debris. If the debris were on the right side of the lane, it seems likely that C4 would have swerved to the left, entering the fast lane momentarily in order to avoid the debris. A swerve to the right seemed to suggest that there was debris up ahead, sitting toward the left side of the lane.

Of course, there could be lots of other explanations. Maybe the driver of C4 has a bee in their car and they are trying to swat the bee and just happened to swerve to the right. Maybe the driver of C4 was watching a movie on their central console and momentarily lost control of the car and happened to swerve to the right. Maybe a fight is going on inside of C4 and a life-or-death struggle for control of the steering wheel is taking place. Maybe I watch too many movies about cars and car chases.
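To give a flavor of how that weighing of a debris hypothesis against the competing explanations could be made explicit, here is a toy sketch (not my actual system) of updating a belief in “debris ahead, left side of the lane” as swerves are observed. The prior and likelihood numbers are made up purely for illustration.

```python
# A toy illustration of updating belief in a "debris ahead, left of lane" hypothesis
# as swerves are observed. All probabilities below are invented for illustration only.

def bayes_update(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """Posterior probability of the hypothesis after one observation."""
    numerator = p_obs_given_h * prior
    denominator = numerator + p_obs_given_not_h * (1.0 - prior)
    return numerator / denominator

belief = 0.05                              # prior: debris in the lane ahead is unlikely
belief = bayes_update(belief, 0.8, 0.1)    # C4 swerves right: likely if debris, rare otherwise (bee, movie, fight)
print(f"After C4 swerves: {belief:.2f}")
belief = bayes_update(belief, 0.9, 0.1)    # C3 swerves the same way: strong confirmation
print(f"After C3 swerves too: {belief:.2f}")
```

The point is not the particular numbers; it is that each additional swerve in the same direction makes the debris scenario dramatically more credible than the bee or the in-car brawl.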

I mentally decided that C4 was swerving to avoid debris and that the debris was in the lane ahead, toward the left side of the lane. I decided that I would take action before the cascading string of cars ahead of me took their actions. This is a tough call because any of those other cars, C1, C2, or C3, could take evasive action that then gets me into further hot water as I am carrying out my plan of action. I decided that I would instantly switch into the fast lane. I was paying attention and knew that the fast lane was available. I figured that if I got into the fast lane, I would have more room to avoid the debris. I could use the fast lane and the shoulder to the left of the fast lane, if needed, to swerve to the left.

Would C1, C2, C3 opt to swerve to the right? If so, my being in the fast lane, to the left of them, would likely make me safer than staying in the slow lane behind them. Any of them could hit their brakes, and I might have slammed into them. Or any of them could strike the debris, making things even worse and less avoidable for me, since I was at the end of the pack. By swinging into the fast lane, I hoped that I would avoid the cascading game, have more avenues of escape, and be able to hit my brakes without any cars directly ahead of me that I might ram into.

Turns out, all of the above was pretty much correct. It was a remnant of a blown-out tire, sitting in the slow lane, just at the left edge of the lane near the lane markings. Each of C1, C2, C3 avoided it by swerving to the right. I avoided it by having gotten into the fast lane and passing it, since it was now to my right. No one got into an accident. I did think, though, that other cars coming up to this spot might fare a different fate. I hoped that the blown-out tire, and the reactions of other human drivers and cars, would not ultimately produce a deadly accident.

I appreciate that you’ve followed along in my story about the driving incident. Though it has taken me a little while here to describe the incident, it actually played out in a handful of seconds. Imagine the scenario in your mind. You have likely encountered similar situations. The whole thing happened very quickly. Some say these things are almost like they happened in a dream. Come and gone.

Today’s self-driving cars are not doing much about this kind of predictive scenario planning. One would say that most of today’s self-driving cars are the “monkey see, monkey do” variety. Whatever is happening directly in front of the self-driving car is the scope of attention of the AI. Does the car immediately ahead swerve or not, does it slow down or speed up, these are the factors used by the AI to decide what action to take for the self-driving car.

Suppose that the driving scenario I just described had happened to a self-driving car. Most self-driving cars today would not have particularly noticed that C4 swerved. All the AI would be concentrating on would be that C1 is still driving straight ahead and at the pace of the pack. You might say, well, Lance, so what, the AI would have been OK because it would have seen C1 swerve when it reached the debris and could have commanded the controls of the self-driving car to also swerve. Case closed.

But, I say, suppose the AI did observe C1 swerve, yet it took the AI a few moments to decide to swerve the self-driving car to also avoid the debris. This reaction time might have been long enough (a few extra split seconds) that the self-driving car might have hit the debris. We don’t know for sure whether hitting the debris would have been disastrous, but we can assume there is some probability that it could have been. Furthermore, the AI almost certainly would not have had enough time to switch lanes as I had done. This is because I had predicted the future, and it gave me more time to take evasive action.

That’s why we are working toward predictive scenario modeling for self-driving cars. Our cars should not be driven by AI that is the equivalent of a child. A child that instantly reacts to something is not what we want our self-driving cars to be. We want our self-driving cars to already know how to predict that the coffee mug teetering on the office desk will fall to the ground and shatter. Some say that the way we’ll get our AI toward this kind of predictive capability is via deep learning. Deep learning assumes that you have sufficient examples of driving behavior and situations for the AI to find patterns and derive solutions that can then play out in similar scenarios.

That’s one way to do it. Another and complementary way is to have known templates of scenarios, such as my example about the debris situation. The AI compares the existing situation to an extensive library of templates, trying to see if the scenario is one that has already been identified. This involves using probabilities, because the scenarios in hand are not necessarily going to be identical to the current situation. Likewise, the solutions, such as my swinging into the fast lane, will apply in some circumstances and not in others.
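As a rough illustration of the template idea, under my own simplifying assumptions, here is a sketch of matching the current situation against a tiny library of scenario templates. The feature names, templates, suggested actions, and scoring rule are all hypothetical; a real library would be far richer and the matching far more sophisticated.

```python
# A minimal sketch of template-based scenario matching. Feature names, templates,
# and the crude scoring rule are hypothetical, chosen only to illustrate the idea.
from dataclasses import dataclass
from typing import Dict

@dataclass
class ScenarioTemplate:
    name: str
    features: Dict[str, float]   # feature name -> expected value in [0, 1]
    suggested_action: str

def match_score(observed: Dict[str, float], template: ScenarioTemplate) -> float:
    """Crude similarity: 1 minus the mean absolute difference over the template's features."""
    diffs = [abs(observed.get(k, 0.0) - v) for k, v in template.features.items()]
    return 1.0 - sum(diffs) / len(diffs)

library = [
    ScenarioTemplate("debris_ahead_left_of_lane",
                     {"lead_pack_swerve_right": 1.0, "brake_lights_ahead": 0.2},
                     "move to the fast lane early"),
    ScenarioTemplate("sudden_slowdown_ahead",
                     {"lead_pack_swerve_right": 0.1, "brake_lights_ahead": 0.9},
                     "increase following distance and brake smoothly"),
]

observed = {"lead_pack_swerve_right": 0.9, "brake_lights_ahead": 0.1}
best = max(library, key=lambda t: match_score(observed, t))
print(f"{best.name} -> {best.suggested_action} (score {match_score(observed, best):.2f})")
```

The match score stands in for the probabilities mentioned above: a high score says the current situation resembles a known scenario closely enough that its associated action is worth considering.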

The scenarios are contextually based. The situation on the open highway differs from what happens when driving in the suburbs, and both differ from the city context. Thus, the AI must have scenarios that are contextually aware. The topmost AI strategic planning component of the self-driving car needs to know the existing context in order to select among scenarios and apply probabilities based upon that context (see my column on self-awareness for self-driving cars).

Notice that another facet of predictive scenario modeling involves looking ahead at the moves and counter-moves involved in the actions and counter-actions of other drivers. In AI that plays chess, we refer to each level of look-ahead as a ply. The deeper you look ahead, the more plies you are examining. At the start of a chess game, after the opening move there are 20 possible replies to consider. After six plies, there are on the order of nine million positions to explore. The overall point is that when the AI of the self-driving car is examining scenarios, it also needs to do a look-ahead, and the deeper the ply it can go, the more it has to judge what action to take. This, though, takes more processing time, and as I’ve indicated in my example, a car scenario plays out in real time, with often just a few seconds available to examine the scenario, make predictions, and then take action to control the car.
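Here is a back-of-the-envelope sketch of that trade-off between look-ahead depth and available time. The branching factor, per-scenario evaluation cost, and time budget are assumed numbers meant only to show how quickly deeper plies exhaust a real-time budget.

```python
# A back-of-the-envelope sketch of why deeper look-ahead gets expensive fast.
# The branching factor, per-scenario cost, and time budget are assumed numbers.

branching_factor = 5      # assumed plausible maneuvers per step (swerve left/right, brake, etc.)
eval_cost_s = 0.0001      # assumed seconds to evaluate one projected scenario
time_budget_s = 0.5       # assumed time available before the car must act

for ply in range(1, 9):
    scenarios = branching_factor ** ply
    cost_s = scenarios * eval_cost_s
    verdict = "fits the budget" if cost_s <= time_budget_s else "exceeds the budget"
    print(f"ply {ply}: {scenarios:>7} scenarios, ~{cost_s:.3f}s to evaluate ({verdict})")
```

With these assumed numbers the budget runs out around five or six plies, which is exactly why pruning, templates, and context awareness matter so much for real-time driving decisions.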

I urge other self-driving car makers to also step up their AI toward doing predictive scenario modeling. Without this, we are not going to get to a true self-driving car, which is referred to as a Level 5 (see my column on the Richter scale for self-driving cars). Today’s “monkey see, monkey do” AI is barely sufficient to properly and safely operate self-driving cars, since it assumes that if the self-driving car cannot figure out what to do, it merely hands control of the car to the human driver (I’ve explained in my column on the human factors for self-driving cars that this is eventually going to get things into hot water).

By the way, are you wondering what happened in the experiment with the ravens? The experiment tested the ravens to see if they could make tool-use decisions for an event that was 15 minutes in the future (a short-term prediction), and then another round involving an event that was 17 hours in the future (a somewhat longer-term prediction). The ravens seemed to perform pretty well. According to the researchers, the ravens did even better than the 4-year-old children who were given the same tasks as a comparison.

The jury is still out on making any firm conclusions about raven intelligence and their predictive capabilities (some have criticized various aspects of the raven experiment). Hey, maybe instead of trying to build AI that can drive a self-driving car, we could instead train ravens to drive cars. We would then have raven-driven cars. That’s a future prediction for you. Well, not very likely, and I hope I’ve not offended any ravens by saying so (don’t want them to come after me when earth becomes the planet of the ravens).

This content is originally posted on AI Trends.