By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor
Most self-driving car AI capabilities are focused right now on rules-of-the-road driving practices. Developers are aiming to make sure that a self-driving car will come to a stop at a red light, and that it will abide by the maximum speed limit while going down the road. A self-driving car should stay within its lane on the freeway. It should react when another car decides to veer into its lane. It is supposed to react if there is suddenly a blown tire lying up ahead and blocking the road. These “reactive” kinds of smarts are crucial for a competent self-driving car.
These reactive practices are the same kinds of tactics that a novice teenage driver first learns when getting behind the wheel. The timid teenager wants to make sure they don’t violate any laws and knows that ultimately they will need to be reviewed by a Department of Motor Vehicles (DMV) official to show that they can properly operate a vehicle. The thing is, being a reactive driver is only half of the picture. You cannot always just react to a given situation. Instead, experienced human drivers know that they need to be on the alert to predict and try to avoid dangerous circumstances. We expect human drivers to be able to drive defensively. A defensive driver is one who is constantly scanning the driving environment, anticipating what might go wrong, and figuring out the maneuvers that will hopefully avoid calamities.
Right now, very little attention is going toward true defensive driving for self-driving cars. This somewhat makes sense, in that the AI developers are seeking to at least make a self-driving car do what the neophyte teenage driver can do. Once that foundation is established, the goal is to move up the food chain and get the self-driving car to be more like an experienced driver. That being said, having self-driving cars on the roads that are only novice drivers is like driving your car around a bunch of teenage drivers who are just learning to drive. When you see a teenage driver who is barely able to keep a car moving in traffic, you tend to give them a wide berth and pray that they don’t get themselves into a bad situation. The teenage driver might have the best of intentions as they drive, but without sufficient defensive driving techniques they are likely to cause an accident. The same can be said of the existing self-driving cars. They might have the best of intentions, but their apparent lack of defensive driving skills is going to get someone killed.
Let’s use an example of everyday driving to illustrate what I mean by defensive driving. Suppose you are coming up to an intersection and you see that the light is just turning to yellow. You are moving along at say 45 miles per hour and need to decide whether to “go for it” and make the yellow before it goes to red, or whether to come to a halt by braking the car before you enter into the intersection. This happens to you a thousand times a week as you drive around town. Time and again, you are judging whether you should push it and make the light, or whether you should call it quits and come to a stop and wait for the light to go through its next cycle. We generally aren’t even explicitly aware that we make these decisions as it has become a routine cognitive game that we have learned by years of driving.
Some human drivers are relatively stupid and simply gauge whether they can make the light or not, and often do not think through the consequences of their decision. Other drivers are more reflective and think about the consequences. For example, if there is a car behind you, you might be thinking about what action that driver might take related to the yellow light. Suppose you decide to hit your brakes, but the driver behind you doesn’t and instead tries to gun it and make the light; you’ll end up with the car behind you plowing into your car. Or, suppose there is a car ahead of you, and you are trying to decide what to do while also anticipating what the car ahead of you is going to do. If you decided to give the car some added gas to accelerate into the intersection, but the car ahead of you suddenly slammed on its brakes, you’d plow into them.
There are other considerations too. You probably would scan the other lanes that are currently at a red light and that are eagerly awaiting the light to go green for them, which will happen when the light goes red for you. Suppose there is a car edging forward and you can see that its driver is going to try to rush into the intersection the split second that the light goes green for them. You might calculate that if you enter on the yellow and it turns red while you are halfway through the intersection, the other car might spring forward and ram into you. There might also be pedestrians wanting to cross the intersection, or bicyclists, or motorcyclists. All of these are other “actors” in this driving environment, and you need to ascertain what they might do.
A good defensive driver is mentally calculating these factors. The decision to run the yellow or not is one that involves measuring the risks of doing so. As a defensive driver, you know that others can take actions that might put you at further risk, especially if you are taking a risk already. These are aspects that a novice teenage driver is not even close to considering. They are simply looking at the yellow light and, in a very narrow bandwidth of mental computation, trying to decide whether to brake or accelerate to make the light. They don’t consider the car ahead of them, the car behind them, the cars waiting to cross from the other direction, the pedestrians, the bicyclists, etc. It is too much for them to consider these aspects all at once. And, in many cases, they haven’t even figured out that they should be considering those other factors. Their knowledge of driving is limited to the rules-of-the-road and reactive driving. They have little or no defensive driving capability as yet.
Self-driving cars are currently in the same boat, so to speak. They aren’t looking at a range of factors and doing proper defensive analyses. In the case of the yellow light, self-driving cars are just like that teenage novice driver. The self-driving car uses its camera to detect that the intersection light is yellow. The radar of the self-driving car gauges the distance to the intersection and the width of the intersection. Using the existing speed of the car, the system calculates whether at its present speed it can reach the intersection and cross through it. If it is going to be tight, the self-driving car AI tries to figure out whether speeding up is possible. Can the car accelerate fast enough, given the distance and however fast the car can speed up, and make it into the intersection and through it before the red light? If these calculations show that it is not feasible, then the option “obviously” involves braking the car. It is assumed that cars behind the self-driving car will “of course” brake likewise.
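To make the reactive calculation concrete, here is a minimal sketch of the yellow-light feasibility check just described. All of the names, thresholds, and physics constants are my own illustrative assumptions, not any vendor’s actual implementation; a real system would use far richer kinematics and sensor uncertainty.

```python
# Hypothetical sketch of the yellow-light decision described above.
# Constants and function names are illustrative assumptions only.

YELLOW_REMAINING_S = 4.0   # assumed time left on the yellow
MAX_ACCEL_MPS2 = 3.0       # assumed comfortable acceleration limit

def can_clear_intersection(speed_mps, dist_to_entry_m, intersection_width_m,
                           yellow_remaining_s=YELLOW_REMAINING_S,
                           max_accel=MAX_ACCEL_MPS2):
    """Return True if the car can enter and fully cross before the red."""
    total_dist = dist_to_entry_m + intersection_width_m
    # Distance coverable at current speed plus (assumed) full acceleration:
    reachable = (speed_mps * yellow_remaining_s
                 + 0.5 * max_accel * yellow_remaining_s ** 2)
    return reachable >= total_dist

# 45 mph is roughly 20.1 m/s; 60 m to the stop line, 20 m wide intersection:
print(can_clear_intersection(20.1, 60.0, 20.0))
```

Notice that this sketch, like the novice driver, considers only its own kinematics; nothing in it models the car behind, the car ahead, or the cross traffic.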
In fact, if the humans driving behind the self-driving car did not brake when the self-driving car opted to brake, most AI developers would declare that the stupid human drivers were at fault. The developers would point out that the AI self-driving car made the “right” decision to not run a red light. Those dolt human drivers did not react fast enough to brake and that’s their fault. Down with humans! If we only had all self-driving cars, then the other self-driving cars behind the car that stopped would have all come to a halt too. Problem solved. Just make sure all cars are self-driving cars.
I’ve heard this argument so many times, and each time it makes me shake my head in exasperation. These AI developers are living in a dream world. Their dream world of all self-driving cars is many years away, likely decades away. If they want to have self-driving cars on the roads now, they need to give up on the futuristic utopia and program the self-driving car to cope with the realities of being on the road with human drivers. Human drivers are going to be dolts at times, and that’s just a fact. It is up to the self-driving car to practice defensive driving tactics. It needs to judge whether the car behind it is likely to stop if it decides to stop. Has the human-driven car been riding on the bumper of the self-driving car? Has it shown aggressive driving? These are characteristics that we look for as defensive drivers. Self-driving cars need to do the same. Otherwise, they will be as bad as the teenage novice drivers and cause lots of accidents that could have been avoided.
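The judgment about the car behind could be sketched as a crude risk heuristic built from the cues mentioned above, tailgating and aggressive behavior. Every field name and threshold here is a hypothetical stand-in for what a production perception stack would actually track:

```python
# Illustrative heuristic for judging whether the trailing car is likely
# to fail to stop. Thresholds and names are assumptions for the sketch.

def rear_car_stop_risk(gap_s, recent_lane_changes, hard_accels):
    """Crude 0-to-1 risk that the trailing driver will NOT brake in time.

    gap_s: time headway in seconds to the car behind
    recent_lane_changes / hard_accels: counts observed over the last minute
    """
    risk = 0.0
    if gap_s < 1.0:            # riding the bumper
        risk += 0.5
    elif gap_s < 2.0:          # following closely
        risk += 0.2
    risk += min(0.3, 0.1 * recent_lane_changes)   # weaving between lanes
    risk += min(0.2, 0.1 * hard_accels)           # gunning it repeatedly
    return min(1.0, risk)

# A tailgater who has been weaving and accelerating hard:
print(rear_car_stop_risk(0.8, 3, 2))
```

A high score might tip the yellow-light decision toward proceeding through rather than braking hard, exactly the kind of trade-off a defensive human driver makes instinctively.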
I’ll give you another example of defensive driving, and one that I am sure you’ve experienced multiple times. Each morning as I am driving to work on the freeway, I drive past a major exit of the freeway that leads to the LAX airport. The exit ramp is miles long, and purposely so. Human drivers trying to get to the airport are all scrambling onto that exit, often in a panic because they are late for their flight. The volume of cars is tremendous. Eventually, they back up onto the freeway, as there are so many cars trying to get down that exit ramp. Have you seen this kind of driving situation before? I assume so.
Here’s where the defensive practices come into play. With the “slow lane” of the freeway now getting clogged by the backed-up cars trying to get down the exit ramp, the other lanes of traffic are meanwhile flowing at 55 to 70 miles per hour. You have cars sitting nearly motionless in the rightmost lane, and immediately next to them are vehicles zipping along at a superfast speed. This is an accident waiting to happen. What tends to happen is that a car stuck in the exit lane has a human driver who is upset about the long stationary wait, and so they suddenly try to jump out of the stalled lane and enter the lane with the zooming traffic.
You then have a car going 3 miles per hour merging into the path of cars going 70 miles per hour. These “idiots” often don’t consider the fact that they are going to disrupt those fast-moving cars. You’ll then see the fast-moving cars either having to brake severely to avoid hitting the interloper, or swerving into the lane to their left. But swerving into the lane to their left causes the cars in that lane to also be disrupted. Those cars then either start to brake or they swerve too. It is a morning dance that I see each day. It is a dance that typically leads to at least a fender bender at some point, or worse, it sometimes leads to horrid accidents and injury or death to those involved in the dance.
If we had a self-driving car driving in the lane to the left of the exit lane, and if the self-driving car is not practicing defensive driving tactics, it might be going along in the lane at the maximum speed limit without a care in the world. It “sees” that its lane is open and that it is free to proceed. It is not anticipating that a car from the bogged-down lane might jump in. As such, when a car jumps into its lane, the move will be unexpected. The self-driving car now has little or no reaction time. It might not be able to swerve without directly causing an accident. It might opt to brake, but then have cars behind it ram into the self-driving car.
Had the self-driving car been smart enough to do defensive driving, it might have recognized that there were stalled cars and estimated the chances that one of them might suddenly jump into its lane. It might already be scanning the driving environment, trying to find the safest path away from any such car that jumps in. The self-driving car might already be tapping its brakes lightly, giving a signal to the car behind it to be watchful that the self-driving car might need to brake soon. The self-driving car might decide to proactively move into a lane further to the left and avoid the chance of a car jumping into the lane it was in. These are all prudent defensive driving approaches. Few AI developers are encoding this into self-driving cars currently. Instead, the AI developers are assuming that the self-driving car will figure this out via machine learning (see my recent column on Machine Learning for self-driving cars and the limitations thereof).
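The proactive tactics just listed can be sketched as a simple decision rule: if a nearly stopped lane sits immediately to the right, treat the cut-in risk as elevated and prefer moving left or pre-braking. This is my own hypothetical illustration under assumed names and thresholds, not how any actual self-driving system is coded:

```python
# Sketch of the defensive tactics described above for the stalled-exit-lane
# scenario. Thresholds and action labels are illustrative assumptions.

def defensive_action(own_speed_mps, right_lane_speed_mps, left_lane_clear):
    """Pick a defensive maneuver when driving beside a possibly stalled lane."""
    speed_gap = own_speed_mps - right_lane_speed_mps
    if speed_gap < 10.0:
        return "maintain"              # small differential, low cut-in risk
    if left_lane_clear:
        return "change_lane_left"      # move away from likely cut-ins
    return "pre_brake_and_cover"       # tap brakes, widen the following gap

# Zipping along at ~67 mph beside a lane crawling at ~3 mph, left lane open:
print(defensive_action(30.0, 1.5, True))
```

The point of the sketch is the ordering: the car acts on the anticipated hazard before any car actually jumps into its lane, rather than reacting afterward.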
The defensive driving that we all take for granted is something that we are partially taught when learning to drive, but it is also learned through the experience of driving. In the case of the LAX exit ramp, if you drove past it on several successive mornings, you’d gradually see the pattern of cars that come to a halt, and then a random number of them that pop over into the next lane to try to extricate themselves from the stalled lane. We cannot just hope that self-driving cars will someday figure this out. We instead need to explicitly make sure that AI developers are building defensive driving into self-driving cars, and then in combination make use of machine learning so that the self-driving cars will improve as they drive. We cannot allow self-driving cars onto our roadways that are driving at a novice teenage level and pretend that their lack of defensive driving skills is acceptable simply because they have a rudimentary capability to navigate the roads. Instead, we have every right to expect that self-driving cars are able to drive defensively and avoid accidents, or at least try to do so. The art of defensive driving is crucial to all our well-being. May the force be with self-driving cars.
This content is original to AI Trends.