Ten Human-Driving Foibles and Self-Driving Car Deep Learning Counter-Tactics


By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor

When trying to teach a self-driving car to drive defensively (see my column about defensive driving for self-driving cars), it is helpful to provide insightful supervised guidance while the AI's deep learning is figuring out how to deal with roadway traffic. If you feed lots and lots of driving and traffic data into a machine learning algorithm blindly, without any supervision (i.e., unsupervised), it may or may not spot key trends that we already know exist. Rather than leaving the machine learning algorithm to its own ends and praying that it finds useful patterns, it is appropriate to nudge it toward aspects that will help the automation ultimately drive a car well and nimbly as it ambles along real-world roads.

Having done an analysis of human drivers and their driving foibles, we can use that analysis to point the deep learning in the appropriate direction. I mention this because there are some self-driving car makers that are pretending that a self-driving car does not have to contend with the human drivers that are also on the roadways. These head-in-the-sand developers are envisioning a world in which all cars on the road are self-driving cars. In this pretend world, the self-driving cars are all polite and civil to each other. They communicate with each other, letting each other know where they are going and ensuring that no two cars will butt heads. You go, says one car to the other; no, you go, says that car in return. What a wonderful world of automation that cooperates with other automation. By banning humans from driving cars, this dreamland is a utopian car driving nirvana.

Wake up and smell the roses! This vision is a crock. It will take decades upon decades for the hundreds of millions of existing cars to eventually become self-driving cars. We are going to have a mix of human driven cars and self-driving cars for the foreseeable future. Some even doubt that we will ever have an all-self-driving-car environment, believing that humans will demand the right to keep driving a car. That being said, there is also the view that those human drivers will ultimately be overridden by the AI of the self-driving car, when needed, reversing today's roles of the human driver overriding the AI. In essence, the future is one that allows humans to drive, but the self-driving car knows they aren't that good and so it will take over from them when it chooses to do so. Big Brother, right there in your own car.

Anyway, facing today's reality of having self-driving cars mixing with human driven cars, we need to ensure that the self-driving car is savvy about human drivers. There is the famous case of a self-driving car that came up to a four-way stop and came to a proper halt. It then wanted to move ahead, but it saw that a human driven car was also stopped across the intersection. Even though the self-driving car arrived a few seconds before the human driven car, and therefore strictly speaking had the right of way, the human driven car did a classic "rolling stop" (never coming to a full stop), and the self-driving car therefore decided to let the human driver go first. It turns out that another human driver did the same thing at that stop sign, and one after another, other human drivers did so, repeatedly, while the self-driving car sat there not moving because it was programmed to let those other cars proceed until it was the self-driving car's turn to go. The self-driving car was playing by civility rules, while the human drivers were gaming it. It is the same as if the self-driving car had been a teenager learning to drive: the more experienced drivers would have taken advantage of the timid teenage driver in like fashion.

What does this tell us? We need to make sure that self-driving cars are wise to the tricks of human drivers. Human drivers have evolved a myriad of corner-cutting approaches to driving. Many of these tricks are not particularly legal. On the other hand, they are not so illegal that they can get the human driver readily pulled over and arrested for unsafe driving. Humans have found ways to stretch the boundaries so that unsafe driving comes close to passing as safe driving, yet when exposed to the glare of proper driving practices it is clearly a lousy and potentially illegal way to drive.

In our self-driving development AI lab, we've been studying how human drivers drive. We are then taking large collections of driving data and combining them with guidance to help the deep learning algorithms discern patterns involving human driving foibles. We don't need to have the deep learning algorithm start from scratch and wildly look for human driving foibles, which would consume tremendous amounts of processing time and might not even find what we know it should be finding anyway. So, we give the deep learning supervised guidance to get it into the right frame of mind, so to speak. This is equivalent to sitting with a teenage driver that is first learning to drive, and pointing out the other drivers around them and the trickery they are doing. You can then have the teenage driver realize how to make sense of the morass of sensory data coming at them, and devise counter-tactics to deal with the human driver foibles.
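As a loose illustration of the supervised-guidance idea, here is a minimal sketch. The features (speed relative to the limit, lane changes per minute, speed variance), the foible labels, and the toy nearest-centroid learner are all my own stand-ins for a real deep learning pipeline; the point is only that labeled examples steer the learner toward foible categories we already know exist, rather than hoping it finds them unsupervised.

```python
# Toy supervised "foible" classifier: a nearest-centroid stand-in for a
# real deep learning model. Features, labels, and values are illustrative.

LABELED_EXAMPLES = [
    # (speed_vs_limit_mph, lane_changes_per_min, speed_variance) -> label
    ((-15.0, 0.2, 1.0), "slow_poke"),
    ((-12.0, 0.1, 2.0), "slow_poke"),
    ((5.0, 6.0, 4.0), "crazy_lane_changer"),
    ((3.0, 5.0, 3.0), "crazy_lane_changer"),
    ((0.0, 0.3, 12.0), "stutter_stopper"),
    ((1.0, 0.2, 14.0), "stutter_stopper"),
]

def train_centroids(examples):
    """Average the feature vectors per label (the supervised guidance)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in vec)
            for label, vec in sums.items()}

def classify(centroids, features):
    """Assign the nearest foible centroid to a newly observed driver."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], features))

centroids = train_centroids(LABELED_EXAMPLES)
```

A real system would learn far richer features from raw sensor data, but the labeled categories play the same nudging role described above.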

Some of my AI colleagues have warned me that my self-driving car system might not only learn counter-tactics (which is desired and what I hope to have it learn), but also learn to make use of these human foibles in its own driving. This would seem at first glance to be an adverse consequence of the deep learning. To some degree, though, I am actually seeking for this to happen. Allow me to clarify. I am not aiming to have a self-driving car that drives in an unsafe manner, nor one that drives illegally. At the same time, you cannot expect a self-driving car that mixes with human drivers to always act in some puritan manner. Let's use the four-way stop example.

At the four-way stop, the self-driving car came to a proper and full stop. It then watched for the other cars to do the same. The other cars were aggressive and inched forward. The inching forward triggered the self-driving car to remain standing still, and one by one the human driven cars did the inching trick. We all do this to each other. The self-driving car could have opted to inch forward, after having made the legal and complete stop, and thus challenged the other human driven cars. By challenging them, some of those human driven cars would likely have stopped, since they would perceive that the other car is edging ahead. It's a daily game of chicken out there on the roads and byways of the world. A savvy self-driving car needs to know how to play the game of chicken. No more mister nice guy; it's time for the self-driving car to grow up and put on some boxing gloves.
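The measured assertiveness described here could be sketched as a simple decision rule. This is a hypothetical policy of my own construction, not any production system's logic; the conditions and action names are assumptions:

```python
def four_way_stop_tactic(made_full_stop, arrived_first, others_inching):
    """Hypothetical four-way-stop policy: after a legal full stop, a car
    that arrived first asserts its right of way by inching forward when
    other drivers are gaming it with rolling stops."""
    if not made_full_stop:
        return "stop"              # the legal full stop always comes first
    if arrived_first and others_inching:
        return "inch_forward"      # play the game of chicken, within the law
    if arrived_first:
        return "proceed"           # ordinary right-of-way
    return "yield"                 # someone else genuinely arrived first
```

The key design point is that the assertive move only ever follows the legal stop, so the car challenges the gamers without itself driving illegally.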

I include here ten of my favorite human driving foibles. There are more. This is a representative sampling that is illustrative of what it takes to make a self-driving car savvy rather than a timid and naïve teenage driver.

The Slow Poke

This is a human driver that moves along at speeds that hamper traffic flow. They are often going well below the speed limit. It is as though they cannot seem to find the accelerator pedal. Though not an illegal act in itself, it can be unsafe and cause other traffic to make excessive attempts to get around the slow poke, ultimately creating an unsafe driving situation. This could be ticketed, as it is an act that creates a road hazard for traffic. Counter-tactic: The deep learning is guided toward discovering that by detecting a slow poke ahead of time, it is best to switch lanes if possible, prior to coming upon the slow poke. If the self-driving car gets caught directly behind the slow poke, it is now stuck in the slow poke procession, and trying to get out will be riskier than if it had avoided getting jammed behind it to begin with.
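That counter-tactic could be sketched as a small decision rule. The 10 mph and 50 meter thresholds are illustrative assumptions of mine, not values from any actual system:

```python
def slow_poke_tactic(lead_speed_mph, speed_limit_mph,
                     gap_m, adjacent_lane_clear):
    """Decide how to handle a suspected slow poke ahead.

    Illustrative thresholds: a vehicle more than 10 mph under the limit
    is treated as a slow poke; 50 m is the gap below which a lane change
    is deemed riskier than planned-ahead avoidance.
    """
    if lead_speed_mph >= speed_limit_mph - 10:
        return "stay"                      # not a slow poke
    if gap_m > 50 and adjacent_lane_clear:
        return "change_lane_early"         # escape before getting jammed
    if adjacent_lane_clear:
        return "change_lane_cautiously"    # already close: riskier maneuver
    return "follow_at_safe_distance"       # boxed in: match the slow poke
```

The ordering encodes the column's point: the early lane change is the cheap move, and everything after it is damage control.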

The Crazy Lane Changer

This is the human driver that can't seem to find a lane they are willing to stick with. They recklessly jump into one lane, then over into the next lane, then back into the lane they were in. This is often fruitless because of slow traffic that is bumper-to-bumper, but the crazy lane changer appears to be brainless and somehow thinks that rapid lane changes are going to get them faster progress. Not illegal per se, but it creates an unsafe traffic condition that can certainly be ticketed. Counter-tactic: Stay out of the way of the lane changer, as they are likely to cut off the self-driving car and leave little or no room for safety. Anticipate which lane they are going into next, and take defensive actions accordingly.

The Cut-You-Off

This is the human driver that opts to push into your lane and does so with just inches to spare in front of you. They cut you off. Often, they do this without regard to the safety of others. Sometimes they don't even realize what they have done and are oblivious to other traffic. Counter-tactic: The AI needs to be observant and see whether the cut-you-off is doing this ahead of the self-driving car and nearly hitting other cars. If so, the self-driving car needs to be extra careful when nearing the cut-you-off and be ready to slow down, even applying the brakes to show brake lights to the cars behind the self-driving car, warning them about a potential sudden stop or slowdown.

The No Brake Lights

This is the human driver that either has brake lights that don't work or opts not to use their brakes (or uses the parking brake to slow down, an old trick used when speeding past a cop and wanting to slow without making it obvious). Counter-tactic: Self-driving cars mainly use various distance sensors to detect what the cars ahead are doing. Some also use the camera to read the brake lights of the cars ahead. If using the brake lights as part of the sensor fusion, realize that the brake lights alone are not a sufficient indicator of what the cars ahead will do.
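The sensor-fusion point can be sketched as a rule that lets distance-sensor evidence dominate the camera's brake-light reading; the thresholds and state names here are illustrative assumptions:

```python
def lead_car_slowing(closing_speed_mps, brake_lights_on):
    """Fuse distance-sensor evidence with the camera's brake-light reading.

    Distance evidence dominates: a positive closing speed means the gap is
    shrinking, whether or not brake lights show (burned-out bulbs, the
    parking-brake trick, coasting). Brake lights alone only raise suspicion.
    Thresholds are illustrative assumptions.
    """
    if closing_speed_mps > 2.0:
        return "braking"       # gap shrinking fast: treat as hard braking
    if closing_speed_mps > 0.5:
        return "slowing"       # mild closure, lights or not
    if brake_lights_on:
        return "watch"         # lights without closure: stay alert only
    return "steady"
```

Note that `brake_lights_on` never upgrades the response on its own; it only prevents the car from fully relaxing, which is the "not a sufficient indicator" lesson in rule form.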

The Lane Straddler

This is the human driver that wants to be in more than one lane at a time, and not simply due to changing lanes. Instead, the human driver straddles two lanes and ends up blocking both. This is often done on highways when the human driver cannot readily see what the traffic ahead is like because the lane they are in has a big truck blocking their view. They then straddle the other lane, trying to figure out which lane is best to be in. Foolish and unsafe. Counter-tactic: The self-driving car should be cautious when going past the lane straddler, since it must do so in a lane that the straddler might suddenly decide to occupy. As the self-driving car comes upon the lane straddler, it should be looking for any sideways motion that might suggest the straddler is going to veer into the self-driving car's lane (and thus the self-driving car needs to adjust accordingly).
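That sideways-motion check might be sketched as follows, with made-up thresholds and action names standing in for a learned behavior model:

```python
def straddler_response(overlap_into_our_lane_m, lateral_speed_mps):
    """Decide how to pass a lane straddler.

    overlap_into_our_lane_m: how far the straddler already intrudes into
    our lane. lateral_speed_mps: positive when drifting toward our lane.
    The 1.0 m and 0.3 m/s thresholds are illustrative assumptions.
    """
    if overlap_into_our_lane_m > 1.0:
        return "do_not_pass"           # effectively blocking our lane too
    if lateral_speed_mps > 0.3:
        return "slow_and_wait"         # drifting toward us: hold back
    return "pass_with_wide_margin"     # pass, but leave room for a veer
```

Even in the benign case the default is a wide margin, reflecting the column's advice that the passing lane is one the straddler may claim at any moment.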

The Stutter Stopper

This is the human driver that speeds up, then slows down, then speeds up, then slows down. Maybe they are listening to music on their car radio that gets them to do this, or maybe it is just some kind of bad habit. It is annoying and can be unsafe, as it confuses other traffic. Counter-tactic: The self-driving car should try to detect the pattern of a stutter stopper and then accordingly plan to start and stop too if behind it, or switch lanes to go around it when feasible.
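The pattern detection could be sketched as a simple oscillation counter over a lead car's recent speed readings; the swing count and delta thresholds are illustrative assumptions, standing in for a learned detector:

```python
def is_stutter_stopper(speeds, min_swings=3, min_delta=2.0):
    """Detect a speed-up/slow-down oscillation in a lead car's speed trace.

    Counts direction reversals among speed changes larger than min_delta
    (units arbitrary); thresholds are illustrative assumptions.
    """
    # Keep only the meaningful speed changes.
    deltas = [b - a for a, b in zip(speeds, speeds[1:])
              if abs(b - a) >= min_delta]
    # Count how many times the direction of change flips.
    swings = sum(1 for a, b in zip(deltas, deltas[1:])
                 if (a > 0) != (b > 0))
    return swings >= min_swings
```

A trace like 30, 35, 30, 36, 29, 35, 30 flips direction on every step and trips the detector, while a steady climb does not.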

The Generous

This is the human driver that lets all other drivers cut them off. They are generous to a fault. Though this might seem like a safe way to drive, it actually creates confusion, since most other human drivers don't expect it. Then, when they see the generous driver being generous, they insist on getting the same generosity, and if it is not provided they go into reactive driving modes. Counter-tactic: The self-driving car needs to detect the generous driver and either scoot around them or leverage the generosity by speeding up or making another maneuver to get in front of them.

The Illegal Turner

This is the human driver that seems to believe they can start a right turn from the leftmost lane, or do a left turn from the rightmost lane. They get themselves into a tizzy because the turn they wanted to make is coming up, but they planned poorly for it and so make a radical maneuver to make the turn. Counter-tactic: The self-driving car can detect these illegal turners, as they usually start to block traffic by slowing way down and then edging into the next lane. By spotting the behavior early, the self-driving car can either give way and let the illegal turner do their thing, or get into a position that prevents the illegal turner from taking action.

The Close Follower

This is the human driver that hugs the car ahead of them. They are within inches of the other car. This allows insufficient stopping distance. If the car ahead suddenly slams on their brakes, the close follower will end up in the back seat of the car ahead. Counter-tactic: The self-driving car can usually detect the close follower when it is behind the self-driving car. Having made the detection, the self-driving car needs to purposely drive in a fashion that forewarns the follower about what is going on up ahead. This is an attempt to reduce the close follower's surprise if the self-driving car needs to suddenly brake. Another tactic involves moving over into the next lane to let the close follower go past, so that they end up tailgating some other car rather than the self-driving car.
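Both tactics can be sketched in one small rule. The one-second rear-gap rule and the action names are my own assumptions for illustration:

```python
def close_follower_tactic(rear_gap_m, own_speed_mps, lane_change_possible):
    """React to a tailgater detected behind the self-driving car.

    Uses a rough one-second rule (an illustrative assumption): a rear gap
    under one second of travel at our own speed marks a close follower.
    """
    if own_speed_mps <= 0 or rear_gap_m / own_speed_mps >= 1.0:
        return "normal"                # nobody dangerously close behind
    if lane_change_possible:
        return "move_over"             # let the tailgater pass us entirely
    return "early_gentle_braking"      # telegraph slowdowns well in advance
```

"Early gentle braking" is the forewarning idea: stretch every slowdown out so the follower sees brake lights long before any hard stop.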

The Erratic

This is the human driver that appears to be driving drunk. We don't know that the driver is racking up a DUI, but the way they drive sure seems like it. They speed up and slow down for no apparent reason, wiggle around in their lane, straddle lanes, don't use their signal when changing lanes, don't stop at the light, and make otherwise erratic moves. Counter-tactic: The self-driving car needs to detect the erratic driver and give them wide room. Staying behind the erratic driver is sometimes wise, but only safe if leaving lots of distance ahead. Trying to get ahead of the erratic driver might work, but the erratic driver might speed up and close the gap. Whether to stay back or move ahead depends on the specific erratic behavior. Best of all would be to choose a different route and get off the roadway where the erratic driver is.

The above ten types of human driving foibles are illustrative of what human drivers do. Their behavior is at times unsafe and can be outright illegal. But this is the way of humans. The ten types were depicted as though one human driver at a time is committing the foible. You can easily have more than one human driven car pulling the same stunts at the same time.

In other words, the self-driving car has to anticipate that an erratic driver might be up to their right, and meanwhile a close follower is coming up behind the self-driving car. Furthermore, a lane straddler might be a few car lengths up ahead to the left, and a cut-you-off is rapidly coming from the traffic behind the self-driving car.  The AI needs to handle simultaneous instances of each of the various human foible drivers.
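One simple way to sketch this simultaneity is to track a foible label per surrounding vehicle and let the riskiest neighbor set the overall caution posture. The labels, risk weights, and posture names here are assumptions for the example, not any real system's taxonomy:

```python
# Illustrative risk weights for the detected foible types.
FOIBLE_RISK = {
    "erratic": 3, "cut_you_off": 3, "close_follower": 2,
    "lane_straddler": 2, "crazy_lane_changer": 2, "slow_poke": 1,
}

def overall_caution(tracked_vehicles):
    """tracked_vehicles: dict of vehicle id -> detected foible label.
    Returns a caution posture driven by the riskiest neighbor."""
    if not tracked_vehicles:
        return "normal"
    worst = max(FOIBLE_RISK.get(foible, 0)
                for foible in tracked_vehicles.values())
    return {0: "normal", 1: "monitor", 2: "widen_margins",
            3: "defensive_max"}[worst]
```

A real planner would of course weigh each vehicle's position and predicted path, not just a label, but the worst-neighbor-wins shape captures the idea of handling several foible drivers at once.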

This becomes a complicated game of chess, as if several chess boards are in play at the same time, with moves being made by different players. The AI needs to consider the next moves for each of them, and how their moves and the counter-moves of the self-driving car will play out. One human driver can exhibit multiple types of bad driving behavior. Multiple human drivers can do so too, each in isolation. In addition, multiple human drivers can react to each other, sparking each of them to commit even more of the human foibles.

Today's game playing AI systems are focused on strategies against one opponent at a time. Driving a car involves multiple players and a multitude of strategies. It is also a game of life and death, since whatever the AI decides to do when driving a car can have quite serious consequences. Unlike a game of poker or chess, the wrong move can send the self-driving car directly into the path of an erratic human driver and force the two to collide. Waving your hands and saying the erratic driver should be considered at fault does little to compensate someone that gets injured or killed in such a collision. Self-driving cars are the mecca of AI game playing, and for those of you that love developing deep learning for game playing, you ought to come over to the self-driving car realm and play the most complicated and serious game there is: driving a car. It's not so easy.

This content is original to AI Trends.