By Dr. Lance B. Eliot, AI Insider for AI Trends
Here in California, we have carpool lanes, also known as diamond lanes (marked with a diamond symbol) and officially called High-Occupancy Vehicle (HOV) lanes. These special lanes are intended to alleviate traffic congestion by encouraging cars to carry two or more occupants, maximizing the people-carrying capacity of our roadways. If you’ve visited Southern California recently, you’ve likely experienced the non-stop bumper-to-bumper freeway traffic that we seem to have year-round now, in spite of these HOV lanes. It can be maddening and exasperating to take two hours to drive a mere 20 miles across town.
One interesting aspect of the carpool lanes involves their design. Most carpool lanes occupy the leftmost lane of the freeway, generally where you would expect the so-called fast lane to be. The carpool lanes used to be wide open, such that any car could wander into and out of them at any point. Studies showed that this was actually making traffic worse and leading to a heightened volume of car accidents, and that controlled entry and exit points for the carpool lanes made more sense. Drivers were wildly entering and exiting the carpool lanes, ramming into each other, often moving from a fast rate of speed into an adjacent lane proceeding at a much slower (or faster) clip. Imagine going 70 mph and suddenly merging to your right into a lane moving at 15 mph. It was a recipe for disaster.
The cost to erect physical barriers along miles and miles of carpool lanes was quite high, so instead the lanes were painted with double-yellow lines (meaning do not cross), coupled with occasional dashed or broken white lines (meaning okay to cross). Thus, in California, it is against the law to cross a solid double-yellow line; for any portion of the carpool lane marked with a solid double-yellow line, you cannot legally enter or exit the lane. You may only enter or exit where there is a broken white line. These broken or dashed white lines are staggered here and there, allowing entry or exit every few miles, or wherever it was thought drivers might need to get out to reach, say, the Disneyland exit or some other popular freeway exit or entrance.
As you can guess, not everyone is willing to abide by these painted lines. Scofflaws will often jump into and out of the carpool lane, arbitrarily crossing that solid double-yellow line, trying to quicken their way to work or get to that happy-hour bar they are aiming for after work. Crossing into or out of the carpool lane illegally is generally a violation of California Vehicle Code (CVC) 21655.8, and if you are caught doing so, you are likely to pay a hefty fine of several hundred dollars and be given a point on your driving record by the Department of Motor Vehicles (DMV), denoting that you incurred a moving violation (as you accumulate points, you risk losing your privilege to drive on California public roads).
California law clearly states that it is illegal to cross the solid double-yellow lines of a carpool lane. I am a very law-abiding citizen and always obey the rules of the carpool lane. I glare angrily at those who illegally use the carpool lanes and anxiously look around in hopes that a highway patrol officer might pull them over. Besides giving them a ticket, I somehow also dreamily hope that maybe they might get a bit roughed up too. I say this in jest, of course, but I do think that these drivers are creating undue risk for all of us on the freeways: it is not just that they themselves are taking risks, but that they are putting all the rest of us at higher risk too. They are inconsiderate and selfish, and so whatever punishment can be applied is fine with me. Dare I say, the death penalty?
The other day, I was driving along, solo, cruising in the regular lanes of traffic. Up ahead, two cars had bashed into each other and were sitting in the middle of the freeway. No first responders had yet arrived; this was a freshly minted car accident. The cars behind this roadblock were trying to flow around the blockage, akin to a stream of water flowing around a large boulder sitting in the middle of a river. I was going to need to choose whether to flow to the left of the wreck or to the right. On the right side, cars were being squeezed into a single lane’s worth of space, pure pandemonium of drivers jockeying for position. Flowing to the left would have been much easier. The catch was that the left side was the carpool lane, at a stretch marked with solid double-yellow lines. Yes, cars were now illegally going into the carpool lane, including, I am betting, many drivers who would never ordinarily think of violating the golden rule of not crossing into the carpool lane over a solid double-yellow. They were all subject to getting tickets. There was no question that they were violating the letter of the law.
Why have I dragged you through the somewhat arcane aspects of carpool lanes in California? Because I want to bring up an important topic about self-driving cars and AI. Some proponents of self-driving cars rave that part of their beauty is that they won’t drive illegally. All those crazy human drivers who violate the rules of the road will someday be in self-driving cars that will presumably obey all traffic laws. The idea of traffic tickets will disappear. We won’t even remember the time when we needed traffic cops to issue traffic tickets. The court system will dispense with traffic courts, and we’ll all live a happy, ticket-free life. What a wonderful future!
But, I say that this utopian view is out of touch with reality. Let’s take a look at why this concept of ticket-free and always lawful self-driving cars is a rather simplistic and frankly ill-informed idea.
I’d like to revisit the carpool lane circumstance I just described. The cars were flowing around the accident scene, and some of them entered the carpool lane illegally to do so. Were they wrong to flow into the carpool lane? Well, it depends upon your perspective. There is a “necessity defense” that one can use in court to claim that an illegal driving act was necessary due to roadway conditions. For example, a driver’s wife is in the car, pregnant and about to deliver her baby, so the driver exceeds the posted speed limit to get to the hospital in time. The driver broke the law by driving above the posted speed limit. A judge can toss out the ticket if it is believed that the driver violated the law for a bona fide reason and did so with sufficient care and a lack of recklessness. This is pretty much a case-by-case type of defense, though prior precedent comes into play too.
If you are in a self-driving car, what will the AI do in these kinds of circumstances? Some want the AI system to rigorously and always obey the law. Period. No exceptions. This is certainly the easier way to write the AI system. By avoiding having to consider anything other than the stated rules of the road, the system is much easier to program. Basically, input the entire driving manual and the rules of the locale you are in, and as an AI programmer you can call it a day. This, though, seems lacking in realism, and ultimately people are going to be upset that their self-driving car is a “mindless robot” that cannot discern when to consider exceptions to the rules.
Consider the case of the cars flowing around the freeway snarl. Would your self-driving car be willing to flow into the carpool lane, crossing the solid double-yellow lines? If programmed to strictly obey the law, it would not. It would either sit on the freeway behind the traffic snarl and wait for the wreckage to be cleared, or veer to the right side of the wreckage. But suppose the right side consisted of an area that was not actually a lane but the edge of the roadway. Suppose further that it was illegal to drive on that far edge. Now what? Again, would the self-driving car opt to just sit on the freeway and wait to see what happens? If the freeway were occupied only by self-driving cars, with no human-driven cars, this could bring the entire freeway to a cascading halt. It could be a domino effect leading to miles and miles of road stoppage, all because the “mindless” self-driving cars are strictly abiding by the law.
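The contrast between a strictly law-abiding planner and one that permits a necessity exception can be sketched in a few lines of code. This is a toy illustration, not any real self-driving stack; the option fields, the risk threshold, and the ranking rule are all invented assumptions.

```python
# Toy sketch: a strictly legal maneuver planner vs. one with a "necessity"
# exception. All field names and thresholds are illustrative assumptions.

def choose_maneuver(options, strict=True):
    """Pick a maneuver from dicts with keys: 'name', 'legal' (bool),
    'risk' (0..1, lower is safer), 'progress' (0..1, flow restored)."""
    legal = [o for o in options if o["legal"]]
    if strict:
        candidates = legal
    else:
        # Necessity exception: permit an illegal maneuver only when every
        # legal option leaves traffic stalled and the illegal one is low-risk.
        stalled = all(o["progress"] == 0.0 for o in legal)
        candidates = legal + [
            o for o in options
            if not o["legal"] and stalled and o["risk"] < 0.3
        ]
    if not candidates:
        return None
    # Prefer maneuvers that restore traffic flow, breaking ties by lower risk.
    return min(candidates, key=lambda o: (-o["progress"], o["risk"]))

# The freeway-blockage scenario: waiting is legal but makes no progress;
# crossing the double-yellow is illegal but low-risk here.
scene = [
    {"name": "wait_behind_wreck", "legal": True, "risk": 0.05, "progress": 0.0},
    {"name": "cross_double_yellow", "legal": False, "risk": 0.2, "progress": 0.8},
]
print(choose_maneuver(scene, strict=True)["name"])   # wait_behind_wreck
print(choose_maneuver(scene, strict=False)["name"])  # cross_double_yellow
```

The strict planner sits behind the wreck forever; the exception-aware planner takes the low-risk illegal merge only because every legal option stalls.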
Some will counterargue that this kind of predicament is rare and occurs only once in a blue moon. I disagree. I vehemently disagree. This is not merely an “edge case” (which programmers consider a rare circumstance in the requirements and thus something that can generally be ignored or shuffled into a let’s-get-to-it-later bin). Being flexible regarding the laws is actually very common. It happens every day, with millions of drivers everywhere. Thinking of this as a rarity is like looking at driving through rose-colored glasses. Driving is messy. It is not like a video game or some idealized world.
If you buy into my claim that self-driving cars will need to be willing to drive illegally, the next aspect is when and under what conditions they will do so. We don’t want the AI to do so whenever it darned well pleases. We still need to enforce the laws to ensure that, by and large, we don’t end up with chaos and carnage on the roads. The AI needs to consider the options available when driving, including gauging whether an illegal act is warranted. Ultimately, the AI system will be responsible, or someone that made it will be responsible, or maybe we might even decide that the passengers are responsible (though this carries great controversy). At some point, the choices made to drive illegally by an AI-based self-driving car will need to be reviewed and ascertained as to whether each was a good choice or not.
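Reviewing such choices after the fact presupposes that the car records them. A minimal audit-trail sketch might look like the following; the class and field names are invented for illustration, not any real logging system.

```python
# Illustrative sketch of an audit trail for deliberate rule exceptions, so a
# later reviewer can judge whether each illegal maneuver was warranted.
# Class and field names are invented assumptions.

import json
import time

class ManeuverAudit:
    def __init__(self):
        self.records = []

    def log_exception(self, rule, maneuver, justification, risk_estimate):
        """Record an intentionally illegal maneuver and the reasoning behind it."""
        self.records.append({
            "timestamp": time.time(),
            "rule_violated": rule,
            "maneuver": maneuver,
            "justification": justification,
            "risk_estimate": risk_estimate,
        })

    def report(self):
        """Dump the trail as JSON for human (or legal) review."""
        return json.dumps(self.records, indent=2)

audit = ManeuverAudit()
audit.log_exception(
    rule="CVC 21655.8",
    maneuver="cross_double_yellow",
    justification="lane blocked by fresh accident; all legal options stalled",
    risk_estimate=0.2,
)
print(len(audit.records))  # 1
```

The point is not the mechanics but the principle: an exception the car cannot justify on the record is an exception a reviewer should treat as a plain violation.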
Being ultra-futuristic, you could suggest that the self-driving car’s AI system might ask a special AI-based real-time tribunal whether it can make an illegal driving move in a particular instance, getting pre-approved in real time to take such an act. Again, this is a somewhat utopian view and unlikely to be enacted in any reasonably foreseeable future. You could also suggest that, via machine learning, and if self-driving cars are sharing their experiences, the impetus to take one illegal act could be based on having “learned” about other times that self-driving cars performed an illegal act. This, though, is fraught with other difficulties, including whether the situation your self-driving car currently faces is truly akin to whatever was “learned” via machine learning. If the facts of your situation differ, even in a small way, the learned response from others might not be valid, and your self-driving car could make a bad choice because of it.
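The “truly akin” problem can be made concrete with a crude similarity check: reuse a learned response only when the current situation’s features sit close enough to the learned one. The features, their scaling, and the threshold below are all invented assumptions.

```python
# Minimal sketch of the "is this situation truly akin?" check: only reuse a
# learned response when the current scene is close to the learned scene.
# Feature names and the threshold are illustrative assumptions.

import math

def situation_distance(a, b):
    """Euclidean distance between two feature dicts with identical keys."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def reuse_learned_response(current, learned, threshold=0.25):
    """True only if the current situation is close enough to the learned one."""
    return situation_distance(current, learned["features"]) <= threshold

learned_case = {
    "features": {"lane_speed": 0.1, "blockage": 1.0, "oncoming_gap": 0.9},
    "response": "cross_double_yellow",
}
similar = {"lane_speed": 0.15, "blockage": 1.0, "oncoming_gap": 0.85}
different = {"lane_speed": 0.9, "blockage": 0.0, "oncoming_gap": 0.2}

print(reuse_learned_response(similar, learned_case))    # True
print(reuse_learned_response(different, learned_case))  # False
```

Even this toy version shows the hazard: the threshold is arbitrary, and a situation can be numerically “close” while differing in exactly the fact that matters.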
Consider too the consequences of a self-driving car making illegal driving moves. One possible consequence is that the self-driving car creates an accident and causes harm by performing the illegal act. If the self-driving car opts to enter the carpool lane illegally, as in my earlier example of the freeway blockage, and does so without realizing that a car legally in the carpool lane cannot slow down in time and hits it, this obviously produces a bad result. Remember too that we are not discussing a world of all self-driving cars; for the foreseeable future we are going to have a mixture of human-driven cars and self-driving cars. I mention this because if the world had only self-driving cars, in theory they could communicate with each other and try to coordinate their activities. Thus, the self-driving car trying to barge into the carpool lane could have told the approaching self-driving car legally in the carpool lane to slow down and let in the illegally acting car.
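That coordination idea, a merging car asking the carpool-lane car to open a gap, can be sketched as a toy message exchange. Real vehicle-to-vehicle protocols (e.g., DSRC or C-V2X) are far more involved; the message format and yield logic here are invented assumptions.

```python
# Toy sketch of the vehicle-to-vehicle negotiation described above: a merging
# car asks the car legally in the carpool lane to open a gap. Message fields
# and acceptance logic are invented for illustration.

def request_merge(requester_id, gap_needed_s, oncoming):
    """Ask the oncoming carpool-lane car to yield.
    oncoming: dict with 'id' and 'max_yield_s' (how long it can safely slow)."""
    granted = gap_needed_s <= oncoming["max_yield_s"]
    return {"from": oncoming["id"], "to": requester_id, "granted": granted}

carpool_car = {"id": "AV-22", "max_yield_s": 3.0}
print(request_merge("AV-7", 2.0, carpool_car)["granted"])  # True
print(request_merge("AV-7", 5.0, carpool_car)["granted"])  # False
```

Note that the carpool-lane car refuses when yielding would exceed what it can safely do, which is exactly the safety check a human driver performs implicitly.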
I’ll end this column with some added twists to this diatribe about the illegal driving of self-driving cars. The other day I drove up to an intersection controlled by a traffic signal. The light was red for me. I came to a full stop, behind the crosswalk. A man in a bright orange vest began to walk through the intersection, pushing a rolling device that was painting the crosswalk lines. He got about halfway through the intersection and then realized he could not complete his effort in the time allotted by the red light. He then held up his hand to me and the other cars waiting for the green light. We all took this to mean that we should please wait until he had finished his task.
Sure enough, our light went green, and not a single car moved forward. I was impressed at how his raised hand had magically gotten all of us to disobey the green light. Cars further behind us were likely confused about why we weren’t moving. I fully expected to hear some horns honking. It was also a bit dangerous, since cars coming from behind could see the green light and might inadvertently plow into the back of our cars.
You need to ask yourself this question: What would a self-driving car do?
The light was clearly green. A self-driving car merely “taught” to drive legally would assume it can proceed, since the light is green. Even if it knew not to hit the worker painting the street, he was now over to the side, and so the self-driving car could have proceeded without harming him. But it would not have realized that the worker wanted to make his way back across the crosswalk to apply a second coat of paint to the street surface. Can we expect our self-driving cars to figure out this kind of driving complexity? The car would need to understand the worker’s gesture, understand what the worker was doing, understand the meaning of the green light and when to go and when not to, and so on.
This is why true self-driving cars, at Level 5 (a self-driving car driven entirely by the AI, with no need for human driving intervention), are a lot harder to achieve than you might believe at first glance. We’ll need to make sure that self-driving cars know when to drive illegally, and that illegal driving is an option. It cannot be a fait accompli that self-driving cars will always and only drive legally. They need to be mavericks sometimes, but hopefully in a way that causes good and not harm. I’ll want to be there the day that a true Level 5 self-driving car gets its first ticket and its first point against its driving privileges, and then argues that the illegal act was a necessity and gets the ticket tossed and the point removed. That’s the day that the illegally driving self-driving car will have grown up.