By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor
In the Los Angeles area, we get about 10,000 earthquakes each year. Ten thousand earthquakes! Yes, at first glance it seems like a tremendous number of earthquakes, and you would assume that we are continually having to grab hold of desks and chairs to withstand the shaking. Not so. It turns out that very few of these earthquakes are of such severity that we are even aware one has occurred.
In the 1930s, Charles Francis Richter provided a handy scale that assigns a magnitude number to quantify how much power an earthquake has. His now-popular Richter scale is logarithmic and starts essentially at zero (no earthquake). A 1.0 to 1.9 is a micro-earthquake that is rarely if ever felt; a 2.0 to 2.9 is slightly felt by humans but causes no damage to buildings and other structures; a 3.0 to 3.9 is often felt but rarely causes damage; a 4.0 to 4.9 is felt by most people as noticeable shaking and causes none to minimal damage; a 5.0 to 5.9 is felt widely and causes slight damage; and so on. At the top of the scale, a magnitude of 9.0 or greater tends to cause near-total destruction in the area it hits.
For Southern California, we get several hundred quakes that are around a 3.0 each year, and just a few larger ones annually, such as about a dozen in the 4.0 range. When we get the “big” ones in a heavily populated area, perhaps in the high 4’s, that’s when you tend to hear about it on the news. The scale is not linear, so keep in mind that a 4.0 is actually significantly more powerful than a 3.0. On a linear scale, each increase in magnitude would correspond to a roughly proportional increase in impact. The Richter scale, being logarithmic, instead takes a big jump upward each time you boost the number: each whole-number step corresponds to a tenfold increase in measured ground motion. A 4.0 thus shakes with about ten times the amplitude of a 3.0, and a 5.0 shakes with about one hundred times the amplitude of a 3.0 (and the energy released grows even faster, by roughly a factor of thirty per whole-number step).
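The logarithmic arithmetic is easy to make concrete. A minimal sketch in Python, using the standard factors for the Richter scale (a factor of 10 in ground-motion amplitude per whole-number step, and roughly a factor of 10^1.5, about 31.6, in released energy per step):

```python
def amplitude_ratio(m1, m2):
    # Each whole-number step on the Richter scale is a factor of 10
    # in measured ground-motion amplitude.
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    # Released energy grows faster: roughly a factor of 10**1.5 (~31.6)
    # per whole-number step.
    return 10 ** (1.5 * (m1 - m2))

print(amplitude_ratio(4.0, 3.0))          # 10.0  (a 4.0 vs. a 3.0)
print(amplitude_ratio(5.0, 3.0))          # 100.0 (a 5.0 vs. a 3.0)
print(round(energy_ratio(4.0, 3.0), 1))   # 31.6
```

This is why a high-4 quake makes the news while the hundreds of 3.0-range quakes each year mostly pass unnoticed.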
Why would you care about the Richter scale? I want to tell you about the way we measure the capabilities of a self-driving car, for which there is a popular scale, and in some ways it is analogous to the Richter scale. The self-driving car capabilities scale was developed by the Society of Automotive Engineers (SAE) and has been variously adopted by other entities and by international and national governmental bodies, including the U.S. Department of Transportation (DOT) and its National Highway Traffic Safety Administration (NHTSA). The latest SAE standard is known as “J3016,” originally released in January 2014 and then updated in September 2016. The formal title of the standard is: “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.”
Let’s consider the nature of this SAE-provided self-driving car scale and its significance.
The scale ranges from 0 to 5, and is typically characterized as this:
Level 0: No automation
Level 1: Driver assistance
Level 2: Partial automation
Level 3: Conditional automation
Level 4: High automation
Level 5: Full automation
We don’t assign intervening units within a level; a car is referred to as, for example, a Level 2, never a Level 2.1 or a Level 2.6. In this respect the scale is unlike the Richter scale.
With this SAE self-driving car scale, the levels are the simple integers 0, 1, 2, 3, 4, 5. The lowest value, 0, means no automation, and the highest value, 5, means full automation. The values in between, namely 1, 2, 3, and 4, indicate increasingly capable automation. In that sense, the SAE scale does resemble the Richter scale: as the numbers go up, the impact or significance goes up too. The SAE scale caps out at the value of 5, the topmost value.
The Richter scale is easily measured: we can use seismographs to record the ground shaking and then state the magnitude of the quake. Unfortunately, classifying a self-driving car is not so readily determined. The criteria offered by the SAE let us roughly decide whether a self-driving car is at a particular level, but there is not enough specificity, and there is enough ambiguity, that we cannot tag a particular self-driving car as absolutely being at a particular level. SAE emphasizes that its definitions are descriptive and not intended to be prescriptive. They are also meant to be technical rather than legal (we will ultimately have lawsuits about whether a self-driving car is at a particular level, mark my words!).
We cannot say for sure that a given self-driving car is exactly at a specific level. Judgment comes into play. It depends upon what capabilities the self-driving car seems to have. We would need to closely inspect the self-driving car to ascertain whether it has the features needed to be classified at a certain level. The features might be full-on, such that we could all generally agree the self-driving car has the capability, or we might disagree about whether the features are entirely there; if they are only partially there, we might debate whether the self-driving car merits being classified at the particular level.
Here’s a bit more detail at each level:
Level 0: No automation, human driver required to operate the car at all times, human driver in full control
Level 1: Driver assistance, automation adds a layer of safety and comfort in a very function-specific manner, human driver required for all critical functions, human driver in control
Level 2: Partial automation, automation autonomously performs two or more driving tasks, such as adaptive cruise control and automated lane changing, human driver in control
Level 3: Conditional automation, automation undertakes various safety-critical driving functions under particular roadway conditions, human driver in partial control
Level 4: High automation, automation performs all aspects of the dynamic driving task but only in defined use cases, and under certain circumstances such as snow or foul weather it gives control back to the human, human driver in partial control
Level 5: Full automation, automation performs all aspects of the dynamic driving task in all roadway and environmental conditions, no human driver required or needed.
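The taxonomy above can be captured as a simple enumeration, which is handy for keeping discussions (or software that records a vehicle’s claimed capability) unambiguous. A minimal sketch; the names paraphrase the SAE labels, and the helper property reflects the point made throughout this column that only level 5 dispenses with the human driver entirely:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, 0 (none) through 5 (full)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

    @property
    def human_driver_required(self):
        # Only at Level 5 is no human driver needed under any condition.
        return self < SAELevel.FULL_AUTOMATION

print(SAELevel(2).name)                                 # PARTIAL_AUTOMATION
print(SAELevel.FULL_AUTOMATION.human_driver_required)   # False
print(SAELevel.HIGH_AUTOMATION.human_driver_required)   # True
```

Using an integer-backed enum also mirrors the fact that the levels are whole numbers with no intervening units, unlike Richter magnitudes.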
There is slipperiness in the levels 1, 2, 3, and 4, and so we will see self-driving car makers claiming their self-driving car is at one of those levels, and we’ll need to collectively debate whether they are accurately depicting the capabilities of the car. A level 0 is relatively apparent and does not require much debate, since it is a car that has no self-driving capabilities whatsoever. A level 5 is also relatively apparent (well, somewhat, as I discuss later on herein), since it is a self-driving car that can do anything a human-driven car can do.
Whenever I hear anyone talking about self-driving cars, they often get muddled because they fail to differentiate what level of self-driving car they are referring to. This is akin to referring to earthquakes without also mentioning the magnitude. If I say to you that I endured an earthquake last week, what do you think I mean? Did I experience a 9.0 that is utter destruction? Did I experience a 4.0 that is a somewhat hard shake with usually minimal damage? Or was it a 1.0 that I likely did not even feel, and I am exaggerating about what happened? You don’t know what I am referring to until I tell you the magnitude of the quake on the Richter scale. The same is the case with self-driving cars. If I tell you that I was taken around town by a self-driving car, you would be wise to ask me what level of self-driving car it was.
I was at an Autonomous Vehicle event a few weeks ago, and there were some fellow speakers arguing vehemently about the present and future of self-driving cars. One was saying that we have self-driving cars today, while the other one was saying that we are years away from having self-driving cars. Who was right and who was wrong? Well, it depends upon what you mean by the phrase “self-driving cars.” If you are allowing that a self-driving car is anything measured in the SAE levels of 0 to 5, then you could say that we do already have self-driving cars because we certainly have cars that are at the levels 0, 1, and 2. On the other hand, if you consider the only true self-driving car to be a level 5, then you would be correct in saying that we don’t have any self-driving cars today since we don’t yet have a level 5 self-driving car.
When talking with people who aren’t involved in the self-driving car industry, I have found they are apt to refer to a self-driving car and be ambiguous about what they mean. Even most regulators and legislators are the same way. I usually try to make them aware that there is a scale, the SAE scale, and then inform them about it. Otherwise, without using some kind of scale like SAE’s, you can have enormous confusion and nearly religious debates about belief in self-driving cars and doubt about self-driving cars, all because you aren’t referring to the same things. A level 5 is completely different from a level 2, and so arguing blindly about “self-driving cars” is unproductive and exasperating until you state what level of self-driving car you mean.
One aspect that is sometimes used to make it easier to understand the levels of self-driving cars involves mentioning these three factors:
- Eyes on the road
- Hands on the wheel
- Foot on the pedals
At the higher levels of self-driving cars, you presumably can temporarily take your eyes off the road, your hands off the wheel, and your foot off the accelerator and brake pedals. Up until level 5, though, the human driver is still considered the true driver of the car. Thus, even if you opt to temporarily take your eyes, hands, and feet off the controls, in the end it is you, the human, who is still responsible for driving the car. I have exhorted in many of my columns that this is a really dangerous situation: automation that suddenly hands control back to the human can catch the human unawares, and the time available for the human to react and save themselves from a deadly crash is often measured in split seconds, not enough to properly take back control of the car. Also, humans get lazy and do not treat taking their eyes, hands, and feet away from the controls as something “temporary”; they will often start to read a book or otherwise become wholly disengaged from the driving of the car (leading to great danger).
A level 5 self-driving car is presumably one of the crispest of definitions since it indicates that a car must be able to be driven by the automation in all situations without the use of a human driver. Unlike level 4, which says that if the roadway or environmental conditions are especially harsh that the automation can give up and hand control over to the human, the level 5 requires that no human driver be needed at any time for any reason whatsoever. This is the ultimate in self-driving cars. We aren’t there yet. We aren’t even close, in my opinion. Achieving a level 5 self-driving car is the nirvana and something that is very, very, very, very hard to do.
This aspect of level 5 being so hard to achieve is part of my basis for making a comparison to the Richter scale. Going from level 0 to level 1 is a significant jump, and so you might liken it to a logarithmic step up. Going from a 1 to a 2, a 2 to a 3, or a 3 to a 4, those are sizable steps too, though it might be argued they are not logarithmic in scale. Going from a 4 to a 5, it can be argued, is logarithmic. That’s because completely eliminating the need for any human driver is a really big step. A level 4 car might be pretty darned good, and you might say that it just cannot handle driving in snow or in a severe storm, but to me, until you have gotten a car to be driven fully by automation in all circumstances, it ain’t a true self-driving car.
Google has been aiming at the level 5 and knows that it is one of those moonshot kind of initiatives. They eliminated any controls within the car, in order to make a bold statement that the human driver is not only not needed to drive the car, but that the human driver cannot drive the car even if they want to drive the car (since there aren’t any controls to use). Many of the self-driving car makers are hopeful of eventually getting to a level 5 car, but for now, they are developing self-driving cars that are within the levels 2 to 4 range. Meanwhile, they have futuristic concept cars that show what the look-and-feel of a level 5 car might be in the future, but these concept cars are hollow and just something used to showcase design aspects.
Keep in mind that a self-driving car maker can skip levels if they want to do so. Some self-driving car makers are progressing from one level to the next, trying to achieve a level 2 before they get to a level 3, and a level 3 before they get to a level 4, etc. There is no requirement they do it this way. You can skip a level if you like. Furthermore, a self-driving car might have some features of a lower level and other features of a higher level, making it a mixture not readily categorized into just a particular level. As mentioned earlier, there is judgment involved in deciding whether a self-driving car has earned its claimed level. Ford has announced they are skipping level 3 and going straight to level 4, aiming to do so by the year 2021. Some self-driving car makers are predicting they will have a level 4/5 by the year 2019, but I am dubious whenever I see someone claiming a dual level consisting of levels 4 and 5, because, as stated herein, a level 5 is a different beast: you either can do a level 5 or you cannot.
Indeed, we are likely to have “false” claims about a self-driving car in terms of the level it has achieved. I put the word false into quotes because a self-driving car maker might genuinely believe or want to believe that they have achieved a level, even though others might argue that the self-driving car has not achieved that level. The word false might suggest someone trying to be sneaky or nefarious, which could certainly happen, but it could also be done due to ambiguity of the definitions. Today, for example, most would agree that the Tesla self-driving cars are at a level 2. But, some claim that Tesla’s self-driving cars are at 3. We can pretty much argue about this until the cows come home, and it is for me not much of an argument worth undertaking. We know and all agree that today’s Tesla is not a 4 and not a 5, which therefore means it is quite a bit below what we envision a true self-driving car to be.
I don’t want to seem like I am denigrating anything less than a 4. I do believe that we are pretty much going to be evolving self-driving cars from one level to the next. It makes sense to do things that way. If you are trying to bring self-driving cars to the market, you would typically bring any evolved features to the market as soon as you think you can. On the other hand, if you are doing as Google has been doing, which is more of a moonshot research project, you might feel neither the need nor the pressure to get the self-driving car into the market and thus will just keep pushing until you can get to a level 5. We have, though, seen Google changing its posture on this, perhaps realizing that getting its self-driving cars into the market sooner, rather than waiting until it reaches a level 5, might be the prudent thing to do.
For a level 5 self-driving car, some argue that the level 5 must not have any controls inside the car that would allow a human to drive a car. In other words, there isn’t a steering wheel and there aren’t pedals. There is no apparent physical means to allow a human to drive. The concept cars show that the humans are partying it up as passengers and there is no driver. The interior might have swiveling seats and the passengers can face each other, with no need to be looking forward and peering out the front windshield. The self-driving car is doing all the driving and so the interior compartment is just like a limo with no need for the passengers to care about the driving of the car.
This argument about the controls is open to ongoing debate. Suppose we did put controls inside the car; does it imply that the human driver is needed? Some say that no such implication follows. They say that humans might want to drive the car, and so they should be given the option to do so, if they wish. Providing the normal steering wheel and pedals gives the human that option. The automation could still be able to always drive the car, and there would never be a need for a human to use those controls. Perhaps for nostalgia’s sake, a human might want to drive the car, or maybe they are a car buff and just enjoy driving.
The counter-argument is that if you put controls into a level 5 self-driving car then you are asking for trouble. The human driver might opt to take over the controls from the automation, but maybe the human is drunk, or maybe the human hasn’t driven in years and is rusty in terms of driving, or maybe they take the controls over at the wrong moment just as the automation is doing a delicate maneuver. For those reasons, some say that a level 5 should never have any controls for a human driver.
There are also some who assert that maybe we go ahead and allow a human driver to drive if they choose to do so (not because they must), but without a conventional physical steering wheel and pedals; instead the human might use their voice, their smart phone, or a touch screen to drive the car. Meanwhile, the self-driving car “utopia” people suggest that if you allow humans to drive in a level 5 car, you are going to mess up the future in which all cars are self-driven. In this utopian vision, all cars will communicate and synchronize with each other via automation, and if you allow even one human to be a driver in a level 5 car, you will mess up that utopia.
One of the current falsehoods, I assert, involves the claims that self-driving cars become “safer” as you make your way up the levels. In other words, it is suggested that a level 4 self-driving car is safer than a level 3 self-driving car, and a level 3 is safer than a level 2. I think this is debatable. You need to keep in mind that all of the levels other than 5 will still have the human driver involved. Even if the automation is more sophisticated, you still have the human driver in the equation. Maybe you might claim that if the human driver is doing less as the levels get higher, the portion of the driving they aren’t doing is getting safer, and so overall the stats will show that safety has increased. This is an argument that we’ll need to see if it bears out. Also, even a level 5 cannot be seen as utterly safe per se, which I have covered in some of my other columns about self-driving cars and safety.
One technological aspect of fascination today is whether we know what kind of technology is needed to achieve a level 5 self-driving car. My column on LIDAR discussed that some believe you must have LIDAR to get to level 5, while others believe you won’t need LIDAR to get there. Tesla claims that the hardware on their latest cars, consisting of 8 cameras, 12 ultrasonic sensors, and some other sensory devices and processors, will be sufficient for level 5. We don’t yet know if this will be the case.
With the rapid advances in sensors and in processors, it could be that the hardware Tesla has today either will be insufficient to get to level 5, or might hold them back from getting to level 5. Given that they seem to be somewhat anchored to their hardware (entrenched due to investment), they might also see other more nimble self-driving car makers that adopt more modernized hardware as time evolves, and Tesla might be “stuck” with the older hardware that at one time seemed extremely state-of-the-art. We’ve seen companies do this many times in other industries, wherein they put a stake in the ground about the hardware, they get jammed up because of this, and others swoosh past them by adopting new hardware instead.
Another factor to consider about self-driving cars and their levels is whether you are referring to a pilot or prototype car, versus a self-driving car that is actually working on the public roadways. If I have a laboratory with an acre-sized obstacle course and I have my self-driving car drive it, and I claim it is a level 5 self-driving car, does that really constitute a level 5? I would argue that it does not. To me, a true level 5 is a self-driving car that can handle any situation that a human driver can handle, meaning driving in the suburbs, in the inner city, on the open road, and so on. A prototype that is able to make its way around an artificial driving course is not much proof in my book.
I would also suggest that we need the equivalent of a Turing test for self-driving cars at the level 5. Those of you into AI know that the Turing test consists of ascertaining whether you can differentiate the behavior of a system between what the AI does and what a human can do. In essence, if the system can do whatever a human can do, and you can’t ferret out that it is a machine, you could then say that the AI is exhibiting intelligence equivalent to human intelligence. This also means that you need a sophisticated human for comparison, because if you use a human that is not sophisticated, you are making a false comparison.
Likewise, for a self-driving car at the level 5, we are indicating that the automation must be able to drive in any situation that a human can. How far do we stretch this? A normal human driver is unlikely to be able to drive a car in extreme circumstances, such as on a race track at high speeds. Does the level 5 car need to be able to do that, or is it only required to do normal driving? There are human drivers who are inept at driving on ice. Does this exempt the level 5 car from being forced to show that it can drive on ice, since “humans” cannot do it either (or, at least, some humans cannot)? The nature and definition of “human driver” is itself ambiguous, and so it leaves more room for interpretation about level 5 self-driving cars. I am prepared to propose a Turing test equivalent, and if anyone wants to then call it the Eliot test for self-driving cars, I’d be honored.
In any case, now you have an appreciation for what it means to be referring to a car as a self-driving car, and let’s all be working toward the vaunted level 5.
This content is original to AI Trends.