AI Boundaries and Self-Driving Cars: The Driving Controls Debate


By Lance Eliot, the AI Trends Insider

What’s the deal about the driver controls in AI self-driving cars?

That’s one of the most popular questions I get asked when I am presenting at AI self-driving car events and Autonomous Vehicles (AV) conferences.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and the aspects of driver controls are also of crucial attention to our efforts, along with being notable for the efforts of the auto makers and other tech firms that are developing self-driving cars or so-called driverless or robot cars.

For why I prefer to refer to them as self-driving cars, see my article:

For my overall framework about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

See my article about the ethical dilemmas facing AI self-driving cars:

If you are willing to strap in and put on your seat belt, I’ll do a whirlwind tour through the nuances of the ongoing debate about driver controls in AI self-driving cars. It’s quite a story, with ups and downs, and it might leave you in tears or you might be uplifted. We’ll see.

In essence, the matter deals with whether or not AI self-driving cars should provide a steering wheel, a brake pedal, and an accelerator pedal, which I’ll henceforth refer to collectively as “driver controls.”

When I say the phrase “driver controls,” please be aware that I am really referring to “human driver controls,” and I’m going to leave out the word “human” for brevity’s sake. Nonetheless, I trust you will realize that “driver controls” refers to a set of human-usable means to steer, brake, and accelerate a car, of which the most common instantiation consists of the conventional steering wheel, brake pedal, and accelerator that we all have come to know and love (well, maybe we like those mechanisms, or maybe we are just used to them as a convenient means of directing the motions of a car, and thus by rote familiarity we have accepted them as the appropriate and right way to drive!).

I’d like to unpack this driver controls topic and will reveal to you that there is more to the matter than perhaps meets the eye.

First, look off into the future.

If you were to take a close gander at the various concept cars of the future, you’ll notice that by-and-large they are depicted as having no apparent driver controls. Indeed, we’ve all grown-up with animated cartoons that show autonomous self-driving cars and they don’t have any conventional car driving controls portrayed. Just as we have been promised jet packs that will allow us to fly around however we want, it seems too that we have been promised there will be cars without any kind of driver control contraptions.

On the surface, this seems to make sense. Why not leave the driving to the AI? It is handy to let someone else, or shall we say something else, deal with the arduous chore of driving a car. Logically, if the AI is going to be doing the driving, we can therefore assume the human is not going to be doing the driving, and therefore we can eliminate the usual driver controls consisting of the steering wheel and pedals. This all seems quite sensible and logical.

It is kind of nifty to be able to potentially get rid of those pesky driver controls. By doing so, you open up new possibilities of a major redesign for the interior of cars. Currently, suppose you had to design a car. Your immediate and obvious constraint is that you need to put towards the front of the interior a set of driver controls. It needs to be at the front of the interior so that the human driver can readily see out of the car and look at the road ahead.

You can potentially place the driver controls on the left side or the right side of the interior, though this too is constrained by the country or region in which you are aiming to have your car used. There needs to be a seat for the driver, thus another constraint: you are now allocating interior space at the front, either to the right or the left, for that seat. I suppose you could decide that the driver will stand up or lie down while driving, but this isn’t particularly acceptable for today’s cars.

So far, in your design of a car for today’s needs, you’ve already been forced to set aside perhaps one-fourth or one-fifth of the interior, doing so for a four-seater sized car. You can then try to play around with the rest of the interior design, but it is going to be hard because you’ve already got that albatross hanging around your neck of the pre-determined space consumed for the driver controls and the seating for the human driver.

I’ve seen some concept cars that have the human driver wearing a Virtual Reality (VR) headset, and thus this presumably allows the car designer to put the driver anywhere in the car. In essence, you could become a true backseat driver by donning a VR headset and having the driver controls placed at the back of the front seats of the car. Frankly, I doubt that we’ll be seeing cars of that ilk. I’d bet that we’ll be experiencing AI self-driving cars before there becomes any outcry to have VR headset driving cars by humans. I’m willing to put $5 on that wager.

Overall, the point is that the use of conventional driving controls tends to dictate what the interior design of a car must consist of. If you could eliminate the driver controls, you would have a clean slate to use the entire interior of the car in whatever way you fancied. Maybe put in swiveling passenger seats and let the occupants turn in whatever direction they like. Maybe put beds into the car and let people lie down and catch some shut-eye while in the car. You name it.

For my article about the interior of AI self-driving cars and family trips, see:

For the potential of mass scale motion sickness due to the redesign of cars as a result of AI self-driving cars, see my article:

For an exploration of the internationalizing of car designs and AI self-driving cars, see my article:

Overall, if we could somehow omit the driver controls, it would allow for a re-imagining of the interior of cars. Before we completely jump on board that bandwagon, I’ll mention a somewhat quirky aside on the matter.

Shift your mindset somewhat beyond the topic of cars per se.

Suppose that we are able to someday (soon?) build robots that look like humans and have sufficient AI to perform human-like tasks, including having legs, arms, feet, hands, heads, etc. that allow them to undertake such tasks. We see this in science fiction movies quite frequently, though for now I’d like you to think of these real-world futuristic robots as benevolent and not aiming to destroy mankind. It seems that most science fiction portrayals have the humanoid-like robots opting to wipe out humanity. I’m not going to tackle that aspect herein and will just go along with the assumption that these are robots which are truly for the benefit of us humans.

If we had robots that could walk and talk like us and had AI that was sufficiently skilled such that they could drive a car, we might then be able to keep cars designed as they are today and retain the driver controls. In essence, rather than trying to switch out today’s cars with the upcoming new-fangled AI self-driving cars, we could instead perhaps cruise along with today’s cars and still have them being driven in a “driverless” way (well, a “human driverless” way, via the robots that walk and talk).

I bring this up because today we have around 250+ million conventional cars in the United States alone. Those conventional cars are not going to overnight transform into AI self-driving cars. In fact, it is unlikely that those conventional cars can somehow be retrofitted into becoming AI self-driving cars. Overall, we’ll need to buy new cars, ones that are AI self-driving cars. What then happens to the millions upon millions of older conventional cars?

The odds are that those conventional cars will remain around for a very long time. I mention this because there are some pundits that keep referring to a Utopian world in which there are only AI self-driving cars on our public roads. I doubt this will happen for a very, very, very long time (if at all).

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial idea since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period.

For my article about the grand convergence that has led us to this moment in time, see:

For potential regulations about AI self-driving cars, see my article:

For the aspects of kits for AI self-driving cars, see my article:

For a potential Frankenstein kind of future regarding AI self-driving cars, see my article:

In any case, the idea is that if we are able to develop walking and talking robots, they presumably could then possibly act as drivers of cars, and since they are shaped the same way we humans are, they could sit in the driver’s seat and work the driver controls for us.

I did warn you that this lateral tangent would be a somewhat quirky angle on the topic.

It is admittedly a bit unnerving to consider getting into your personal car as a front-seat passenger and having your robot chauffeur get into the car with you, sitting down at the car controls and asking you where you want to go. You then sit in your passenger seat as the robot drives, and observe it looking forward, turning its robotic head as needed, and moving its robotic arms and legs to manipulate the steering wheel and the pedals. Just hope that the robot driver doesn’t opt to calculate pi or start yammering away on its smartphone and become a distracted driver (ha!). Plus, do you think it knows about Click It or Ticket (i.e., the National Highway Traffic Safety Administration’s national campaign for wearing your seat belt)?

I’ll make another wager and assert that we won’t have those kinds of robots sooner than we will have AI self-driving cars, and therefore there really is not much of an incentive to hold-off working on making AI self-driving cars due to the belief that perhaps we’ll have driving robots instead.

Some of my former university research colleagues keep bugging me with the notion that this whole effort to make AI self-driving cars is a misuse of resources and a misguided effort. They emphasize that if we all put the same energy toward making humanoid robots that had AI-based driving skills, we really would not need to change anything at all about cars. Leave cars as they are, they contend. Focus on the “real” problem of making robots. Furthermore, they point out that you can use these humanoid-like robots for a lot more efforts to benefit mankind than a “lousy, single-focused AI self-driving car” purpose.

Should we all drop our AI self-driving car efforts and switch over to making robots, ones that could drive our cars, along with those robots being able to clean our houses, run our errands, and possibly even go to work for us and do our jobs at the office? I suppose it is tempting. But, I don’t think it is going to dampen anyone’s efforts on the AI self-driving car front.

I believe I’ve sufficiently and “fairly” covered enough about that twist in the plot and will continue unabated about AI self-driving cars that won’t be driven by walking and talking robots.

As discussed, it would be handy if we could remove the driver controls from the interior of a car, freeing up the interior space. But, if we remove those controls, we’d better be sure of what we are doing. This takes us to the crucial aspect that there are varying levels of AI self-driving cars.

Driver Control Implications of Six Levels of AI Self-Driving Cars

I’d like to introduce the notion that there are six levels of AI self-driving cars, ranging from Level 0 to Level 5. The topmost level is Level 5. A Level 5 self-driving car is one that is driven by the AI, and there is no human driver involved when the AI is driving the self-driving car.

For the design of Level 5 self-driving cars, some of the auto makers are even removing the gas pedal, brake pedal, and the steering wheel (the human driver controls), since those auto makers believe that those contraptions are not only unnecessary but furthermore generally unwise to provide. The assertion is that the Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

I’ll be revisiting this quite important aspect in a moment about Level 5, so be ready to get into the details, thanks.

A Level 4 self-driving car is somewhat akin to a Level 5, but it is more limited in that it will only be able to drive the car in certain pre-designated circumstances. These circumstances are more formally referred to as Operational Design Domains (ODD).

The ODDs can include such facets as stipulated geographic aspects (akin to geofencing), roadway restrictions, environmental restrictions (for example, rain, snow, sleet, etc.), traffic restrictions (such as operating only when traffic is light rather than crowded), speed restrictions (a popular one being no speeds over 35 miles per hour), temporal restrictions, and so on.

Any combination of those factors can be established as an ODD. Perhaps an auto maker says that their Level 4 self-driving car will only operate within a 5-mile radius of downtown Los Angeles (a geographic restriction), only if the traffic is mild (such as not at the peak hours of the morning and evening commutes), and will go no faster than 35 miles per hour.

Thus, their ODD is this: 5-mile radius + mild traffic + 35 mph. Some other auto maker might use that same ODD, or they might define their own ODD of a 2-mile radius, any kind of traffic, and a max allowed speed of 25 mph.
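
To make the idea concrete, here is a minimal sketch of how an ODD might be represented as a data structure with a check against trip conditions. The class, field names, and thresholds below are all invented for illustration; actual auto makers’ ODD encodings are proprietary and far richer.

```python
from dataclasses import dataclass

@dataclass
class OperationalDesignDomain:
    """Hypothetical ODD: the fields mirror the example restrictions above."""
    max_radius_miles: float      # geographic restriction from a home point
    max_speed_mph: float         # speed restriction
    allows_heavy_traffic: bool   # traffic restriction

    def permits(self, radius_miles: float, speed_mph: float,
                heavy_traffic: bool) -> bool:
        """Return True only if the trip conditions fall inside the ODD."""
        if radius_miles > self.max_radius_miles:
            return False
        if speed_mph > self.max_speed_mph:
            return False
        if heavy_traffic and not self.allows_heavy_traffic:
            return False
        return True

# Auto maker "X" style ODD: 5-mile radius, mild traffic only, 35 mph cap
odd_x = OperationalDesignDomain(5.0, 35.0, False)
print(odd_x.permits(radius_miles=3.0, speed_mph=30.0, heavy_traffic=False))  # True
print(odd_x.permits(radius_miles=3.0, speed_mph=30.0, heavy_traffic=True))   # False
```

A second auto maker’s ODD would simply be another instance with different values, which is exactly why two Level 4 cars can behave so differently.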

The auto maker or tech firm that provides a Level 4 self-driving car would presumably let you know beforehand the ODDs or circumstances under which the AI can drive the car. As a potential passenger in such a self-driving car, you would certainly want to know under what circumstances it can and cannot do so. Perhaps, for example, the Level 4 self-driving car can handle sunny-weather driving but is not able to undertake driving in snowy conditions.

Some industry members are worried that the public might get confused by the fact that the various versions of Level 4 self-driving cars will each have their own respective set of ODDs, as designated by the auto maker or tech firm.

In other words, suppose an auto maker we’ll call X has come out with a Level 4 self-driving car that can handle self-driving in sunny weather (that’s an ODD they could define), yet it is not able to self-drive in snowy weather (another ODD they could define). Meanwhile, a different auto maker, call them Y, comes out with a Level 4 self-driving car that can also drive in sunny weather but only on highways and not on narrow city streets (another ODD they could define), and can self-drive in mild snowy conditions but not severe ones (yet another ODD they could define).

We would now have two different self-driving cars, one by auto maker X and one by auto maker Y, both considered Level 4, each of which has a proprietary set of ODDs under which the respective self-driving car will operate.

Suppose you are standing at the curb and have called for a ridesharing service to provide you with a self-driving car to get you to the store. One of the Level 4 self-driving cars pulls up at the curb to pick you up. Is it going to be able to drive you around or not? Well, if it is the Level 4 self-driving car provided by auto maker X, you’d better not be headed into snowy conditions. If it is the Level 4 self-driving car provided by auto maker Y, you’d better not be headed into narrow city streets.

For my article about ridesharing and AI self-driving cars, see:

For AI self-driving cars operating non-stop 24×7, see my article:

For my article about OTA, see:

It’s like the famous line in Forrest Gump: like a box of chocolates, you never know what you are going to get. The Level 4 is handy because it allows for essentially a more limited version of a Level 5, and presumably allows auto makers and tech firms to progress from perfecting the Level 4 to then ultimately arriving at a Level 5. But this also introduces confusion and potential danger, in that any Level 4 self-driving car is going to potentially differ from any other Level 4 self-driving car in terms of the nature and range of ODDs or circumstances under which the self-driving car can actually be self-driving.

Imagine a slew of Level 4 self-driving cars, some of them by different auto makers, so the ODDs or allowed driving circumstances differ in that respect. Furthermore, auto maker X comes out with their newest Level 4 self-driving car, an improved version of their older Level 4 model, and it turns out that the newer model has a new set of ODDs that differ from those of the prior model.

I realize that some of you will assert that auto maker X might presumably update the older model via OTA (Over The Air) electronic patches, but we don’t know that this will always be the case. In other words, it could be that the older model has various hardware limitations such that no matter what software updates we might provide, it is still stuck at the prior set of ODDs.

We could then end up with Level 4 self-driving cars from even the same auto maker that differ in terms of the ODDs or circumstances under which the respective models can drive. This adds more confusion to the situation. It would be one thing if you somehow knew or were told that auto maker X’s Level 4 self-driving cars can only drive here but not there. Thus, when you saw one of those auto maker X branded Level 4s, you’d know what to expect. It won’t necessarily be that way. Ouch!

For self-driving cars at Level 3 and below, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Here are the usual steps involved in the AI driving task:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
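
The steps above can be sketched as one processing cycle. Every function and class below is a placeholder invented for illustration; a real self-driving stack is vastly more complex and none of these names reflect any actual system’s API.

```python
class Sensor:
    """Stand-in for a camera, radar, or LIDAR unit reporting one reading."""
    def __init__(self, value: float):
        self.value = value

    def read(self) -> float:
        return self.value

def fuse(readings):
    # Sensor fusion stub: combine readings into one obstacle-distance estimate
    return sum(readings) / len(readings)

class WorldModel:
    """Virtual world model holding the latest fused estimate."""
    def __init__(self):
        self.state = None

    def update(self, fused: float) -> None:
        self.state = fused

def plan_actions(world_model: WorldModel) -> str:
    # Action planning stub: brake if the fused obstacle distance is small
    return "brake" if world_model.state < 10.0 else "cruise"

def to_car_commands(plan: str) -> dict:
    # Car controls command issuance stub
    return {"brake": 1.0 if plan == "brake" else 0.0}

def driving_cycle(sensors, world_model):
    readings = [s.read() for s in sensors]  # 1. sensor data collection & interpretation
    fused = fuse(readings)                  # 2. sensor fusion
    world_model.update(fused)               # 3. virtual world model updating
    plan = plan_actions(world_model)        # 4. AI action planning
    return to_car_commands(plan)            # 5. car controls command issuance

print(driving_cycle([Sensor(8.0), Sensor(6.0)], WorldModel()))  # {'brake': 1.0}
```

The point of the sketch is simply the ordering: commands to the car controls come out only at the end of the cycle, which is why the question of who ultimately owns those controls matters so much.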

We are now ready to focus herein on the driving controls aspects for the various levels of AI self-driving cars.

For Levels 3 and below, there is no question that the driver controls are absolutely needed. The formal SAE definition itself makes abundantly clear that the “fallback” driver must be a human driver. This notion of a fallback refers to the aspect that if the AI decides it can no longer handle the driving task, it can make a request to the present human driver to take over; likewise, if the human driver believes there is a need to take over the driving task from the AI, they can choose to do so.

As mentioned earlier, this is a form of co-sharing of the driving task and one that bodes for quite untoward times ahead for the AI self-driving car field. This is also why some auto makers are aiming to skip the Level 3 and instead are aiming solely at the Level 4 and Level 5. In their view, they don’t want to get mired in the co-sharing mess that might well be looming upon us.

For the concerns about human response times to taking over the driving task, see my article:

For the product liability aspects that AI self-driving car makers are going to face, see my article:

For the potential of booming lawsuits related to AI self-driving cars, see my article:

I trust that you agree with me that the driver controls are needed for Levels 3 and below. This then brings us to the matter of Level 4 and Level 5.

For a Level 4 self-driving car, as mentioned earlier, it will have whatever pre-determined ODDs or circumstances under which it is considered self-driving, as defined by the particular auto maker or tech firm that makes that particular Level 4 self-driving car.

Should a Level 4 self-driving car have human driving controls?

You might at first glance say that it should not. In theory, you should let the AI drive the Level 4 self-driving car. That seems to be the reason to have a Level 4.

But, I ask you this: suppose you are using a Level 4 self-driving car that, per auto maker X, cannot self-drive in snowy conditions. Returning to my earlier example, you have gotten into the Level 4 self-driving car and it is driving you to the store, which is about 15 miles away. During the drive over to the store, snow begins to fall from the sky. Oops!

Now what?

Presumably, according to Level 4, the AI can issue a request for a human driver to intervene and take over from the AI. The human driver might not do so for various reasons: maybe there isn’t a human driver present in the self-driving car at that moment, maybe there is a human driver present but they don’t want to take over the driving task, or maybe there aren’t any humans at all in the self-driving car and it is merely on a journey to some other location. If for whatever reason the driving isn’t taken over by a human driver, the Level 4 will perform a fallback operation and try to place the self-driving car into what is called a minimal risk condition (such as being parked at the side of the road).
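
The takeover-versus-fallback decision just described can be sketched roughly as follows. The function and enum are hypothetical illustrations of the logic, not anything from the SAE standard itself; note that handing off also presupposes the car actually has driver controls installed, which is precisely the point of contention.

```python
from enum import Enum

class Outcome(Enum):
    HUMAN_TAKEOVER = 1
    MINIMAL_RISK_CONDITION = 2   # e.g., pull over and park at the roadside

def resolve_odd_exit(human_present: bool, human_accepts: bool,
                     controls_installed: bool) -> Outcome:
    """When the AI reaches an ODD boundary, hand off only if a human driver
    is present, willing to drive, and the car has driver controls; otherwise
    fall back to a minimal risk condition."""
    if human_present and human_accepts and controls_installed:
        return Outcome.HUMAN_TAKEOVER
    return Outcome.MINIMAL_RISK_CONDITION

# A willing, present driver in a car with no driver controls still ends up
# parked at the roadside:
print(resolve_odd_exit(True, True, False))  # Outcome.MINIMAL_RISK_CONDITION
```
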

Let’s assume that you were in the self-driving car and you are a licensed driver, and the Level 4 self-driving car has asked you to intervene because the snow is falling. Suppose you figure that yes, you’ll be happy to go ahead and drive in the light snow, since you grew-up in the Colorado mountains and are quite proficient at driving in the snow.

You inform the AI that you’ll gladly take over the driving controls. Wait a moment: I just said that you’ll take over the driving controls! If Level 4 self-driving cars are allowed to omit driver controls, how are you going to take over controls that aren’t there?

Proponents of Level 4 would say that if the auto maker or tech firm had decided to not include driving controls in the Level 4 self-driving car, it’s not a big deal, because you as the human driver don’t need to take over the controls. It is not a necessity that you be ready to take over the controls. It is a nice-to-have, but it is not mandatory.

As such, even if you were in that Level 4 self-driving car and there weren’t any driver controls, the AI would presumably be aware that you cannot take over the driving, and as such it would perform the fallback to reach a minimal risk condition.

You are now seated in a Level 4 self-driving car that is parked at some edge of a street or highway, and you are stranded because your AI self-driving car has reached the boundaries of what it can do. It cannot in this case handle driving in snowy conditions, and so it has properly brought the self-driving car to a reasoned halt.

Stuck Sitting And Watching the Snow Fall

How do you feel about those apples?

I dare say you might be irked. If you are a licensed driver and ready to drive, and if you wanted to drive when the Level 4 had reached the end of its ODD set and could no longer self-drive, you are out of luck, buddy. You would sit there and watch the snow falling. You would likely be hoping that the snow will soon stop and thus the AI would re-engage and indicate it can continue the driving journey.

I realize that the Level 4 proponents who say the Level 4 should not have driving controls would argue as follows: suppose the situation involved a child in the self-driving car and there wasn’t a human driver available. In that case, having the driving controls would be useless anyway. Or, suppose a human driver was available but they were drunk and should not be driving. In that case, perhaps we’d all be thankful there aren’t any driving controls in that Level 4 self-driving car.

And so it goes. Each side to this coin has its merits.

You can try to make the case that there should be driving controls in a Level 4, under the belief that when the AI reaches the boundaries of what it can do, there is then the chance that a human driver available in that self-driving car, presumably properly licensed, sane, and willing to drive, can proceed to do so.

Or, you can make the case that the Level 4 self-driving car should not have any driving controls, which will then ensure that you don’t get some nutty human driver that opts to take over the controls, plus such controls would not be useful for situations wherein there are only children in the self-driving car or no humans in it, and so on.

There are some AI developers who say that the range of ODDs or circumstances under which a Level 4 will be able to drive the self-driving car will be so wide and deep that it will be a rarity for a human driver to ever need to take over the driving. In essence, if the ODDs cover driving in sunny weather, rainy weather, snowy weather, daytime and nighttime, and highways as well as city streets, the idea of a human driver being needed would seem rather farfetched. It just won’t likely occur, they would assert.

For my indication of egocentric AI developers, see my article:

For the idea of singularity and AI self-driving cars, see my article:

For a Turing test for AI self-driving cars, see my article:

I’d almost buy into that logic except that we don’t know it to be true that Level 4 self-driving cars will have such a robust range of driving capabilities. I would bet that the first rollouts of Level 4 self-driving cars are not going to have a robust range. Instead, it will be a much narrower range. It will likely be years before we have Level 4 self-driving cars that can meet this claim of being superman-like drivers that can cope with all the usual kinds of ODDs or circumstances.

Meanwhile, we are back to my point that once the AI reaches the boundaries of its Level 4 set of ODDs, if we have removed the driving controls then you don’t have any chance of having a human driver take over. At least if you had the driving controls, you could presumably then have the chance of a human driver taking over, albeit I realize that the human driver might not be the “best” option per se, depending upon the situation at hand. It could be that some kind of litmus test could be crafted to ascertain whether the human should be able to take over the driving controls, though this is another can of worms as to what that litmus test might be and whether it is somehow biased or inappropriate.

Suppose we then all agree that Level 4 self-driving cars should have driving controls in them. Unfortunately, this has adverse consequences and does not solve all of our problems.

By having the driving controls in a Level 4, it takes us back onto the rather untoward turf about the potential of co-sharing the driving task. As already mentioned about Level 3, the co-sharing is dangerous due to the need to convey and communicate between the AI and the human about the driving effort. A wrong hand-off can lead to life-or-death consequences.

You could make the same case about a Level 4 that has driving controls available for a human. On the one hand, you might say that it is entirely different from a Level 3, because the AI engaged in a Level 4 is not going to just routinely hand over control to the human driver. The AI would do so only if it determined it had reached the boundary of its ODDs, and even then it doesn’t actually need a human driver, since the AI will otherwise perform the fallback to a minimal risk condition; alternatively, the human might opt to disengage the AI and take over the self-driving car.

Here’s where there is a kind of loophole, though. Suppose the human in the Level 4 self-driving car opts to disengage the AI and takes over the driving, which they can presumably do if there are driving controls. This driver drives for a mile and then decides to re-engage the AI. The AI drives for a little while, and the human decides to disengage it again. The human keeps doing this, opting to sometimes use the AI and sometimes not. This is obviously a kind of co-sharing driving effort, though a rather disturbing and abrupt driving practice.
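
If designers did want driving controls in a Level 4 yet wished to discourage this ping-pong pattern, one hypothetical mitigation would be to require a minimum dwell time between mode switches. This is purely an illustrative sketch under that assumption; no standard or auto maker is claimed to mandate such a guard.

```python
class EngagementGuard:
    """Hypothetical guard: allow switching between AI driving and human
    driving only if the current mode has been active long enough."""
    def __init__(self, min_dwell_seconds: float):
        self.min_dwell = min_dwell_seconds
        self.last_switch = None  # timestamp of the last permitted switch

    def may_switch(self, now: float) -> bool:
        """Permit a mode switch at time `now` (seconds) only if at least
        min_dwell seconds have passed since the last permitted switch."""
        if self.last_switch is None or now - self.last_switch >= self.min_dwell:
            self.last_switch = now
            return True
        return False

guard = EngagementGuard(min_dwell_seconds=60.0)
print(guard.may_switch(0.0))    # True  (first switch is allowed)
print(guard.may_switch(30.0))   # False (only 30s since the last switch)
print(guard.may_switch(90.0))   # True  (a full minute has elapsed)
```

Whether such a throttle would be safe in practice is debatable, since it would also delay a legitimate human takeover, which is the very trade-off the debate keeps circling.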

If you don’t have driving controls in the Level 4 self-driving car, the human cannot try this kind of ping pong match of alternating between AI self-driving and human driving. Instead, the human is relegated to relying entirely upon the AI and will just need to live with the aspect that at times the Level 4 is going to reach its ODD boundaries and come to a reasoned halt.

Some would say that having driving controls in a Level 4 self-driving car is going to be like “click bait” (you know, click bait is when on the web they try to trick you into clicking on a web site or link). People inside a Level 4 self-driving car are going to be tempted to take over the driving. You know they will. For example, Joe might not like how the AI is driving the self-driving car; maybe it is taking him to work, and he is already late, so Joe takes over the driving to speed along at 90 miles per hour, doing so illegally, which the AI presumably would not do.

By providing the driving controls, you open the can of worms to essentially explicitly proclaim to humans in a Level 4 self-driving car that there are times that they might want to drive the car or maybe even need to drive the car (such as when parked at the side of a road due to the emerging snowy conditions).

Is that the message we want to send?

But, without driving controls, you are then knocking out of the picture the possibility of a human driving, doing so in situations wherein we might all agree that it would have been handy to allow a human to have driven that Level 4 self-driving car.

When a Self-Driving Car Gets into an Accident

One such situation involves when a self-driving car gets into an accident of some kind. With conventional cars, you often have a chance at maneuvering a damaged car out of an accident scene by using the steering wheel and perhaps the brakes and accelerator. Without any driving controls in a Level 4, many in law enforcement, fire departments, and auto towing are concerned about how a self-driving car is going to be moved out of the way.

For more about accidents and AI self-driving cars, see my article:

For accident scene traversal, see my article:

I realize this is a somewhat dizzying matter of trying to decide whether driving controls should or should not be allowed for Level 4 self-driving cars.

I’ll bet that you are ready now to consider the Level 5 self-driving cars and whether they should or should not have driving controls. If you are hoping that the Level 5 is a much easier case, well, as mentioned earlier, you might experience some tears rather than joy when I say that it is not an open-and-shut case either.

Let’s first establish that the Level 5 is unlike the Level 4 in that the Level 5 must not have any ODD-specific circumstances and therefore is considered “unconditional” in terms of self-driving capabilities. This would seem to mean that the Level 5 is always going to be able to drive the self-driving car. But, there is an important small-print caveat that many fail to cite or remember.

As per the SAE standard, a Level 5 self-driving car is supposed to be able to drive the car for all driver-manageable on-road conditions. Notice that there are then two important caveats. One caveat is that the driving circumstances must be driver-manageable. The other caveat is that the driving situations apply only to on-road and not off-road occasions.

We can argue somewhat about what is a driver-manageable circumstance.

I remember that when I was younger, I accidentally slid off a mountain road in my car due to unseen black ice and plowed into a small snow bank. After recovering from the shock of having the car slide without my being able to control it, I tried to back the car out of the snow. I tried and tried but couldn’t get it to happen. Fortunately, a highway patrol car came along, and two officers got out of their vehicle to take a look at my predicament. At first, they were going to try and tow me out. One of them bet the other that he could drive it out.

Sure enough, the brazen braggart was able to drive my car out of that snow bank. As the two officers drove off, I’m pretty sure that one was giving the other a few bucks after having lost their own personal bet on whether it was possible to drive out or not. I certainly was happy. I didn’t have to wait for a tow truck. I didn’t have to pay for a tow truck. I unquestionably didn’t feel disappointed in myself that I couldn’t drive out my car, since I figured this highway patrol officer had done this a million times and had honed his driving skills for just this kind of moment.

This highlights the difficulty in deciding what is a driver-manageable circumstance. I was sure that my car was stuck in that snow bank and could not be driven out. The other officer was also sure that my car could not be driven out, and he knew a lot more about snow related driving than I did. Yet, it was indeed possible, as we found out when the bragging officer was able to accomplish it.

My snow bank example also encompasses the other caveat of Level 5 self-driving cars, namely that the driver-manageable aspects apply to only on-road situations. Obviously, in my snow bank immersion, I had gone off-road. So, I was in a predicament that covered both being perceived as not driver-manageable and that was unarguably off-road.

Could an AI self-driving car of a Level 5 have been able to drive out of that situation? Per the definition of Level 5, it is not required to be able to do so. Presumably, the AI could be written such that it will try to drive in situations outside the scope of the definition, but there’s not a requirement that it do so.

I also don’t want you to get misled into thinking that the driver-manageable aspects are only pertinent when an off-road situation arises, which maybe my snow bank example implies.

Here’s another personal example of a driving situation, one that solely illustrates the driver-manageable aspects. I went to a college football game one day and opted to try and park my car in a “fake” parking lot that someone had ingeniously devised to try and make a few extra bucks. It was actually an asphalt basketball court, but an enterprising local had turned it into a parking lot for those desperately searching to find a place to park their car.

The money-making parking lot attendant waved me into an available spot and I thought at the time that I had gotten quite lucky. This impromptu parking lot was just a block away from the stadium. People parking in the stadium’s official parking lot were paying more for their spots and were parking much further away from the actual stadium itself.

After getting settled into my seat for the football game, I got an urgent call that required me to leave the game early. I hurried back over to the impromptu parking lot. When I got there, I stood for a moment in dismay and disgust. The money-maker had packed that asphalt parking lot in such a manner that it maximized his income, but it made things seemingly impossible to get a car out of the pack. I appeared to be completely boxed in.

I suppose that if I had come out once the football game was over, there would be other drivers also wanting to get their cars out of the parking area, and so gradually we could have all figured out a means to undo the parking squeeze-in. In this case, I was the only one that wanted to leave early. The parking lot attendant was nowhere to be seen. He had made his money and hightailed it out of there (probably wasn’t even his property to use anyway).

I considered my car to be not driver-manageable in terms of getting it out of that tight knit parking puzzle.

Stuck in the Fake Parking Lot

In my mind, the only means to get my car out would be to have a helicopter come overhead and lift it out. As I pondered what to do, a passerby came along and saw me standing there with a look of angst on my face. He stopped to chat and upon sizing up the situation, he told me that he estimated that I could drive the car out – it would require painstaking steps of inching back-and-forth, along with his guiding me by standing outside of my car and waving his hands to signal me to turn the wheel.

I was a bit worried since my car was ding-free and a relatively new car. I thought that maybe he was right that it could be driven out, though I was also pretty sure that I would end-up banging against the other cars in doing so. Turns out that the good news is that with his help, I was able to extricate my car, which took nearly 30 minutes to do. I realize that’s a long time to get your car out of a rather miniscule parking lot, but the football game was going to last for another two hours and so it was worthwhile to get my car out when I did (the alternative, waiting for the end of the game, would have been much longer).

I wasn’t off-road. I was on a very navigable asphalt lot. My hitch was that I didn’t believe the situation was driver-manageable.

Driver-manageable as a measurement is somewhat in the eye of the beholder.

Alright, let’s now consider the matter of whether a Level 5 self-driving car should or should not have driver controls.

You might say that a Level 5 self-driving car should not have driver controls. This belief would be based on the notion that there are no ODDs per se for a Level 5 self-driving car, and thus it would seem that the AI will always be able to drive that self-driving car. If that’s the case, there’s no need for human driver controls.

Also, by removing the driver controls, you are making it clear to any human occupant of the self-driving car that they are fully at the hands of the AI. There is no chance of them trying to take over the controls. Sit back and enjoy the ride, you human.

It also frees up interior space. As mentioned earlier, an auto maker can redesign the interior and make it much more conducive to a wider range of human endeavors, more so than a conventional car or any kind of car that otherwise has driver controls in it.

Rather convincing.

Why would anyone consider putting driver controls into a Level 5 self-driving car?

As mentioned earlier, the AI for a Level 5 only has to be able to handle “driver-manageable” driving situations. Suppose there are situations that appear to be not driver-manageable, and yet could be driven by a human? Likewise, the Level 5 does not need to be able to deal with off-road situations, and yet those off-road occasions might actually be drivable.

In the story of my parking lot spectacle, it could be that the AI might have informed me that my Level 5 self-driving car was stuck in the pack and there was no means to get it out (let’s pretend it was a Level 5 self-driving car). That’s a fair assessment by the AI and meets the definition of a Level 5 self-driving car in terms of driving capabilities. I would have been stuck with a multi-ton paperweight and would have had to wait until the football game ended and people came to move their cars.

In the story of my snowbank debacle, it could be that the AI might have informed me that my Level 5 self-driving car was stuck in the snow and it was considered off-road. That’s a fair assessment by the AI and meets the definition of a Level 5 self-driving car in terms of driving capabilities. I would have likely had to wait for a tow truck to yank my car out of the snow or perhaps get the officers to try and tow me out of the snowbank with their patrol car.

Notice though that in both of those stories, in the end, a human driver was actually able to drive the car successfully. If the car had been an AI self-driving car of a Level 5, the topmost level of the scale, and if there had not been any driver controls available, no human driver would have been able to drive the stuck car out of its mired position.

I am sure that some pundits will be yowling that my examples are obscure, and they perhaps overplay the aspects of what constitutes driver-manageability and also what constitutes off-road. I’m not so sure their claims in that regard are quite so valid.

There are some too that say that a Level 5 is not precluded from driving while off-road, meaning that the definition only says that it does not have to be able to do so. Furthermore, we can presumably argue until the cows come home as to what a driver-manageable situation consists of. Pundits would say that if the average adult driver can drive it, that’s considered a driver-manageable situation.

Also, pundits would say that with the use of OTA, it would be feasible that your Level 5 self-driving car might be able to get a downloaded system patch or special update that would allow it to drive like the highway patrol officer. In other words, in my case as a normal adult driver and not being able to drive out of the snowbank, presumably a bit of added program code could be downloaded into my Level 5 self-driving car and the add-on would make my Level 5 self-driving car as proficient as the highway patrol officer.

In that case, one could argue that AI self-driving cars will one day be better, likely much better than the average adult driver and be able to negotiate all sorts of seemingly non-manageable driving situations due to having superior AI driving skills.

Yes, it could be that ultimately and eventually we’ll have AI self-driving cars that are masters at the driving task. I’d dare say though that we are not going to get there right away. Instead, there will be purported Level 5 self-driving cars that are not particularly as robust a driver as the average adult driver and therefore we’ll be witnessing those Level 5 self-driving cars get into situations that are human driver manageable but not strictly-speaking AI driver-manageable.

We should also add to this argument that just as with Level 4 self-driving cars, if a Level 5 self-driving car gets into an accident, it is unclear whether the Level 5 is going to be able to extricate itself from the accident. Is that considered a driver-manageable task?

Most would say that once an AI self-driving car has gotten into an accident, it is no longer bound by the rules of being driver-manageable. If that’s the case, how will first responders deal with the Level 5 self-driving car? Might the now-paperweight Level 5 self-driving car be a hazard to others and to traffic, even if it is perhaps still drivable, just not so by the AI?

I’ll also chuck into this debate the aspect of whether the AI self-driving car at a Level 5 or even a Level 4 might have some kind of bug in it that makes it undrivable by the AI. You might say that the Level 4 or Level 5 is supposed to be able to self-diagnose and realize it is in a buggy state and then aim toward the fallback and a minimum risk condition.

This ability to even realize it needs to do a fallback could itself be part of the bug. Likewise, the bug could be in the system portion that is trying to get the self-driving car into a minimum risk condition. Under such circumstances, it might be handy to have driver controls, albeit with the earlier issues again arising as to whether there is a proper human driver available to make use of them.

For various kinds of bugs that might be in AI self-driving cars, see my article:

For the debugging of AI self-driving cars, see my article:

For the dangers of a freezing up AI self-driving car, see my article:

For cognitive AI aspects that can go awry, see my article:

Have I now convinced you that the idea of not having driver controls in a Level 5 self-driving car is at least something worthy of re-considering?

I ask this because there are some that are absolutely convinced that a Level 5 self-driving car should not have driver controls. When you ask them why they believe this, they usually don’t really know. It just seems that they’ve bought into a futuristic theme of not having driver controls. For those of you that have more carefully pondered the pros and cons of having driver controls, I applaud you for trying to figure this out with some logic and aplomb.

Let’s shift the discussion now towards other relevant tangents associated with this question about driver controls.

Some might say that if you were to allow for a remote operator that could drive an AI self-driving car, you would not need driving controls inside the self-driving car and yet could still allow for a human to drive the self-driving car when needed. This then seems to solve our dilemma. You leave out the driving controls that are normally within the self-driving car, which frees up the interior space and effectively proclaims to the human occupants that they are not going to be driving the car, ever.

Sorry to say that I don’t buy into the remote operator approach. There are too many downsides. In what circumstances would the remote operator be able to take over the driving control? Can the remote operator really properly be able to “see and sense” the surroundings of the self-driving car to be able to suitably and safely drive the self-driving car? If the self-driving car is otherwise already damaged, would the remote operator have full sensory capability at-hand? Suppose the self-driving car is out of range of electronic communications and the remote operator cannot connect with it? And so on.

I’ve debunked the remote operator as a savior concept and have doubts that it will become a big thing, though I realize that some pundits are already claiming it will be and are hiring like mad to staff-up thousands upon thousands of remote driver operations for self-driving cars. We’ll see how that pans out.

For my article about remote piloting of AI self-driving cars, see:

Who knows, maybe your next job will be as a remote driver operator for a rapidly expanding AI self-driving car company. Good luck on that.

For future jobs due to AI self-driving cars, see my article:

Okay, let’s consider some other alternatives about the driving controls conundrum.

We have so far assumed that driver controls consist of a steering wheel, a brake pedal, and an accelerator pedal. Those are the customary kinds of driver controls.

Reinventing the Self-Driving Car Driver Controls

Maybe we should reinvent the driver controls.

Let’s imagine that the AI self-driving car of a Level 5 has parked in that basketball court asphalt parking lot. It did so when it was easy to park there, since there weren’t other cars blocking it or making it hard to do so. You come out to your Level 5 self-driving car and plead with it to try and extricate itself from the parking lot. It tells you that it cannot do so and has gotten boxed in. The AI explains to you that the self-driving car is now in a non-manageable driving situation.

Remember when I told the story earlier and I had mentioned that a passerby helped me to get out of the parking lot, doing so by giving me a series of inch at a time instructions?

We can use that same concept with the Level 5 self-driving car.

Perhaps the AI ought to let a human provide driving instructions to it, allowing a human-assisted or human-guided driving act to occur when the AI has gotten stumped on a driving task. You might for example stand outside of your Level 5 self-driving car and tell it to back-up an inch, then tell it to stop, then tell it to move the wheel to the right, then stop, and so on. You could converse with the AI to help provide an indication of how to drive the car.

In that manner, even if the AI had reached the end of its driver-manageable capability, it could be augmented by a human. Is the human now driving the car? Not really. The AI is still driving the car. In that sense, we might say that the AI is still “responsible” for the driving act. I mention this point because once we’ve opened the Pandora’s box of having the human instruct the AI on how to drive, we are confronted with all sorts of other potential problems. Suppose that the human tells the AI to drive the self-driving car into a wall or over a cliff?
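To make this concrete, here’s a small sketch in Python of how such human-guided driving might be arbitrated. Everything in it — the command names, the clearance threshold, the function interface — is my own assumption for illustration, not any actual automaker’s design. The key point it shows is that the AI, not the human, retains the final say:

```python
# Hypothetical sketch: the AI accepts inch-at-a-time human guidance but
# remains the responsible driver, vetoing any suggested move that its
# own sensors deem unsafe. All names and thresholds are illustrative.

UNSAFE_CLEARANCE_M = 0.05  # reject moves with under 5 cm of sensed clearance

def execute_guided_command(command, sensed_clearance_m):
    """Apply a human maneuver suggestion only if the AI judges it safe.

    command: one of "forward_inch", "back_inch", "wheel_left",
        "wheel_right", "stop" (an assumed vocabulary).
    sensed_clearance_m: distance (meters) the AI's sensors report in the
        direction of travel for this command.
    Returns (accepted, reason).
    """
    valid = {"forward_inch", "back_inch", "wheel_left", "wheel_right", "stop"}
    if command not in valid:
        return (False, "unrecognized command")
    if command == "stop":
        return (True, "stopping")  # stopping is always honored
    if sensed_clearance_m < UNSAFE_CLEARANCE_M:
        # The human said "go," but the AI stays responsible and refuses,
        # which covers the drive-into-a-wall problem mentioned above.
        return (False, "insufficient clearance; AI veto")
    return (True, "executing " + command)
```

Under this kind of scheme, the wall-or-cliff instruction simply gets vetoed, which is precisely why one can still say the AI is “responsible” for the driving act.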

For the responsibility aspects of AI self-driving cars, see my article:

Anyway, the point being that you don’t necessarily need to have physical driving controls inside the self-driving car and could potentially instead allow a human to instruct the AI on how to drive the self-driving car. There are numerous pros and cons to this approach and I’d like to make clear that it is not some kind of slam dunk solution. I’ve written and spoken extensively about the trade-offs and won’t repeat those other facets herein.

For conversing with an AI self-driving car to give driving commands, see my article:

For the socio-behavioral aspects of humans instructing AI self-driving cars, see my article:

For Machine Learning aspects about self-driving cars, see my article:

For more about how humans interact with AI self-driving cars, see my article:

One counter-argument about this notion of a human conversing with the AI to have the AI then drive at the directed instructions of the human involves the possibility that the AI itself is out-to-lunch and therefore unable to obey your commands. Suppose the AI is buggy or has gotten busted in an accident, and you want to try and drive the car but the AI is so messed-up that it won’t listen to you or gets confused by your commands – it’s a Catch-22 because the AI is not going to be responsive to your commands and yet it’s the only means allowed for you to drive the car. This is why some argue that physical driving controls are a better choice (though, one can of course argue that even physical driving controls can break down too).

Since we are on the topic of being able to communicate with the AI self-driving car, and perhaps do so in a human-guided fashion, I’ll bring up another option that’s a bit farfetched for some. Rather than you speaking to the AI of the self-driving car, and rather than maybe typing messages into a console or texting them to the AI, suppose instead that you could “think” to the AI? By this I mean that the AI could somehow read your thoughts. Remember I forewarned you this was a stretch.

In any case, there are increasing advances in being able to read various brainwaves. It is believed that someday we’ll be able to communicate with automata and other humans via thinking alone. I’ve written and spoken about this matter and won’t repeat all the pros and cons herein but suffice to say that it is certainly an interesting concept and one that maybe one day we’ll live to see happen. This futuristic approach is commonly referred to as brainjacking.

For my article about brainjacking and driving an AI self-driving car, see:

Another idea is that maybe the driving controls would be physically present in the Level 5 self-driving car but be foldable or hidden from use when so desired. Imagine that an auto maker made a steering wheel that folds into the dashboard and is no longer readily visible or usable. When you need to use the steering wheel, you can open the dashboard panel that houses the steering wheel and pop the steering wheel into place. The same would be the case with the brake pedal and the accelerator pedal.

Here’s the way this would presumably work. You are waiting for a Level 5 self-driving car to come pick you up to take you to the store. It arrives. There does not appear to be any visible driver controls. You get into the Level 5 self-driving car. Away it goes. It parks at the store and awaits your finishing your shopping trip. When you come out with all of those bags of groceries, you notice that your shiny new Level 5 self-driving car has gotten boxed in (similar to my story about parking for the football game).

The Level 5 self-driving car indicates to you that it cannot do anything because the situation is not driver-manageable. You opt to get into the Level 5 self-driving car and tell it that you are going to take over the driving. You then fold out the steering wheel and pedals and proceed to extricate the self-driving car from the parking spot, doing so entirely on your own and without any assistance by the AI. Once you’ve done so, you fold back in the driving controls and tell the AI to take over the driving task henceforth to get you home.

I’ll point out that this is again not a slam dunk kind of solution. Under what circumstances should these hidden driving controls be allowed to be used? Suppose a child is alone in the self-driving car and opts to fold out the driving controls and take over the driving? I think you can see that this has a number of tradeoffs as a solution.
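One way to picture those tradeoffs is as a simple gating check. The following is a purely hypothetical sketch — the parameter names and the policy itself are my own assumptions — of how the fold-out controls might be unlocked only under narrow conditions:

```python
# Hypothetical gating logic for fold-out driver controls in a Level 5
# self-driving car. The conditions below are illustrative assumptions,
# not a real automaker policy.

def may_deploy_controls(occupant_is_licensed_adult,
                        vehicle_speed_mph,
                        ai_reports_unmanageable):
    """Return True only when the fold-out driver controls should unlock."""
    if not occupant_is_licensed_adult:
        return False  # e.g., a child alone in the car cannot take over
    if vehicle_speed_mph > 0:
        return False  # only deploy while the car is at a standstill
    # Only unlock when the AI itself declares the situation is not
    # driver-manageable, such as being boxed into a parking spot.
    return ai_reports_unmanageable
```

Even this tiny sketch surfaces the hard questions: how does the car verify a licensed adult, and should the AI really be the sole judge of when it is stuck?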

I’d like to mention one interesting aside about these situations wherein a human might be guiding the AI of a self-driving car. You could say that such moments are teachable moments. In essence, maybe we should use those situations to have the AI get better at driving the self-driving car. If that was the case, the number of instances in which the AI got stumped would presumably gradually diminish, since its repertoire of driving skills would be increasing.

For humans helping to teach AI self-driving cars via Machine Learning aspects, see my article:

For learning and robomimicry, see my article:

For more about Machine Learning and AI self-driving cars, see my article:

How else might we deal with driver controls?

Foreseeing Multi-Mode Self-Driving Cars, With and Without Driver Controls

It could be that an auto maker might decide to offer two different models of their Level 5 self-driving car. One version has driver controls, the other version does not. This then leaves the choice up to the consumer or buyer of the self-driving car. If the purchaser thinks that having the driving controls is an advantage, they will buy that version. If the purchaser believes that having driving controls is not needed, they will buy that version.

We might therefore let the marketplace decide. This though has its own tradeoffs. Would the buyer even understand the significance of getting or not getting the driver controls? Would the fact that some versions of the same Level 5 car have driving controls while others do not be overly confounding and confusing for all?

For my article about the marketing of AI self-driving cars, see:

For the auto maker, it would also be a pain-in-the-neck. The cost to have two different models will certainly heighten the overall costs of their Level 5 self-driving cars. The manufacturing and assembly are bound to be different. The AI would likely also need to differ in terms of the version for the driving controls versus the one that does not have the driving controls. Probably a bit of a nightmare.

Of course, auto makers do have variants of their conventional cars, ones that offer one set of features versus a different set, so this idea of omitting or including driver controls is not totally farfetched.

Speaking of features, you should consider the Level of the self-driving car as essentially being a kind of feature of the car. Here’s why.

Keep in mind that the Level 4 and Level 5 standard definitions apply to the self-driving car when its AI system is engaged, meaning that presumably you don’t necessarily need to engage the AI. So, if I tell you that a particular self-driving car is a Level 5, I mean that it is able to have the AI do the self-driving at the Level 5 level, but only when the AI self-driving system is engaged.

This brings up a crucial question for the auto maker, namely, should they or should they not allow for the AI on-board system to be able to be disengaged at the whim of a human?

An auto maker might decide that the only means of the Level 4 or Level 5 self-driving car being usable at all will be when the AI driving is engaged and otherwise their Level 4 or Level 5 self-driving car is not going to budge an inch. Thus, in that sense, the AI system is considered always engaged and never able to be disengaged, with respect to the driving of the car.

A different auto maker might decide that their Level 4 or Level 5 self-driving car is able to be used by a human driver whenever the human wants, in which case the AI on-board system is to be disengaged first, prior to the human driver taking on the driving controls. When the human decides they don’t want to drive the self-drivable car at some point, the human then engages the AI system.  This act of engagement and disengagement might involve pressing a big red button that’s inside the car (well, it could be any kind of button or similar kind of on/off switch), or it might involve sending a signal to the self-driving car via your smartphone, or you might verbally tell the self-driving car that you want the AI engaged or disengaged.

Some believe that a Level 4 or Level 5 self-driving car should always be driven by the AI and never be allowed to be driven by a human, therefore the AI is considered always engaged and cannot be disengaged no matter what. Others suggest that if a person wants to have and use a Level 4 or Level 5 self-driving car and do so by driving it themselves, well, they ought to be able to do so. The human is able to decide whether they themselves want to drive the car or let the AI do the driving.
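The two engagement policies just described can be sketched in a few lines of code. The class and method names here are purely illustrative assumptions on my part, not any actual on-board system interface:

```python
# Hypothetical sketch of the two automaker policies: one where the AI is
# always engaged and can never be disengaged, and one where a human may
# toggle the AI off (the "big red button") and take over the driving.

class DrivingModeManager:
    def __init__(self, allow_disengage):
        self.allow_disengage = allow_disengage  # policy set by the automaker
        self.ai_engaged = True                  # AI driving is on by default

    def request_human_takeover(self):
        """Human presses the (hypothetical) big red button."""
        if not self.allow_disengage:
            return False  # policy: AI always engaged; car won't budge otherwise
        self.ai_engaged = False
        return True

    def request_ai_driving(self):
        """Human hands the driving task back to the AI."""
        self.ai_engaged = True
        return True
```

Under the first policy the takeover request is simply refused; under the second, the same request flips the car into human-driven mode until the human hands it back.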

I think you can see that this is another related point of contention in the debate over driver controls.

If you’ve been with me throughout this whole journey of debating the inclusion or exclusion of driver controls, I applaud your tenacity. As I mentioned at the start of this discussion, it’s a very popular question and one that unfortunately does not have a twitter-sized answer per se, other than to say “it depends.”

For now, I’ll leave you with one final thought on this matter.

Will people be willing to give up the use of driving controls?

I ask this because we might devise all sorts of clever ways to avoid having to provide driving controls, and yet in the end it might be that people insist on having them. I realize this idea of letting people have driving controls is somewhat antithetical to the Utopian world of a future that has presumably no car accidents because we have eliminated the human driver from the equation of driving.

There are some that keep insisting we will have zero fatalities due to the advent of AI self-driving cars.

For the Level 3 and below, there are still going to be car crashes and deaths, since the car driving is co-shared between humans and the AI. Even for the Level 4 and Level 5, there will still be car crashes and deaths. How can that be?

Keep in mind that there are avoidable car crashes and unavoidable car crashes. An example of an unavoidable car crash is when a pedestrian suddenly and unexpectedly steps off the curb into the path of a car that is going say 45 miles per hour and does so with only a split second of reaction time. No human driver and no AI driver is going to be able to avoid such an unavoidable car crash. Hopefully, we’ll have a lot less of those unavoidable instances, but in any case, it isn’t going to drop to zero.
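A back-of-the-envelope calculation shows why such a crash is unavoidable; the reaction time and deceleration figures below are my own illustrative assumptions, not from any standard:

```python
# Rough stopping-distance arithmetic: distance covered during the
# reaction delay, plus the braking distance v^2 / (2a). The half-second
# reaction and 7 m/s^2 deceleration are illustrative assumptions.

MPH_TO_MPS = 0.44704  # miles per hour to meters per second

def stopping_distance_m(speed_mph, reaction_time_s=0.5, decel_mps2=7.0):
    """Total distance to stop: reaction travel + full braking travel."""
    v = speed_mph * MPH_TO_MPS
    return v * reaction_time_s + v * v / (2.0 * decel_mps2)

# At 45 mph with a half-second reaction and hard braking on dry pavement,
# the car still travels roughly 39 meters before coming to a stop.
print(round(stopping_distance_m(45), 1))  # → 39.0
```

If the pedestrian steps out only a car length or two ahead, no driver — human or AI — can shed that much distance in time.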

For my article about the zero fatalities myth, see:

For my article about the safety aspects of AI self-driving cars, see:

Anyway, let’s go with the notion that if we did have an all-and-only Level 5 world of AI self-driving cars, and denied humans from driving, it would hopefully eliminate drunk driving and other such human foibles that lead to deaths and injuries, and we would dramatically decrease the number of annual deaths due to driving incidents.

I’ll repeat my question, namely, will people be willing to live in such a world without any human-allowed driving controls, or might they insist that you’ll remove their driving controls over their dead bodies (a play on the “you’ll remove my gun over…” refrain). Would the government and regulators be able to mandate that driving controls are no longer permitted? Maybe the public would accept this notion in return for the decreased number of deaths. I’m betting though that a segment of society will still believe in the “right” to be able to drive a car (yes, I realize it is considered a privilege rather than a right, but I’m saying members of the public-at-large might consider it to be a right).

Perhaps society accedes to those that want to have driving controls, but they then can only use those driving controls in certain settings. If you are on private land, maybe you can use them there, but not when on public roads. If you attempt to use them when they are not legally permitted, either the AI stops you from doing so, or if you get caught you get a hefty ticket and maybe jail time. That kind of thing.

From an AI perspective, much of this debate about the driving controls on AI self-driving cars deals with a much larger macroscopic question about AI systems overall. Allow me to explain.

As more AI systems get devised and rolled out, we need to be aware of the AI boundaries problem. Simply put, when the AI has reached the end of its rope and can no longer perform whatever task is involved, what happens then? Do we hand the situation over to humans? In what way is this best done? Will the humans know what they are doing?

Along those lines, there are some that are worried that with a mix of conventional cars and AI self-driving cars, and if we eliminate the driving controls on some of the AI self-driving cars, it could imply that gradually people that are actually able to drive will have their driving skills decay because they are becoming reliant on the AI to do the driving. If someone is used to being chauffeured most of the time, and then they are suddenly asked to drive a car themselves, the odds are that their driving skills are lessened. This could also mean that they might be more apt to get into car accidents and car crashes.

You can say the same about most other tasks that an AI system might be devised to undertake. Suppose we put an AI robotic arm in a manufacturing plant and humans no longer do what the robotic arm does. If for some reason you need a human to do that task, will humans be available that know how to do so? Will they still be proficient? Might they and others get injured due to the lack of proficiency?

Some are worried that we are heading toward a world of AI that dominates what we do, and we as humans will gradually lose our own skills at doing things. When the AI falters or falls apart, humans will be stumped as to how to do those tasks. By default, we are going to allow AI to let humans become deskilled.

In the case of the car driving task, it is important because the AI is not yet able to truly do whatever a human driver can do. We are a long way from that happening. In the meantime, we’ll be using AI that is relatively brittle and narrow in what it can do as a driver. Will we meanwhile become deskilled as drivers, and yet think that we can jump behind the driver’s wheel and drive if we wish to do so? It could be that the deskilling on a mass scale would lead to havoc as those drivers that were once seasoned have reverted to being novices.

I don’t want to scare anyone about that AI boundaries issue, and I’m not saying the sky is falling, but it is something that we as AI developers should be considering. I’m a proponent of the belief that AI developers should take responsibility for what they do and what they are making, and I subscribe to various professional association codes of conduct that argue for that avowal.

For thoughts on the code of conduct aspects for developers, see my article:

Guess what? You are now officially indoctrinated into the ongoing debate about car driving controls.

Congratulations! You made it through the quagmire and can now get involved directly into the at-times acrimonious dialogue. If you were looking for a quick answer about whether or not driving controls should be included or should be excluded from the topmost levels of self-driving cars, my answer is yes.

Wait, was Lance saying that yes they should be included or yes they should be excluded?

Ask the AI what it thinks.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.