Human In-The-Loop Vs. Out-of-The-Loop in AI Systems: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

The Viking Sky cruise ship is a statuesque vessel that was built in 2017. Unfortunately, it recently got itself into hot water. In March 2019, while operating in the freezing waters off the North Sea coast of Norway, the ship became disabled and a frightful rescue effort took place to lift passengers to safety by helicopter, at night, in pitching seas, from a ship carrying 1,373 passengers and crew. Not the kind of adventure that most are likely seeking on such cruises.

Promoted as a comfortable and intimate cruise ship designed and built by experienced nautical architects and designers, the vessel has a beam of about 95 feet and a length of about 745 feet. Constructed in modern times, it is a state-of-the-art seafaring ship with the latest capabilities and equipment. The desire was to have a ship that could enrich the cruising experience.

What went wrong on this particular voyage?

According to media reports, the preliminary analysis indicates that the ship was relatively low on oil, which normally would not have been an emergency factor per se, but the heaving seas and the sensors on-board the vessel led to an intriguing and design-questioning misadventure.

Turns out that the sloshing around of the oil in the tanks from the heaving seas was so significant that the oil-level sensors registered the amount of oil as dangerously low, nearly non-existent. Insufficient oil is a problem much like it is in a car: without enough oil, the engine lacks sufficient lubricant and you risk the engine overheating and conking out, along with the possibility of severe damage that will be costly to repair or replace. It could even cause other damage, possibly even start an internal fire.

The sensors conveyed the dangerously low oil level by signaling the engines to shut-down.

Apparently, this is an automated aspect that involves the on-board sensing system forcing a shut-down of the engines. There is seemingly no human involvement in the process. It is automatic. One presumes that the architects and designers reasoned that if the engine is going to conk out and be wiped out once the oil is dangerously low or nearly non-existent, the prudent thing to do is pull the plug on the ship's engines. Makes sense, one would assume.
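
To make the design choice concrete, here is a minimal sketch (in Python, with entirely hypothetical names, thresholds, and interfaces that are not based on the Viking Sky's actual systems) contrasting a Human Out-of-The-Loop automatic shutdown with a variant that inserts a Human In-The-Loop confirmation step:

```python
# Hypothetical illustration only; the threshold, functions, and timeout are invented.
CRITICAL_OIL_FRACTION = 0.05  # assumed fraction of tank capacity deemed "dangerously low"

def hotl_oil_monitor(read_oil_level, shutdown_engines):
    """Human Out-of-The-Loop: the sensor reading alone forces the shutdown."""
    if read_oil_level() < CRITICAL_OIL_FRACTION:
        shutdown_engines()  # no human is consulted

def hitl_oil_monitor(read_oil_level, shutdown_engines, ask_bridge_crew, timeout_s=30):
    """Human In-The-Loop: the bridge crew is asked to confirm, with a timed fallback."""
    if read_oil_level() < CRITICAL_OIL_FRACTION:
        decision = ask_bridge_crew(
            "Oil level reads critically low. Shut down engines now?", timeout_s=timeout_s
        )
        if decision in ("confirm", None):  # None means no answer arrived before the timeout
            shutdown_engines()
        # an "override" decision keeps the engines running and leaves the call with the crew
```

The timed fallback is one common compromise between the two camps: the human gets a window in which to weigh in, but if no one answers, the automation acts anyway.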

In a grand convergence of bad luck, this automatic engine shut-down happened just as the cruise ship was in the midst of a storm and, by chance, not near a port. It was near land, but you can't just dock a cruise ship anywhere. The captain decided to drop anchor to keep the ship from drifting toward the shore and hitting abundant deadly rocks. The anchoring did keep the ship in place, but you can also imagine the corresponding problem it creates: the ship becomes a bobbing cork in heavy seas and is now unable to navigate around or over the life-threatening waves.

The good news is that no one was killed and ultimately everyone was saved. The late-night helicopter operation lifted about 479 passengers off the cruise ship. This took time to achieve, and by then the seas had calmed enough to undertake a sea-going effort for the rest of the rescue instead of the more daunting air rescue approach.

Imagine though the stories you could tell about your cruise. Instead of the rather typical 12-day mundane cruise with picture after picture of scenic skies and the excessive drinking of martinis, the passengers have a shocking "all's well that ends well" tale that will make them the stars of any conversation about harrowing cruise ship vacations. For more media coverage about the event, see: https://www.latimes.com/world/la-fg-norway-cruise-ship-sky-20190327-story.html

There are some fascinating lessons to be learned in this story about the Viking Sky.

Keep in mind that a complete investigation has not yet been undertaken (at the time of this writing herein), and so the details are still sketchy. I hope you’ll excuse my willingness to interpret what we know now, even though the existing details might either be incomplete or the media might have misstated matters. Nonetheless, I think we can rise above the specifics herein and aim to ferret out potential lessons, whether or not they actually are imbued in this particular event.

Lessons About Humans In-The-Loop vs Out-of-The-Loop

It has been reported that the oil level sensors apparently forced an automatic engine shutdown. There seemed to be no human involvement at the time in making the decision to do so. You might at first glance assert that there wasn't a need for any human involvement in this decision, since the right thing to do was indeed to shut down the engines, doing so before they overheated, conked out on their own, and possibly caused other damage or sparked a fire.

Case closed.

Not so fast! Remember that the oil level was supposedly relatively low, but not entirely nonexistent. The heaving seas were reportedly sloshing the oil around in the tanks, which led the sensors to conclude that the oil was dangerously low. I'm sure you've done something like this yourself on a smaller scale: you sloshed liquid around in a drinking glass, and at one moment the bottom of the glass appeared empty, while moments later the liquid flowed back and it was clear the glass was not yet fully empty.

Perhaps there was a sufficient amount of oil in the ship's tanks that the engines did not need to be shut down immediately.

We don't know for sure that the oil level was truly that dangerously low. I realize you can try to argue that the oil sloshing is another kind of problem, and even if the amount of oil was still sufficient, it is conceivable that the sloshing would hamper the flow of oil from the tanks to the engine. Gosh, though, you would kind of assume that a ship builder would know about the sloshing possibilities for an ocean-going vessel, wouldn't you?

In any case, let's pretend that the oil was sufficient to keep the engines going, albeit maybe only for a brief period of time; nonetheless, the possibility of continuing to use the engines for some length of time still existed (we'll assume).

If so, the captain presumably could have further navigated the ship, again only briefly, but it might have made the difference as to where the ship could have gotten to and potentially anchored. The sensors that were set up to automatically cause the engines to shut down might have shortchanged the chances of the captain taking other evasive action.

Another interesting element is that the captain or other crew members were seemingly not consulted by the ship’s systems and instead the whole matter played out by automation alone. As far as we know, the automated system “thought” that the engines were not getting sufficient oil and therefore the automated approach involved shutting down the engines.

Suppose that the captain or crew knew that the sloshing oil was not as bad an oil-level situation as the sensors were reporting. Maybe the humans running the ship could have reasoned that the sensors were being misled by the heaving seas. Those humans perhaps could have countermanded the automated engine shutdown and instead used the engines a little while longer.

Sure, you might argue that those “reasoning” humans might then have overridden the automatic shutdown and kept going too long, leading to the engines running out of oil eventually and then risking the dangers associated with not having done an earlier shutdown. That’s a possibility. But it is also possible that the humans could have run the ship just enough to seek a safer spot, and then they themselves might have engaged an engine shutdown.

We really don't yet know whether any of those scenarios could have happened. We also don't know if those scenarios would have led to a better outcome. Admittedly, the approach that took place was in the end "successful" in that no passengers or crew were lost in the emergency. It would be pure speculation to say whether any of the other scenarios would have been safer.

The fascinating aspect is that this is an illuminating example of the classic Human In-The-Loop (HITL) versus the Human Out-of-The-Loop (HOTL) situation (some prefer to use HOOTL instead of HOTL as an abbreviation, but I prefer HOTL and will use it herein; a rose is a rose by any other name).

Per the media reports, the sensors for the oil-level had been crafted by the architects and designers to automatically force an engine shutdown in the case of insufficient oil. There seemed to be no provision for the Human In-The-Loop aspects. This was a keep the Human Out-of-The-Loop moment, as devised by the creators of the system, apparently.

Whenever you design and craft an automated system, you oftentimes wrestle with this tension between whether to have something be a Human In-The-Loop process or whether it should be a Human Out-of-The-Loop approach.

Perhaps the designers in the case of the Viking Sky were convinced that once the oil level got too low, the practical action was to automatically force an engine shutdown. This might have been smart to do and avoid having a Human In-The-Loop, since the human might have taken too long to make the same decision or otherwise endangered the engine and perhaps the entire ship by not taking the seemingly prudent action of immediately doing an engine shutdown.

It is also possible that the architects and designers did not even contemplate having a Human In-The-Loop on this action at all. We assume they probably did conceive of it, and then explicitly ruled out the use of HITL in this type of situation. Of course, maybe while doing the design, no one considered the HITL aspects. They might have merely discussed what to do once the oil level was near kaput, and the obvious answer was to force an engine shutdown.

Did they consider the possibility of sloshing oil that might cause the oil level sensors to misreport how much oil there was actually in the tanks? We don’t know. They might have figured this out and decided that if the sloshing was causing the oil level sensors to report that the oil was really low, it was sufficient to merit shutting down the engines. Once again, they might have made a deliberate design choice of not consulting with any humans in such a situation and decided to proceed with an automatic shutdown as the course of action.

That's the difficulty of trying to identify why an automated system took a particular automated path: we don't know whether the human designers and builders reasoned beforehand about the tradeoffs of a HITL versus a HOTL, or whether they didn't think of it, such that the system became a HITL or a HOTL merely by the happenstance of how they did the design. You would need to dig into the details of how the automated system was designed and built to discern those aspects.

Trying to find out how a particular automated system was designed and developed can be arduous after-the-fact. There might not be documents retained about how things were devised. The documents might be incomplete and lack the details explaining what was considered. Usually, documentation is primarily about what the resulting system design became, rather than the tradeoffs and alternatives that were earlier considered. This usually is only found by directly speaking with the humans involved in the design efforts, though this is also murky because different people can have different viewpoints about what was considered and what was not considered.

For the moment, I’ll leave to the side a slew of other questions that we could ask about the cruise ship tale. Maybe the design stated that the humans should be consulted if an oil level was going to trigger an engine shutdown, but the developers didn’t craft it that way, either by their own choice to override that design approach or by inadvertently not paying close attention to the design details. You cannot assume axiomatically that whatever the design stated was what the developers actually built.

One can also wonder what the provision might have been for false sensor readings.

In this case, the sensors were apparently misled by the sloshing oil, unable to discern that the low readings were due to the sloshing, and we might question why this was not considered as a design factor (maybe it was, and the decision was that it would be overly complicated or costly to deal with).

Suppose too that the sensors had some kind of hardware fault that caused them to claim the oil was dangerously low when the tanks were actually quite full. Did the designers consider this possibility? If so, would they at that juncture have designed the system to use a Human In-The-Loop to verify what the sensors were claiming, or would it still be a HOTL?
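
One design tactic that might have addressed the sloshing (and, partially, a flaky sensor) is to require the low reading to persist before acting on it, for instance by smoothing the readings over a window of time. A hedged sketch, with invented window sizes and thresholds:

```python
from collections import deque

class DebouncedOilSensor:
    """Report 'critically low' only if the smoothed reading stays low for several samples.

    Illustrative only; the window, threshold, and streak length are invented.
    """
    def __init__(self, window=60, threshold=0.05, required_consecutive=3):
        self.readings = deque(maxlen=window)  # e.g., the last 60 raw samples
        self.threshold = threshold
        self.required = required_consecutive
        self.low_streak = 0

    def update(self, raw_level: float) -> bool:
        self.readings.append(raw_level)
        smoothed = sum(self.readings) / len(self.readings)
        if smoothed < self.threshold:
            self.low_streak += 1
        else:
            self.low_streak = 0  # a momentary slosh back resets the streak
        return self.low_streak >= self.required  # True => treat the level as genuinely low
```

The trade-off is that a genuinely empty tank is detected slightly later, which is exactly the kind of cost-benefit judgment the designers would have had to weigh.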

My overarching point is that when you are developing automated systems, there needs to be a careful examination of the advantages and disadvantages of a HITL versus a HOTL. This needs to be done at all levels and subsystems. I say this because it is rare that you could reach a conclusion that all of the varied parts of an automated system would entirely be HITL or entirely be HOTL. The odds are that there will be portions for which a HOTL might be better than a HITL, and portions whereby a HITL might be better than a HOTL.

I mention this too because I know some AI developers who tell me they never trust humans, which means that any system is presumably better off going the Human Out-of-The-Loop route than the Human In-The-Loop route. That's the attitude, or shall we politely say "perspective," that some AI developers take.

I can sympathize with their viewpoint. Any seasoned developer has had their seemingly perfectly crafted system undermined by a human at one juncture or another. A human dolt stepped into the middle of a system process, interrupted the system, and made a bad choice, making the system look rather stupid. The developer was irked that others assumed the system was the numbskull, when the developer knew that it was the human interloper that messed up, not the automation.

When that happens enough times, there are AI developers that become hardened and cynical about any kind of Human In-The-Loop designs. For those developers, the moment you opt to include the Human In-The-Loop, you might as well plant a flag that says big failure about to occur. You might be told by management that it is the way things will be, and so you shrug your shoulders, proceed as ordered, but know in your heart and soul it is a ticking timebomb, waiting to someday explode and backfire on the system.

The problem with this kind of "never" allow a Human In-The-Loop dogmatic view is that you might end up with an automated system wherein the lack of a human being able to do something leads to untoward results. Perhaps the cruise ship story provides such an illustration (note: I'm not basing my entire logic on that one story, so be aware that the cruise ship story might or might not be an exemplar, which doesn't impact my overall point about HITL versus HOTL).

I am trying to drive toward the notion that you normally cannot beforehand declare that an automated system is entirely HITL or entirely HOTL. You need to walk through the details and figure out the places where a HITL or a HOTL seems to be the best choice. If you can do this and truly rule out the Human In-The-Loop as the appropriate choice, I suppose at that point you can proceed with an entirely HOTL design.

For the egocentric AI developer, see my article: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For why AI developers get burnt out, see: https://www.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For my article about how groupthink can impact AI developers, see: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For the noble cause corruption that can happen with AI systems, see my article: https://www.aitrends.com/selfdrivingcars/noble-cause-corruption-and-ai-the-case-of-ai-self-driving-cars/

The Perfection Falsehood Rears Its Head

I'll also emphasize that the HITL versus HOTL question is not necessarily cut-and-dried. Many AI developers tend to live in a binary world wherein they want to make everything into a clear-cut on-or-off kind of choice. Usually, the HITL versus HOTL decision involves gray areas and encompasses doing an ROI (Return on Investment) comparison of the costs and benefits associated with whichever choice you make. It is not solely quantifiable, though. There is judgment involved. It is not a pure numbers game or calculus that can determine these choices.

I’d like to bring up too the “perfection” falsehood that sometimes permeates the design of automated systems.

This involves one side of the HITL versus HOTL debate contending that either the automated system will act perfectly, or that the human will act perfectly. That's not what happens in the real world: an automated system can act imperfectly, and a human can also act imperfectly. The perfection argument is a false one that is misleading and often used to suggest an upper hand, though it is a mirage.

Let's use the cruise ship as an example, though again it might not be accurate in terms of what actually did happen.

Imagine a bunch of the ship designers sitting around a table during a JAD (Joint Application Development) session and arguing about whether to have the oil level sensors trigger an automatic shutdown of the engine. One of the louder and more seasoned designers speaks up, doing so in a commanding voice. We know that humans make mistakes, the designer proclaims, and the automation won’t make mistakes since it is, well, it is automated, and so the best choice in this case is to cut the human out of the matter.

You see how perfection is used to assert that the HOTL is the right way to go?

This can be used on the other side of the coin too. Erase for the moment the image of that seasoned designer and start the image anew.

Now, consider this. A seasoned designer stands up, looks around the room, and points out that automation can falter or go awry, and the wise approach would be to include the humans into the matter, since they will always know the right decision to be made. Those humans will consider aspects beyond what the system itself knows about and be able to make a reasoned choice far beyond anything that the automation could do.

Once again, we’ve got a perfection argument going on, in this case for the HITL approach.

We might all agree that humans have a chance at using reasoning and therefore might indeed be able to make a better selection or choice of actions than an automated system, but this also overlooks the limitations and weaknesses inherent in including Humans In-The-Loop.

Face it, humans are human. Let's use the cruise ship story to showcase this aspect, which I'll do by stretching the story a bit.

Suppose the cruise ship was designed to ask the humans what to do in the situation when the oil level sensors are reporting that the oil level is extremely low. Maybe the captain or crew might opt to completely ignore the warning and do nothing, in which case the engine conks out, and perhaps an on-board fire starts, threatening the entire ship. Bad humans.

Or, maybe the captain and crew see the warning and decide they will use the ship for just five more minutes and will then do a manual engine shutdown. Turns out, though, they misjudge the situation, and after two minutes the engine conks out, destroyed by the wait, and even if oil could now be provided to the ship, the engine is completely useless. Bad humans.

The reality is that any automation can falter or fail, and likewise any human or humans can falter or fail.

There isn’t this perfection nirvana that is sometimes portrayed as a means to bolster an opinion about how to design or develop an automated system. Whenever someone tries the perfection argument on me, I try to remain calm, and I gently nudge them away from their perfection mindset.

It can be hard to do. Those who have had humans mess up tend to swing to the automation-only side, and those who have had automation mess up tend to swing to the Humans In-The-Loop side. The world is not that easy and not so simplistic, though we might wish it to be.

As an aside, one wonders how the captain and crew of the Viking Sky managed to allow the ship’s oil to get so low that the predicament itself arose.

I suppose the captain might try to say that it was the responsibility of the Viking maintenance team on-shore to make sure that his ship was well-stocked with oil prior to getting underway from the dock, though there is that universal thing about captains being altogether responsible for their ships and ensuring that their ship is seaworthy. It also raises an interesting aspect: perhaps the ship designers and architects assumed that the ship would be highly unlikely to ever get that low on oil and that the cruise company and the captain would not allow such a situation to occur. Maybe a kind of "perfection" was in the minds of the ship designers about the oil aspects.

I think we can all easily imagine that a car owner might neglect to ensure that they have enough oil in their car for a driving journey, but for a cruise ship to not have sufficient oil, really? Anyway, next time you take a cruise, you might want to pack into your on-board bags a few extra quarts of oil, just in case the captain and crew find themselves needing some more oil for the ship. Let’s see, my cruise-going “To Do” list now includes my toothbrush, swim trunks, suntan lotion, five quarts of oil, oil spigot, toothpaste, and so on.

Range of Characteristics Needed For HITL Versus HOTL Debate

An upside for the Human In-The-Loop approach often involves these kinds of characteristics:

  •         Humans can potentially provide intelligence into the process
  •         Humans can potentially provide emotion or compassion into the process
  •         Humans can potentially detect/mitigate runaway automation
  •         Humans can potentially detect/overcome nonsensical automation
  •         Humans can potentially shore-up automation gaps
  •         Humans can potentially provide guidance to automation
  •         Etc.

Any of those aspects can bolster the case for going the HITL route rather than the HOTL path.

I don't want you to leap to any conclusions, and so I've said the word "potentially" in each of the listed items. Also, again keep in mind that this is not a blanket statement across an entire system; the assessment needs to be done at the subsystem levels too.

We also need to consider the characteristics about the downsides for the Human In-The-Loop:

  •         Humans can make bad choices due to not thinking things through
  •         Humans can make bad choices due to emotional clouding
  •         Humans can slow down a process by taking too long to take an action
  •         Humans can make errors in the actions they take
  •         Humans can be disrupted in the midst of taking actions
  •         Humans can freeze-up and fail to take action when needed
  •         Etc.

You can essentially reverse those same upsides and downsides and use them as a characteristics listing for the upsides and downsides of the Human Out-of-The-Loop too.

There are some additional salient matters involved.

When designing an overall system, you need to be careful about “sneaking” HITL into subsystems that might be rarely used and having the rest of the system act as HOTL.

In essence, if humans involved in the use of a system are lulled into assuming that it is a Human Out-of-The-Loop because of a rarity of experiencing any Human In-The-Loop circumstances in that system, those humans can become complacent or dulled when the moment arises for them to perform as a Human In-The-Loop.

Examples of this are arising in the emergence of AI self-driving cars. Back-up drivers that are employed to watch over the AI of a self-driving car are likely to assume they don't need to be attentive, which can happen due to long periods with no need for human intervention. The Uber self-driving car incident of ramming and killing a wayward pedestrian in Tempe, Arizona is an example of how a back-up driver can become complacent.

This will also happen, though, to everyday human drivers that begin to use Level 3 self-driving cars. Automation that keeps getting better will ironically tease humans into becoming less attentive to the driving task, in spite of the fact that the human driver is considered always on-the-hook and responsible for the driving of the car. It is an easy mental trap to fall into.

For my analysis of the Uber incident: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

For the early analysis that I did about the Uber incident, see: https://www.aitrends.com/selfdrivingcars/initial-forensic-analysis/

For the dangers facing back-up drivers, see my article: https://www.aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For my article about the issues arising for Level 3 self-driving cars, see: https://www.aitrends.com/selfdrivingcars/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

You can also have situations whereby you’ve devised a system to be primarily HITL and then you have a “hidden” HOTL that catches a human operator by surprise.

Some suggest that the Boeing 737 MAX situation might have had this kind of circumstance.

There was an automated subsystem, the MCAS (Maneuvering Characteristics Augmentation System), which was apparently silently engaging to push the plane's nose down when the automation ascertained it was relevant to do so, yet supposedly there was no noticeable notification to the pilots, and/or it was assumed that the pilots would already be aware of this subtle but significant feature.

You might say that the pilots were primarily a Human In-The-Loop situation in terms of flying the plane for most of the time, while the MCAS was more akin to a Human Out-of-The-Loop subsystem that would pop into the flying on rare occasions.

The pilots, being used to being HITL, could become confounded when a subsystem suddenly invokes a Human Out-of-The-Loop approach, especially so since it tended to occur in the midst of a crisis moment of flying a plane, compounding an already likely chaotic and tense situation.

For my article about the Boeing lessons learned, see: https://www.aitrends.com/selfdrivingcars/boeing-737-max-8-and-lessons-for-ai-the-case-of-ai-self-driving-cars/

Consider Ramifications of Human Governing-The-Loop (HGTL)

An additional salient element is an aspect that I refer to as the Human Governing-The-Loop or HGTL.

I’ve so far discussed two sides of the same coin, the Human In-The-Loop and the Human Out-of-The-Loop. We can take a step back somewhat and consider the coin itself, so to speak.

See Figure 1.

Let’s consider the cruise ship again.

Could the captain and crew have potentially turned-off the automated subsystems involved or otherwise prevented the automatic shutdown of the ship’s engines?

I don’t know if they could have, but let’s assume that they probably could have done so. There might have been some kind of master emergency switch that they could have used to turn-off the sensors, presumably preventing the sensors from triggering the engine shutdown. Or, maybe once an engine shutdown is started, perhaps there’s an emergency switch that stops the shutdown from proceeding and will keep the engines going.

I’m not saying it would necessarily have been wise for the captain or crew to take such an action. Maybe it would have been much worse to do so. Perhaps turning off the oil sensors might mean they would be blind as to how much oil they really have in the tanks and could cause the captain and crew to run the engine when it should no longer be safely running. And so on.

If you like, we can instead consider the Boeing 737 situation.

It appears that the pilots could completely turn off the MCAS. This could be good or bad. The MCAS was intended to help the pilots and try to prevent a dangerous nose-up situation. The media has reported that other pilots of the Boeing 737 had from time to time opted to turn off the MCAS, presumably to prevent it from intervening, believing that they as human pilots could handle the plane without the MCAS underway.

My point is that there is often a means for a human to not be per se a Human In-The-Loop and yet still be able to take action as a human that can impact the automated system and the process underway.

They “own” the coin, or at least can overrule the coin in a certain manner of speaking.

If the human can turn off the automated system, or otherwise govern its activation, I'll call that the Human Governing-The-Loop. I make a distinction between the Human In-The-Loop and the Human Governing-The-Loop by suggesting that the HGTL is not necessarily involved inside the loop of whatever action is taking place. They could be, but they don't have to be.

I might have a factory floor with lots of automated robots. Some of those robots are interacting with humans in a Human In-The-Loop fashion. Some of those robots don’t interact with humans at all and are considered entirely Human Out-of-The-Loop.

Suppose a manager of the factory has access to a master switch that can cut power to the entire factory. If they were to smack that master switch, power goes out, and all of the robots come to an abrupt halt. This manager is not actively involved in working with those robots and so is not technically a Human In-The-Loop in the traditional sense.

Yet, the human can do something about the automation, in this case completely halt it. I realize some of you might say that, if so, the factory manager is indeed a Human In-The-Loop. I don't want to get bogged down in a debate about this point, and I concede that you could say the manager is a Human In-The-Loop, but I dare say it is somewhat misleading due to the omnipresent role that this human has.

For that reason, I have carved out another kind of human loop related role, the Human Governing-The-Loop.

You might not like it, and that’s fine. I think it useful though to consider the role and thus tend to call it out and give due attention to it.
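
To show how the HGTL role sits apart from the HITL and HOTL designations in the factory example, here is a toy sketch (all names hypothetical) in which the governing human is outside every robot's task loop yet can halt all of them at once:

```python
class Robot:
    def __init__(self, name: str, human_in_the_loop: bool):
        self.name = name
        self.human_in_the_loop = human_in_the_loop  # HITL vs. HOTL at the task level
        self.running = True

    def step(self):
        if self.running:
            pass  # do one unit of work, consulting a human operator if HITL

class FactoryGovernor:
    """The Human Governing-The-Loop: not inside any robot's task loop,
    but holding a master switch that overrides all of them."""
    def __init__(self, robots):
        self.robots = robots

    def master_cutoff(self):
        for robot in self.robots:
            robot.running = False  # everything halts, HITL and HOTL robots alike

# Usage: the governor halts both kinds of robots without ever joining their loops.
governor = FactoryGovernor([Robot("welder", True), Robot("palletizer", False)])
governor.master_cutoff()
```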

There are some systems devised to prevent a human from trying to disable or cut off the system, which might make sense because such a cut-off is otherwise a kind of hole or gap in what the automated system is intending to do. Think of a security system: just like in the spy movies, you don't want a clever crook to cut off the power and then get access to a treasure trove (spoiler alert, think about the FBI in the movie "Die Hard" and you'll know what I mean by this).

On the other hand, if there is absolutely no means to stop or hinder an automated system, this is the nightmarish predicament you see in many movies that portray an AI system that’s gone amok. Some believe that we might be headed to a “singularity” whereby AI becomes all-powerful and there is no means for a human to stop it, i.e., no HGTL.

For my article about the AI singularity, see: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For the conspiracy theories about AI, you might enjoy reading this: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

For doom and gloom about the super-intelligence and the paperclip, see my article: https://www.aitrends.com/selfdrivingcars/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

For my article about idealism in AI, see: https://www.aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

AI Self-Driving Cars and HITL Versus HOTL

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. For auto makers and tech firms making AI self-driving cars, the question of HITL versus HOTL is a crucial one. It needs to be explicitly considered and not just be designed or built in a happenstance manner.

Allow me to elaborate.

I'd like to clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It's all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let's focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
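
As a rough sketch of how those steps chain together on each processing cycle (the function and object names below are placeholders for discussion, not any particular vendor's architecture):

```python
def driving_cycle(sensors, world_model, planner, controls):
    """One pass through the canonical AI driving-task pipeline (illustrative only)."""
    raw = sensors.collect()               # sensor data collection
    detections = sensors.interpret(raw)   # interpretation: objects, lanes, signs
    fused = sensors.fuse(detections)      # sensor fusion across cameras, radar, LIDAR
    world_model.update(fused)             # virtual world model updating
    plan = planner.plan(world_model)      # AI action planning
    controls.issue(plan)                  # car controls command issuance
```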

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of Human In-The-Loop versus Human Out-of-The-Loop, let's consider how this applies to AI self-driving cars, of which I've already provided a glimpse by discussing the role of human back-up drivers and the emergence of Level 3 self-driving cars.

HITL and HOTL for Level 4 and Level 3

For self-driving cars less than Level 4, there must be a Human In-The-Loop design since by definition those are cars that involve co-sharing of the driving task with a licensed human driver. As a reminder, this then entails figuring out where it makes sense to best use HITL versus HOTL. In other words, not every aspect of the AI for the self-driving car will use HITL, nor will every aspect exclusively use HOTL; instead it will vary.

Keep in mind too that there should be an explicit effort involved in deciding where HITL and HOTL belong. This should not be done by happenstance.

It might also be prudent to document how such decisions were made. Some would say that it will be important later on, in case questions are raised from a product liability perspective. Others might argue that perhaps it might be prudent to not have such documentation, under the belief that it might be used against a firm and undermine their case. Perhaps the standard answer is to consult with your attorney on such matters.

From a regulatory perspective, some of the HITL versus HOTL choices can pertain to abiding by regulations about the design and development of self-driving cars. Once again this highlights the importance of doing such design in a purposeful manner, otherwise the AI self-driving car might run afoul of federal, state or local laws.

We have found it useful to put together a matrix of the various functions and subfunctions of the AI system and then indicate for each element whether it is intended to be HITL or HOTL. Included in this matrix would be an explanation of the rationale for which choice is being made. The matrix tends to change over time as the AI self-driving system is evolving and maturing.

In many cases, a feature or function starts off as a Human In-The-Loop, doing so because the AI is not yet advanced enough to remove the human from having to be in the loop. Given advances in Machine Learning and Deep Learning, gradually there are driving tasks that shift from being in the hands of the human driver and into the "hands" of the AI system.

A number of the auto makers and tech firms are trying to evolve their way from a Level 3 to a Level 4, and then from a Level 4 to a Level 5. Thus, you might have a matrix with a lot of HITL’s that gradually become HOTL’s. Once you arrive at a Level 5, in theory the matrix is nearly all HOTL’s, though I’ll provide some caveats about that notion in a moment.
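
A lightweight way to represent such a matrix is a simple table of subfunctions, each tagged HITL or HOTL along with the rationale and the history of when the designation changed. The sketch below is only one plausible shape for it, and the example entries are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LoopDesignation:
    subfunction: str
    mode: str                 # "HITL" or "HOTL"
    rationale: str
    history: list = field(default_factory=list)  # prior designations, kept for traceability

    def transition(self, new_mode: str, new_rationale: str):
        self.history.append((self.mode, self.rationale))
        self.mode, self.rationale = new_mode, new_rationale

# Invented example entries, not a recommended allocation.
matrix = [
    LoopDesignation("highway lane keeping", "HOTL", "validated to exceed human consistency"),
    LoopDesignation("construction-zone maneuvering", "HITL", "AI not yet reliable on ad hoc signage"),
]
matrix[1].transition("HOTL", "Deep Learning update validated on construction-zone data")
```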

The Level 4 is a bit of a different animal because it relies upon being able to do presumably pure self-driving while within some set of stated ODDs (Operational Design Domains). For example, a Level 4 might state that the AI is able to drive the self-driving car in sunny weather, in a geofenced area, and not at nighttime. When the particular ODD is exceeded, such as in inclement weather or at nighttime in this example, the AI is supposed to either bring the self-driving car to a considered safe halt or turn over the driving task to a human.

If the human then opts to take over the driving once the ODD is exceeded, you are back to essentially a Level 3 situation in that the human driver and the AI are potentially co-sharing the driving task. It seems unlikely that the Level 4 would simply drop down into a Level 2 mode once the AI for the Level 4 is outside of its defined ODD, and more likely that the Level 4 would behave essentially as the (former) Level 3 that was enhanced to become a Level 4.
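
To make the ODD handling concrete, here is a minimal, hypothetical sketch of a Level 4 supervisory check that decides among continuing autonomously, handing off to a human, or executing a safe stop when conditions fall outside the declared ODD (the conditions, names, and warning window are invented):

```python
from dataclasses import dataclass

@dataclass
class OperationalDesignDomain:
    allowed_weather: set        # e.g., {"sunny", "overcast"}
    geofence_ids: set           # mapped areas the system has been validated for
    daytime_only: bool = True

    def contains(self, conditions: dict) -> bool:
        return (conditions["weather"] in self.allowed_weather
                and conditions["region"] in self.geofence_ids
                and (conditions["is_daytime"] or not self.daytime_only))

def supervise(odd, conditions, human_available, handoff, safe_stop):
    """Outside the ODD: offer the wheel to a human if one can take it, else stop safely."""
    if odd.contains(conditions):
        return "continue_autonomous"
    if human_available() and handoff(warning_seconds=10):  # alert well before the transition
        return "human_driving"  # effectively a Level 3-style co-sharing situation
    safe_stop()
    return "safe_halt"
```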

As per my earlier remarks, the AI developers need to consider carefully when the HITL will come into play and when the HOTL will come into play, including being cautious about any "hidden" HITLs or HOTLs that are intended to occur only rarely.

Some mistakenly believe that only when a HITL is going to occur do you need to alert the human, but I would argue that the same notion of a forewarning or alert should be done when the HOTL is going to happen too.

A general rule of thumb is that avoiding surprises from either a HITL or a HOTL will go more smoothly than a sudden, surprise instance of either.

For product liability aspects and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

For federal regulations and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For the bifurcation of autonomy, see my article: https://www.aitrends.com/selfdrivingcars/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For Machine Learning and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

HITL and HOTL for Level 5 Self-Driving Cars

For Level 5 self-driving cars, presumably there isn't any Human In-The-Loop involved, due to the notion that the AI is supposed to be able to drive the self-driving car without any human driving assistance. The Level 5 self-driving car might not have any humans inside the car at all and be driving, say, to get to a destination to pick up passengers.

I mention this to point out that there might not be any humans inside a Level 5 self-driving car, which would imply by default that there is no chance to involve a human into the loop even if the AI wanted to do so.

There are various caveats that are worth mentioning, which I've often noticed pundits seem to leave out or not consider.

First, there are some AI self-driving car designers that are opting to include a provision for remote operation of the self-driving car. The idea is that there might be times at which you want a remote human driver to take over the wheel. I've previously written and spoken about the aspect that this can be harder to arrange than you might think, and in some sense it would imply that the self-driving car is not truly a Level 5 (since it seems to be potentially reliant on a human driver, regardless of whether the human happens to be inside the car or not).

For my article about remote operations of an AI self-driving car, see: https://www.aitrends.com/selfdrivingcars/remote-piloting-is-a-self-driving-car-crutch/

If there is a provision for a remote human operator, this obviously then dictates a Human In-The-Loop need for some amount of the functioning of the AI self-driving car. The same comments about the HITL and HOTL for the Level 4 and Level 3 are equally applicable to a Level 5 that has a remote human operator that can become involved in the driving task.

Another factor in the possibility of a Human In-The-Loop for a Level 5 involves electronic communication with the self-driving car. If the Level 5 is using V2V (vehicle-to-vehicle) electronic communications, or possibly V2I (vehicle-to-infrastructure), or possibly V2P (vehicle-to-pedestrian), these are all avenues that might encompass a human. We tend to assume that the V2V and V2I are being provided by another automated system, but that's not necessarily the case. The V2V, V2I, and V2P can arise from a human (I realize too that you could make the same case for the OTA, Over-The-Air capabilities).

That being said, you might argue that all of these electronic communications are not within the realm of the driving task of the self-driving car and therefore not particularly a valid kind of HITL. They are presumably advisory messages or communiques, and it is up to the AI of the self-driving car to decide what to do about those messages. The AI might use the messages in determining what driving it should do, or it might reject or opt to ignore the messages.

This dovetails into a similar kind of dilemma, namely the situation of having passengers inside the Level 5 self-driving car and what their role might be related to the driving task.

Let's suppose that the Level 5 self-driving car has no actual driving controls for any human use. This implies that a human inside the Level 5 will be unable to do any of the driving, even if they wanted to do so. There is though a kind of way in which the passenger can (possibly) impact the driving of the self-driving car, doing so via interaction with the AI system.

You are inside an AI self-driving car. You tell it where you want to go. As the AI proceeds to drive to the destination, you yell at the AI to hit the brakes because you have noticed a dog chasing a cat and those two will cross the path of the self-driving car. The self-driving car has not yet detected those two animals, perhaps because they are both low to the ground and off to the side of the road, though the human passenger saw them and deduced that they are likely to enter into the street.

Are you involved in the driving of the self-driving car?

In this case, we're assuming you aren't in direct control in terms of having access to a steering wheel or the pedals. But does your verbal command become a different kind of driving control, one in which you are using your voice rather than your hands or feet to control the car? Is your voice really that much different from having physical access to the driving controls?

The point being that a human is presumably going to be in the loop for Level 5 self-driving cars, either by being a passenger and offering driving "commands" to the AI, which might or might not comply, or because driving "suggestions" (or directives) arrive via V2X (which encompasses all of the various V2V, V2I, V2P, etc.).
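
One plausible way to handle such input is to treat the passenger's utterance as advisory rather than as a direct control channel: the AI cross-checks the claim against its own perception and decides whether to comply. The sketch below is only illustrative, and all of the names and interfaces are invented:

```python
def parse_intent(utterance: str) -> dict:
    """Toy intent parser; a real system would use full NLP, this just spots a braking request."""
    text = utterance.lower()
    if "brake" in text or "stop" in text:
        return {"action": "brake", "reason": "passenger-reported hazard"}
    return {"action": "none"}

def handle_passenger_utterance(utterance, perception, planner, brakes):
    """Treat a passenger's spoken warning as advice to the AI, not as a driving control."""
    intent = parse_intent(utterance)
    if intent["action"] != "brake":
        return "ignored"
    # Cross-check the claim against the AI's own sensing before acting on it.
    hazard = perception.scan_for(intent["reason"], widen_search=True)
    if hazard is not None or planner.braking_is_low_risk():
        brakes.apply(gentle=True)
        return "complied"
    planner.log_advisory(intent)  # keep the hint around even if not acted upon
    return "noted_but_not_acted_on"
```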

To me, this means that for true AI self-driving cars of a Level 5, you still need to take into account the Human In-The-Loop. It won’t be a Human Out-of-The-Loop, at least not entirely, though there are certainly situations in which there isn’t any HITL involved.

For the Natural Language Processing (NLP) and AI interaction in self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

For the emotional aspects of human and AI interaction, see: https://www.aitrends.com/selfdrivingcars/ai-emotional-intelligence-and-emotion-recognition-the-case-of-ai-self-driving-cars/

For my article about the socio-behavioral elements, see: https://www.aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For deep personalization of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ai-deep-personalization-the-case-of-ai-self-driving-cars/

HGTL and Level 5 Self-Driving Cars

I’d like to bring up the other facet of HITL and HOTL, the HGTL element. I had mentioned that a human might not necessarily be in the loop and yet still have sway over an automated system, doing so in a kind of governance manner, thus the Human Governing-The-Loop.

In theory, if you, a human, do not turn-on your Level 5 AI self-driving car, it’s not going to do anything at all. Not everyone agrees with that concept. Some believe that the Level 5 will always be turned on, similar in a manner that you might have Alexa or Siri always on, waiting for an indication from the human that an action of some kind should be undertaken.

Does this mean that you could never fully turn-off your Level 5 AI self-driving car? There must be some means to get it to conk out. Perhaps you would need to reach under-the-hood and disconnect the batteries, denying any power to the self-driving car. That’s a bit extreme, it would seem.

Some have suggested that there should be a “kill switch” included inside of the AI self-driving car. One thought is that if you hit the kill switch, it disengages the AI and you now have a self-driving car with nothing able to drive it. For a Level 5, if there aren’t any driving controls physically inside the self-driving car, and if you’ve turned-off the AI such that the self-driving car won’t respond to your voice commands, it would seem like you have quite a hefty paperweight.

I’m bringing this up to mention that we need to be considering the HGTL facets of AI self-driving cars. It might not seem important right now, due to the aspect that the auto makers and tech firms are mainly trying to get an AI self-driving car that can drive reasonably safely via the AI, but it is a matter that we’ll ultimately need to wrestle with.

Conclusion

AI systems tend to aim toward getting Humans Out-of-The-Loop, doing so by leveraging AI capabilities that mimic or attempt to perform in the way that humans do. We cannot rush in that direction and end up falsely believing that an AI system can perform without a HITL when it perhaps cannot realistically do so.

At the same time, if a HITL is being devised, the AI needs to be built in a manner that appropriately interacts and co-shares with the human. Fewer surprises is a handy mantra. The same mantra applies to those hidden instances of HOTL.

Besides the classic HITL and HOTL, a slightly more macroscopic viewpoint includes the HGTL.

Even if a human is not directly involved in the automated system and the performance of its scoped tasks, there is likely a governing role that a human can potentially undertake. Whether this governing possibility counts as a HITL or not, the HGTL is nonetheless a reminder to identify what to do about humans that are seemingly neither in the loop nor per se outside of the loop (depending upon the definition of the loop), and yet can nonetheless impact the loop.

There are all kinds of loops, including lopsided ones, reinforcing ones, and loops that either rely upon humans or do not do so. AI systems are going to bring to the forefront the human role inside and outside of loops, doing so in ways that were not as feasible with prior automation. That’s my feedback loop to those making AI self-driving cars.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.