Emergency-Only AI and Autonomous Cars

AI self-driving cars need to be able to respond to emergency situations that come up on the road. An accident scene can have a ripple effect as the car swerves. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

I had just fallen asleep in my hotel room when the fire alarm rang out.

It was a few minutes past 2 a.m. The hotel was relatively full and had been quiet at that hour, as most of the guests had retired for the evening. The sharp clang of the fire alarm pierced the hallways and walls of the hotel. I could hear people moving around in their rooms as they quickly got up to see what was going on.

Just last month I had been staying at a different hotel in a different city when the fire alarm went off, but it turned out to be a false alarm. That hotel was new, open only a few weeks, and apparently the staff were still ironing out the bugs in the hotel’s systems. Somehow the fire alarm had been triggered right around midnight, and after a few minutes the hotel staff told everyone not to worry since it was a false alarm.

Of course, some more discerning hotel guests remained worried since they didn’t necessarily believe the staff that the fire alarm was a false one.

Maybe the staff was wrong, and if so, the consequences could be deadly.

Ultimately, there was no apparent sign of a fire, no smoke, no flames, and gradually even the most skeptical of guests went back to sleep.

I could not believe that I was once again hearing a fire alarm.

In my many years of staying at hotels while traveling for work and sometimes (rarely) for vacations, I had only experienced a few occasions of a fire alarm ringing out. Now, I had two in a short span of time. The earlier one had been a dud, a false alarm. I figured that perhaps this most recent fire alarm was also a false alarm.

But, should I base this assumption on the mere fact that I had a few weeks earlier experienced a false alarm?

The logic was not especially airtight, since these were two completely different hotels with nothing particularly in common, other than that I had stayed at both of them.

False Alarm Or Genuine Fire Emergency

The good thing about my recent experience of a false alarm was that it had reminded me of the precautions you should undertake when staying at a hotel.

As such, I had made sure that my normal routine while staying at hotels incorporated the appropriate fire-related safeguards. One is to keep your shoes where you can find them if you are awakened at night by a fire or a fire alarm, allowing you to quickly put them on and escape the room. Without shoes, you might try to flee the room or run down the hallway over broken glass or other shards that would inhibit your escape or harm you as you tried to get out.

I also kept my key personal items such as my wallet and smartphone near the bed, and kept my pants and jacket ready in case they were needed. I also knew the path to the doorway of my hotel room and kept it clear of obstructions, doing so before I went to sleep for the night. I had made sure to scrutinize the hallway and knew the nearest exits and where the stairs were. Some people also prefer to stay on the lower floors of a hotel, in case firefighters are trying to get you out, since they can more readily reach you there either by foot or via a fire truck ladder.

I don’t want you to think I was obsessed with being ready for an emergency. The precautions I’ve just mentioned are all easily done without any real extra or extraordinary effort involved. When I first check into my hotel room, I glance around the hallway as I am walking to the room, spotting where the exits and the stairs are. When I go to sleep at night, I make sure the hotel room door is locked, and as I walk back to the bed I make sure the path is unobstructed. These are things that can be done by habit and fit seamlessly into the effort of staying at a hotel.

So, what did I do about the blaring fire alarm on this most recent hotel stay?

I decided it was worthwhile to assume it was a real fire and not a false alarm, betting on my safety rather than on the lazy assumption that I could remain lying in bed and wait to find out whether the alarm was true or false.

I rapidly got into my jeans and coat, put on my shoes, grabbed my wallet and smartphone from the bed stand, and went straight to the door that led into the hallway.

I touched the door to see if it was hot, another precaution in case the fire is right on the other side of the door (you don’t want to open a door that leads into a fire; the air from your room will simply feed the flames and you could be charred to a crisp).

Feeling no heat on the door, I slowly opened it to peek into the hallway.

Believe it or not, there was smoke in the hallway.

Thank goodness I had opted to believe the fire alarm. I stepped into the hallway cautiously. The smoke appeared to be coming from the west end and not the east end. I figured this implied that wherever the fire was, it was likely on the west side of the hotel rather than the east. I began walking in the easterly direction.

What seemed peculiar was that no one else was making their way through the hallway.

I was pretty sure that there were people in the other rooms as I had heard them coming to their rooms earlier that evening (often making a bit of noise after likely visiting the hotel bar and having a few drinks there).

Were these other people still asleep?

How could they not hear the incessant clanging of the fire alarm?

The sound was blaring and loud enough to wake the dead.

I decided to bang on the doors of the rooms that I was walking past.

I would rap on a door with my fist and then yell out “Fire!” to let the occupants know that something was indeed happening. My guess was that others had heard the fire alarm but chosen to wait and see what might happen. With the hallway starting to fill with smoke, this seemed sufficient proof to me that a fire was underway somewhere. The smoke would eventually seep into the rooms. For now, it was mainly floating near the ceiling of the hallway, not yet thick enough to reach down to the floor and permeate into the rooms at the door seams.

The good news was that no one ended up getting hurt and the fire was confined to the hotel’s laundry room.

The fire department showed up and put out the flames. They also brought in large fans to help clear the smoke out of the hotel. The staff did an adequate job of helping the hotel guests and moved many of them to another wing to get away from the residual smoky smell. It was one of the few times I’d ever been in a hotel that had a fire by which I was directly impacted.

The hotel had smoke alarms in each of the hotel rooms, along with smoke alarms in other areas of the hotel. This is nowadays standard fare for most hotels, and personal residences likewise are supposed to have fire alarm devices set up in appropriate areas. These silent guardians are there to be your watchdogs. When smoke begins to fill the air, the alarm detects the smoke and then starts to beep or clang to alert you.

Some of today’s fire alarms speak to you. Rather than simply making a beep sound, these newer fire alarms emit the words “Fire!” or “Get out!” or other spoken phrases. It is thought that people might be more responsive to hearing what sounds like a human voice telling them what to do. Hearing a beeping sound might not create as strong a response.

You’ve likely at times wrestled with the fire alarm in your home.

Perhaps the fire alarm battery ran low and the alarm started a low beeping sound to let you know. This often happens on a timed basis, wherein the low-battery beep at first sounds every five minutes or so. If you don’t change the battery, the interval gets shorter: the beep might then come every minute, then every 30 seconds, and so on.

The hotels I stay at usually also have a fire alarm pull. These are devices typically mounted in the hallways that you can grab and pull to alert others that a fire is taking place. I’d bet that when you were in school, someone at some point pulled the fire alarm to avoid taking a test. A prankster who pulls the fire alarm puts everyone at risk, since people can get injured rushing out upon hearing an alarm, plus it might dull their reaction the next time there is an actual fire alarm alert.

Some hotels have a sprinkler system that will spray water to douse a fire.

The sprinkler activation might be tied into the fire alarms so that the moment an alarm goes off the sprinklers are activated. The two are usually not so tightly linked, though, because a false alarm might then set off the sprinklers. Once those sprinklers start going, the damage to the hotel property mounts, and you’d obviously want the sprinklers to engage only when you are fairly certain that a fire is really occurring. As such, there is often a separate mechanism that has to be operated to get the fire sprinklers to engage.

Emergency Systems That Save Lives

This discussion about fire alarms and fire protection illuminates some important elements about systems that are designed to help save human lives.

In particular:

  • A passive system like the fire alarm pull won’t automatically go off and instead the human needs to overtly activate it
  • For a passive system, the human needs to be aware of where and how to activate it, else the passive system otherwise does little good to help save the human
  • An active system like the smoke alarm is constantly detecting the environment and ready to go off as soon as the conditions occur that will activate the alarm
  • Some system elements are intended to simply alert the human and it is then up to the human to take some form of action
  • Some system elements such as a fire sprinkler are intended to automatically engage to save human lives and the humans being saved do not need to directly activate the life-saving effort
  • These emergency-only systems are intended to be used only when absolutely necessary and otherwise are silent, being somewhat out-of-sight and out-of-mind of most humans
  • Such systems are not error-free in that they can at times falsely activate even when there isn’t any pending emergency involved
  • Humans can undermine these emergency-only systems by not abiding by them or taking other actions that reduce the effectiveness of the system
  • Humans will at times distrust an emergency-only system and believe that the system is falsely reporting an emergency and therefore not take prescribed action

I’m invoking the use case of fire alarms as a means to convey the nature of emergency-only systems.

There are lots of emergency-only systems that we might come in contact with in all walks of life. The fire alarm is perhaps the easiest to describe and serves well to illustrate what such systems do and how humans act and react to them.

Autonomous Cars And Emergency-Only AI

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One approach that some automakers and tech firms are taking toward the AI systems for self-driving cars involves designing and implementing those AI systems for emergency-only purposes.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and the steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. Level 4 is akin to Level 5, but with self-imposed constraints on the scope of the AI driving capabilities.

For self-driving cars less than a Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus mainly herein on the true Level 4 and Level 5, but also begin by studying the ramifications related to Level 3.

Here are the usual steps involved in the AI driving task (a code sketch follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
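
To make the flow concrete, below is a deliberately minimal sketch of these five stages in Python. Every function name and data shape here is hypothetical, invented purely for illustration; a production self-driving pipeline is vastly more elaborate.

```python
# Illustrative sketch of the five-stage AI driving cycle listed above.
# All names and data shapes are hypothetical, for exposition only.

def interpret_sensors(raw_feeds):
    """Stage 1: turn raw sensor feeds into per-sensor detections."""
    return [{"sensor": name, "objects": data} for name, data in raw_feeds.items()]

def fuse(detections):
    """Stage 2: merge per-sensor detections into a single object list."""
    merged = []
    for d in detections:
        merged.extend(d["objects"])
    return merged

def update_world_model(world, fused_objects):
    """Stage 3: refresh the virtual world model with the fused objects."""
    world["objects"] = fused_objects
    return world

def plan_actions(world):
    """Stage 4: choose a driving maneuver given the modeled world."""
    return "brake" if any(o.get("in_path") for o in world["objects"]) else "cruise"

def issue_commands(plan):
    """Stage 5: translate the plan into car controls commands."""
    print(f"issuing controls for maneuver: {plan}")

# One pass through the cycle:
world_model = {"objects": []}
feeds = {"camera": [{"type": "dog", "in_path": True}], "radar": []}
issue_commands(plan_actions(update_world_model(world_model, fuse(interpret_sensors(feeds)))))
```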

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. Some pundits of AI self-driving cars continually refer to a utopian world in which there are only AI self-driving cars on public roads. Currently there are more than 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of emergency-only systems, there are various approaches that automakers and tech firms are taking toward the design and development of AI for self-driving cars, and one such approach involves an emergency-only AI paradigm.

As already mentioned, the most vaunted and desired approach currently consists of having the AI always be driving the car with no human driving involved at all, which is the intent of a true Level 4 and Level 5. This is much harder to pull off than it might seem.

I’ve repeatedly described the true Level 5 as a kind of moonshot.

It’s going to take a lot longer to get there than most people seem to think it will.

At the levels less than Level 4, there is a co-sharing of the driving task. We can step back for a moment and ask an intriguing question about this co-sharing, namely: what should the split be between when the AI does the driving and when the human does?

The Level 2 split of human versus AI driving is that the human tends to do the bulk of the driving and the AI tends to do relatively little of the driving task.

For Level 3, the split tends toward having the AI do more of the driving and the human do less of it.

Suppose we somewhat turned this split on its head, so to speak.

We might design the AI to be an emergency-only kind of mechanism.

Rather than having the AI drive the car to progressively greater degrees at Level 2 and Level 3, we might instead opt to have the human be the mainstay driver.

The AI would be used nearly solely for emergency-only purposes.

Emergency-Only AI Driving For Level 3

Let’s say I am driving in a Level 3 self-driving car. I would normally be expecting the AI to be the primary driver and I am there in case the AI needs me to take over.

I’ve written and spoken many times about the dangers of this co-sharing arrangement. As a human, I might become complacent and not be ready to take over the driving task when the moment arises for me to do so. Maybe I was playing a video game on my smartphone, maybe I was reading a book that’s in my lap, and other kinds of distractions might occur.

For the concerns about Level 3 AI self-driving cars, see my article: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For driving controls aspects of self-driving cars, see my article: https://aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For safety as a crucial aspect of self-driving cars, see my article: https://aitrends.com/ai-insider/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For my article about the moonshot of self-driving cars achievement, see: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

Instead of having the AI do most of the driving while in a Level 3, suppose we instead said that the human is the primary driver.

The AI is relegated to being an emergency-only driver.

Here’s how that might work.

I’m driving my Level 3 car and the AI is quietly observing what is going on. The AI is using all of its sensors to continuously detect and interpret the roadway situation. The sensor fusion is occurring. The virtual world model is being updated. The AI action planning is taking place. The only thing not happening is the issuance of the car controls commands.

In a sense, the AI is for all practical purposes “driving” the car without actually taking over the driving controls. This might be likened to when I was teaching my children how to drive a car. They would sit in the driver’s seat. I had no ready means to access the driver controls. Nonetheless, in my head, I was acting as though I was driving the car. I did this to be able to comprehend what my teenage novice driver children were doing and so that I could also come to their aid when needed.
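
One way to picture this “shadow driving” arrangement in code: the full cycle runs on every pass, but the final command-issuance stage is gated off while the human remains the primary driver. A minimal hypothetical sketch:

```python
# Hypothetical sketch of "shadow mode": every stage of the driving cycle
# runs, but command issuance is suppressed while the human is the driver.

def compute_plan(feeds):
    # Stand-in for stages 1 through 4: sensing, fusion, world modeling, planning.
    obstacle = any(obj.get("in_path") for objs in feeds.values() for obj in objs)
    return "brake" if obstacle else "cruise"

def run_cycle(feeds, ai_is_active_driver):
    plan = compute_plan(feeds)
    if ai_is_active_driver:
        print(f"issuing controls: {plan}")          # AI actually drives
    else:
        print(f"[shadow] would have done: {plan}")  # AI merely rehearses

run_cycle({"camera": [{"type": "debris", "in_path": True}]}, ai_is_active_driver=False)
```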

Okay, so the Level 3 car is being driven by the human and all of a sudden another car veers into the lane and threatens to crash into the Level 3 car. We now have a circumstance wherein the human driver of the Level 3 car should presumably take evasive action.

Does the human notice that the other car is veering dangerously?

Will the human take quick enough action to avoid the crash?

Suppose that the AI was able to ascertain that the veering car is going to crash with the Level 3 car.

Similar to a hotel’s fire protection system, the AI can potentially alert the human driver to take action (akin to a fire alarm belting out its bell).

Or, the AI might take more overt action and momentarily take over the driving controls to maneuver the car away from the danger (this would be somewhat equivalent to the fire sprinklers getting invoked in a hotel).
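
Those two responses parallel the fire alarm versus the fire sprinkler. As a rough sketch, assuming a hypothetical time-to-collision estimate and an invented reaction-time threshold, the escalation choice might look like this:

```python
# Hypothetical escalation policy: alert the human while there is still time
# to react, take over the controls when there is not. The threshold value
# is invented purely for illustration.

HUMAN_REACTION_BUDGET_S = 1.5  # assumed time a typical human needs to respond

def respond_to_threat(time_to_collision_s):
    if time_to_collision_s > HUMAN_REACTION_BUDGET_S:
        return "ALERT"      # fire-alarm style: warn and let the human act
    return "TAKE_OVER"      # fire-sprinkler style: the AI acts automatically

print(respond_to_threat(3.0))  # ALERT
print(respond_to_threat(0.8))  # TAKE_OVER
```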

If the AI were devised to work in an emergency-only mode, some would assert, it would relieve the pressure on AI developers to devise an all-encompassing AI system that can handle any and all kinds of driving situations.

Instead, the AI developers could focus on the emergency-only kinds of situations.

This also would presumably shift attention toward the AI being a kind of hero, stepping into the driving when things are dire and saving the day.

Hey, someone might say, the other day the AI of my self-driving car kept me from hitting a dog that ran unexpectedly into the street.

Another person might say that the AI saved them from ramming into a car that had come to a sudden halt on the freeway just ahead of their car (and they sheepishly admit they had turned to look at a roadside billboard and by the time they turned their head back the halted car ahead was a surprise).

We are all already somewhat familiar with automated driving assistance systems that can do something similar.

Many cars today have a simplistic detection device that automatically applies the brakes if your car is about to hit something ahead of it. These tend to be extremely simplistic in how they work; it is almost a knee-jerk reaction kind of system. There’s not much “smarts” involved. You might liken these low-level automated systems to the autonomic nervous system of a human: they react instinctively and without much direct thinking involved (when my hand is near a hot stove, presumably my instincts kick in and I withdraw my hand without a lot of contemplative effort).
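
For illustration, here is a minimal sketch of such a reflex, using the common time-to-collision idea (range divided by closing speed); the threshold value is invented for the example:

```python
# Minimal sketch of the "knee-jerk" braking reflex described above: brake
# when the time-to-collision drops below a threshold. Numbers are
# illustrative, not from any production system.

def should_auto_brake(range_m, closing_speed_mps, threshold_s=2.0):
    if closing_speed_mps <= 0:  # not closing on the object at all
        return False
    time_to_collision_s = range_m / closing_speed_mps
    return time_to_collision_s < threshold_s

print(should_auto_brake(range_m=30.0, closing_speed_mps=20.0))  # True  (1.5 s out)
print(should_auto_brake(range_m=80.0, closing_speed_mps=20.0))  # False (4.0 s out)
```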

These behind-the-scenes automated driving assistance systems would be quietly replaced with a more sophisticated AI-based system that is more robust and pays attention to the overall driving task.

The paradigm likens the emergency-only AI to having a second human driver in the car, one who is there only for emergency driving purposes.

The rest of the time, the human is the primary driver of the car.

As mentioned, this might suggest that the AI does not need to be full-bodied and does not need to be able to drive the car all of the time; instead it can focus on being able to drive when emergency situations arise. Some would assert that this is a bit of a paradox.

If the AI is not versed enough to be able to drive at any time, how will it be able to discern when an emergency is arising that requires the AI to step into the driving task?

In other words, some would say that until you have a fully capable driving AI, you would be risking things unduly by having the AI be used only in emergencies.

Unless you stipulate that the AI is used exclusively in emergencies, you are otherwise suggesting that the AI is able to monitor the driving task throughout and is ready at any moment to do the driving; but if that’s the case, why not let the AI do the driving as the primary driver anyway?

Defining Emergency Driving Situations

This also brings up the notion of defining the nature of an emergency driving situation.

The obvious example of an emergency would be a dog that has darted into the street directly in front of the car, where the speed, direction, and timing of the car are such that it will mathematically intersect with the dog if some kind of driving action is not immediately taken to avoid striking the animal. But this takes us back to the kind of simplistic automated driving assistance systems that are not especially imbued with AI anyway.

If we’re going to consider using AI for emergency-only situations, presumably the kinds of emergency situations will range from rather obvious ones that a knee-jerk reactive driving system could handle all the way up to much more subtle and harder-to-predict emergencies.

If the AI is going to be continuously monitoring the driving situation, we’d want it to act like a true secondary driver and be capable of more sophisticated emergency situation detection.

Consider an example. You are on a mountain road that curves back and forth.

The slow lane has large lumbering trucks in it. Your car is in the fast lane adjacent to the slow lane. The AI has been observing the slow lane and detected a truck up ahead that has periodically swerved into the fast lane when on a curve. The path of your car is such that in about 10 seconds it will be passing the truck while on a curve. At this moment there is no apparent danger. But it can be predicted with sufficient probability that in 10 seconds the truck will swerve into your car’s lane just as it tries to pass the truck on the curve.

Notice that in this example there is not a simple act-react cycle involved.

Most automated driving assist systems would react only once the car is actually passing the truck, and only if the truck happened to veer into the path of the car during the passing maneuver. Instead, in my example, the AI has anticipated a potential future emergency and will opt to take action beforehand, either preventing the danger or at least being better prepared to cope with it if it occurs.
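
As a hedged sketch of the difference, the anticipatory logic might estimate the truck’s swerve tendency from its observed behavior and decide ahead of time whether to delay the pass. All numbers and names here are hypothetical:

```python
# Hypothetical sketch of anticipation rather than reaction: estimate the
# chance the observed truck swerves on the next curve from its history,
# and hold back if the projected pass coincides with a curve.

def swerve_probability(curve_passes_observed, swerves_observed):
    if curve_passes_observed == 0:
        return 0.0
    return swerves_observed / curve_passes_observed

def plan_pass(pass_is_on_curve, truck_history, risk_threshold=0.5):
    p = swerve_probability(*truck_history)
    if pass_is_on_curve and p >= risk_threshold:
        return "hold back: delay the pass until beyond the curve"
    return "proceed with the pass"

# The truck was seen swerving on 3 of the last 4 curves; our projected pass
# in roughly 10 seconds lands on a curve.
print(plan_pass(pass_is_on_curve=True, truck_history=(4, 3)))
```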

The emergency-only AI would presumably be boosted beyond the nature of a traditional automated driving assist system, and would likely be augmented by the use of Machine Learning (ML).

How did the AI even realize that observing the trucks in the slow lane was worthwhile to do?

An AI driving system that has learned over time would have the “realization” that trucks often tend to swerve out of their lanes while on curving roads.

This then becomes part-and-parcel of the “awareness” that the AI will have when looking for potential emergency driving situations.
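
One simple way such a pattern might be distilled from logged driving data, sketched hypothetically below, is to tally lane departures by road geometry and keep the learned rates as a prior for hazard scanning (real ML approaches would be far more sophisticated):

```python
# Hypothetical sketch of how such a "realization" might be learned: count
# how often trucks departed their lane, split by road geometry, and keep
# the learned rates as a prior for spotting potential emergencies.

from collections import defaultdict

def learn_swerve_prior(logged_events):
    counts = defaultdict(lambda: [0, 0])  # geometry -> [lane departures, observations]
    for geometry, departed_lane in logged_events:
        counts[geometry][0] += int(departed_lane)
        counts[geometry][1] += 1
    return {g: departures / total for g, (departures, total) in counts.items()}

log = [("curve", True), ("curve", False), ("curve", True),
       ("straight", False), ("straight", False)]
print(learn_swerve_prior(log))  # curves show a far higher departure rate
```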

For my article about Machine Learning core aspects, see: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

For ensemble Machine Learning, see my article: https://aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For federated Machine Learning, see my article: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

For the importance of explanation-based Machine Learning, see my article: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

True Autonomous Cars And Emergency-Only AI

Let’s now revisit my earlier comments about the nature of emergency-only systems and my illustrative examples of the fire alarm and fire protection systems.

I present to you those earlier points and then recast them into the context of AI self-driving cars:

  • A passive system like the fire alarm pull won’t automatically go off and instead the human needs to overtly activate it

Would a driving emergency-only AI system be set up for only a passive mode, meaning that the human driver would need to invoke the AI system? We might have a button that the human could press to invoke the AI emergency capability, or the human might have a “safe word” that they utter to ask the AI to step into the picture.

Downsides with this include that the human might not realize they need, or even could use, the AI emergency option. Or the human might realize it but enact the AI emergency mode only once it is too late for the AI to do anything to avert the incident.

We would also need a means of letting the human know that the AI has “accepted” the request to go into emergency mode; otherwise the human might be unsure whether the AI got the signal and whether the AI is actually stepping into the driving.

There is also the matter of returning the driving back to the human once the emergency action by the AI has been undertaken. How would the AI be able to “know” that the human is prepared to resume driving the car? Would it ask the human driver, or just assume that if the human is still at the driver controls it is okay for the AI to disengage?
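
These handshake concerns suggest a small state machine. Below is a hypothetical sketch of the passive “fire-alarm-pull” protocol: the human invokes the AI, the AI explicitly acknowledges, and control returns only once the human confirms readiness:

```python
# Hypothetical sketch of a passive emergency-AI handshake.
# States: STANDBY -> ENGAGED -> AWAITING_HANDBACK -> STANDBY.

class PassiveEmergencyAI:
    def __init__(self):
        self.state = "STANDBY"

    def human_invokes(self):
        # Triggered by, say, a dashboard button press or a spoken safe word.
        if self.state == "STANDBY":
            self.state = "ENGAGED"
            print("AI: engagement acknowledged, taking the controls")

    def emergency_resolved(self):
        if self.state == "ENGAGED":
            self.state = "AWAITING_HANDBACK"
            print("AI: emergency handled, ready to return control")

    def human_confirms_ready(self):
        # Control returns only on the human's explicit confirmation.
        if self.state == "AWAITING_HANDBACK":
            self.state = "STANDBY"
            print("AI: control returned to the human driver")

ai = PassiveEmergencyAI()
ai.human_invokes()
ai.emergency_resolved()
ai.human_confirms_ready()
```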

  • For a passive system, the human needs to be aware of where and how to activate it, else the passive system otherwise does little good to help save the human

As mentioned, a human driver might forget that the AI is standing ready to take over. Plus, when an emergency arises, the human might be so startled and mentally consumed that they lack the clarity of mind to turn over the driving to the AI.

  • An active system like the smoke alarm is constantly detecting the environment and ready to go off as soon as the conditions occur that will activate the alarm

With this approach, the AI is ready to step into the driving task and will do so whenever it deems necessary. This can be handy since the human driver might not realize an emergency is arising, or might realize it but not invoke the AI to help, or might be incapacitated in some manner, wanting to invoke the AI but unable to do so.

The downside here is that the AI might shock or startle the human driver by summarily taking over the driving, catching the human driver off-guard. If so, the human driver might try to take some dramatic action that counters the actions of the AI.

We might also end up with the human driver on edge that at any moment the AI is going to take over. This might cause the human driver to grow suspicious of the AI.

It could be that the AI only alerts the human driver and lets them decide what to do. Or it could be that the AI grabs control of the car.

  • Some system elements are intended to simply alert the human and it is then up to the human to take some form of action

In this case, if the AI is acting as an alert, the question arises as to how best to communicate the alert. If the AI rings a bell or turns on a red light, the human driver won’t especially know what the declared emergency is about. Thus, the human driver might react to the “wrong” emergency in terms of what the human perceives versus what the AI detected.

If the AI tries to explain the nature of the emergency, this can use up precious time. When an emergency is arising, the odds are that there is little available time to try and explain what to do.

I am reminded that at one point my teenage novice driver children were about to potentially hit a bicyclist and I was tongue-tied trying to explain the situation. I could just say “swerve to your right!” but this offered no explanation for why to do so. If I tried to say “there is a bicyclist to your left, watch out!” this provided some explanation, leaving the desired action up to the driver. If I had said “there is a bicyclist to your left, swerve to your right!” it could be that the time taken to say the first part, depicting the situation, used up the time available to actually make the swerving action that would save the bike rider.

  • Some system elements such as a fire sprinkler are intended to automatically engage to save human lives and the humans being saved do not need to directly activate the life-saving effort

This approach involves the AI taking over the driving control, which as mentioned has both pluses and minuses.

  • These emergency-only systems are intended to be used only when absolutely necessary and otherwise are silent, being somewhat out-of-sight and out-of-mind of most humans

Emergency-only AI driving systems are intended for use only when an emergency driving situation arises. This raises the question, though, of what is considered an emergency versus not an emergency.

Also, suppose a human believes an emergency is arising but the AI has not detected it, or maybe the AI detected it but determined that a genuine emergency is not brewing. This brings up the usual hand-off issues that arise with any kind of co-sharing of the driving task.

  • Such systems are not error-free in that they can at times falsely activate even when there isn’t any pending emergency involved

Some AI developers seem to think that their AI driving system is going to work perfectly and do so all the time. This makes little sense. There is a good likelihood that the AI will have hidden bugs. There is a likelihood that the AI as devised will potentially make a wrong move. There is a chance that the AI hardware might glitch. And so on.

If an emergency-only AI system engages on a false positive, it will likely undermine the human driver’s confidence that the AI is worthy of being engaged at all. A false negative is worrisome too: if the AI does not take action when needed, the human would assert that they relied upon the AI to deal with the emergency, and it failed in its duty to perform.
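
The trade-off can be seen in even a toy sketch: a single invented confidence threshold decides when the emergency AI engages, and moving it trades false positives against false negatives:

```python
# Hypothetical sketch of the false-positive / false-negative trade-off.
# Raising the threshold cuts spurious takeovers but risks missing real
# emergencies; lowering it does the opposite. Values are illustrative.

def decide(confidence, threshold):
    return "ENGAGE" if confidence >= threshold else "STAY_SILENT"

readings = [0.35, 0.55, 0.92]  # detector confidence that an emergency exists
for threshold in (0.5, 0.9):
    print(threshold, [decide(c, threshold) for c in readings])
# 0.5 -> engages on 0.55 and 0.92 (more false alarms, fewer misses)
# 0.9 -> engages only on 0.92    (fewer false alarms, more misses)
```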

  • Humans can undermine these emergency-only systems by not abiding by them or taking other actions that reduce the effectiveness of the system

With the co-sharing of the driving task, there is an inherent concern that you have two drivers trying to each drive the car as they see fit.

Imagine if, when my children were learning to drive, I had had a second set of driving controls. The odds are that I would have kept my foot on the brake nearly all of the time and kept a steady grip on the steering wheel. This, though, would have undermined their driving effort and created confusion as to which of us was really driving the car. The same can be said of the AI emergency-only driving versus the human driving.

  • Humans will at times distrust an emergency-only system and believe that the system is falsely reporting an emergency and therefore not take prescribed action

Would we lock out the driving controls for the human whenever the AI takes over control due to an emergency the AI has detected? This would prevent the human driver from fighting with the AI over what driving action to take. But the human driver is likely to have qualms about this. Suppose the AI has taken over when there wasn’t a genuine emergency.

We might assume or hope that the AI, when acting on a false alarm (a false positive), would not get the car into harm’s way. This, though, is not necessarily the case.

Suppose the AI perceived that the car was potentially going to hit a bicyclist, and so the AI swerved the car to avoid the bike rider. The swerve unnerved the driver in the next lane, who reacted by slamming on the brakes. The car behind them then slammed into the braking car. All of this was precipitated by the AI opting to avoid hitting the bicyclist.

Imagine though that the bicyclist took a quick turn away from the car and thus there really wasn’t an emergency per se.

For my article about ghosts in AI self-driving car systems, see: https://aitrends.com/selfdrivingcars/ghosts-in-ai-self-driving-cars/

For the debugging of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/debugging-of-ai-self-driving-cars/

For the egocentric views of some AI developers, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For the burnout of AI developers, see my article: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

Conclusion

There are going to be AI systems that are devised to work only on an emergency basis.

Astute ones will be designed to silently detect what is going on and be ready to step into a task when needed.

We’ll need, though, to make sure that humans know when and how the AI is going to take action. Those humans too will be imperfect, potentially forgetting that the AI is there, or they might even end up fighting with the AI if they believe the AI is wrong to take action or otherwise have qualms about it.

We usually think of an emergency as a situation involving the need for an urgent intervention in order to avoid or mitigate the chances of injury to life, health, or property. There is a lot of judgment that often comes to play when declaring that a situation is an emergency. When an automated AI system tries to help out, clarity will be needed as to what constitutes an emergency and what does not.

The Hippocratic tradition enjoins primum non nocere, meaning first do no harm.

An emergency-only AI system for a self-driving car is going to have a duty to abide by that principle, which I assure you will be a heavy burden to bear.

The emergency-only AI approach is not as easy a path as some might at first glance assume. Indeed, some might consider it insufficient, while for others it is a step forward toward the goal of a fully autonomous AI self-driving car.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]