Autonomic Nervous System and AI Real-Time Systems: As Applied to Autonomous Cars


By Lance Eliot, the AI Trends Insider

The autonomic nervous system of humans is sometimes also referred to as the vegetative nervous system or the visceral nervous system (the sympathetic nervous system, often mentioned in the same breath, is actually one of its branches). Whatever you call it, its purpose is to regulate the functions of the human body, and it generally does so automatically. In that sense, it operates seemingly unconsciously.

Your brain doesn’t apparently need to do much toward making sure that your heart is pumping blood. Cardiac regulation is mainly the role of the autonomic nervous system, and respiratory functioning is also usually handled by it. Overall, the autonomic nervous system (ANS) is often considered the true core of our fight-or-flight response mechanism. It handles our key physiological responses, and most of the time it does so involuntarily. It just happens, and we live to tell the tale.

The other day, while studying some AI code that a colleague of mine and I had jointly written in Python and TensorFlow, he sneezed. This wouldn’t be particularly noteworthy except for the fact that he sneezed again, and again, and again. He somehow got himself into a sneezing fit. Was he allergic to Python? Maybe to TensorFlow? Maybe to AI? All kidding aside, it was quite a moment that caught both him and me unawares, and he seemed to just keep sneezing. The first sneezes were almost humorous, but when he kept going it became more somber and we both wondered if somehow there was something amiss. Fortunately, it subsided, disappearing almost as mysteriously as it had originally appeared.

It might have been that his autonomic nervous system was reacting to an environmental condition, perhaps some allergen in the air, or maybe the coffee he was drinking had some chemical that sparked the sneezing. Thus, one theory would be that it was entirely an involuntary act, a foundational human response that he had no conscious role in initiating or controlling.

Or, it could be that he was mentally reacting to our conversation and his mind was jogging his sneezing mechanism as a type of reaction or communication. In the old debate of mind over matter, we’ll likely never know whether it was a straightforward autonomic response or a mentally generated one.

Dual Brains Of Sorts And How To Coordinate Them

Some liken the autonomic nervous system to a “second brain” of the human body. It’s as though we have our normal brain, which we believe does our thinking for us, and then a second kind-of-brain that is the “body controller,” aka the autonomic nervous system, which tends to do whatever it wants to do. At times the two brains (if you believe in this notion) are aligned, and at other times they might be working seemingly separately. Your real mind might be saying don’t sneeze, and the second “mind” might be saying sneeze and sneeze some more. The real mind might not be able to do much to stop the second mind.

Now, one could suppose that the real mind could try to trick or control the second mind, in some cases and in some ways. While my colleague was sneezing, he grabbed a glass of water and tried to drink it. He later explained that he thought perhaps he could suppress or even stop the sneezing by guzzling down some water. Turns out this didn’t seem to make any difference and he kept sneezing. But it is perhaps illustrative that his real brain had mentally come up with a quick plan of drinking water, in hopes of overcoming the “second brain” of the sneezing fit presumably being fueled by his autonomic nervous system.

We might consider Robert Louis Stevenson’s famous Dr. Jekyll and Mr. Hyde as the same kind of duality of having two brains in one body. The real brain provides our presumably conscious kinds of thinking, while the “second brain” of the autonomic nervous system deals with our visceral, vegetative kinds of responses. Of course, Henry Jekyll was meant to represent good and Edward Hyde evil; in the case of the human body and its “two brains” as alluded to herein, we aren’t saying one is good and the other evil. They both just are.

They each perform their respective function. At times they might be well aligned, and at other times they might be misaligned, including perhaps wanting the opposite of each other. For whatever reason, the autonomic nervous system might be saying sneeze, darn you, sneeze, while the real brain is saying let’s put an end to this disruptive and seemingly useless and unfitting sneezing fit.

AI Self-Driving Cars As Case Of Dual Brains Element

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute we are developing AI systems for self-driving cars. One of the crucial open questions right now involves the role of so-called autonomic subsystems versus AI-led subsystems of an AI self-driving car.

Allow me to elaborate.

Many conventional cars today have an automatic braking system included by the car manufacturer. Often it is an optional feature; if you opt to pay for it, the feature is added to your car. Don’t confuse this with the anti-lock braking system (ABS), which has been around for many years and comes pretty much as standard on most cars today. Instead, I’m referring to what is often called the Advanced Emergency Braking (AEB) system, at times also referred to as Automatic Emergency Braking or Autonomous Emergency Braking. Either way you phrase it, let’s anoint it as the AEB herein.

The National Highway Traffic Safety Administration (NHTSA) pushed for the full adoption of AEB in the United States, and by-and-large the auto makers agreed to do so by the year 2022. The notion is that it is an autonomic subsystem of your car, meaning that it acts independently of the human driver and tries to figure out when to apply the brakes, doing so only when it detects what it considers an “emergency” situation, and doing so in the somewhat blind hope that applying the brakes is the right thing to do in the circumstance.

Notice that the way in which I’ve phrased the nature of the AEB is by carefully pointing out that it is intended to do more good than harm, though you need to keep in mind that it is a relatively simplistic subfunction and might be right or wrong when it opts to engage. If it detects an object in front of your car, and if your car seemingly is going to hit that object, the AEB deduces mathematically that it should go ahead and apply the brakes on your behalf as the human driver.
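To make that “deduces mathematically” notion concrete, here is a minimal sketch in Python of the kind of time-to-collision calculation an AEB might rely upon; the function names and the 1.5-second threshold are my own illustrative assumptions, not any automaker’s actual formula.

```python
# A minimal sketch of the kind of time-to-collision (TTC) math an AEB might
# use. The function names and the threshold value are illustrative
# assumptions, not any automaker's actual implementation.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0.0:
        return float("inf")  # not closing on the object, so no impending collision
    return distance_m / closing_speed_mps

def aeb_should_brake(distance_m: float, closing_speed_mps: float,
                     ttc_threshold_s: float = 1.5) -> bool:
    """Treat a projected TTC below the threshold as an 'emergency' and brake."""
    return time_to_collision(distance_m, closing_speed_mps) < ttc_threshold_s

# Example: an object 12 meters ahead, closing at 10 m/s, yields a TTC of 1.2 s,
# which is under the 1.5 s threshold, so this simplistic AEB would brake.
print(aeb_should_brake(12.0, 10.0))  # True
```

The simplicity of the calculation is the whole point: the subsystem has no broader understanding of the scene, only a projected time until impact and a threshold.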

You generally don’t have much say about this. The AEB generally does its detection and calculations in split seconds and then activates the brakes. You might liken this to the human autonomic nervous system and consider it a reflexive or autonomic act of the car. In terms of the earlier example about sneezing, the autonomic nervous system of my colleague presumably invoked his sneezing fit, and his real brain didn’t have much to do with it. In the case of the AEB, you, the human driver, with your real brain, generally don’t have much sway over its activation, and the AEB will do what it needs to do.

In some cars, you as the human driver can choose to disengage or deactivate the AEB. This raises all sorts of questions, though. Why would you deactivate the AEB? In theory, if your car has AEB, you would want it always on, always ready, always available to automatically apply the brakes in an effort to save your life and possibly the lives of others. Some auto makers prohibit the car owner or consumer from being able to turn off the AEB. It is considered essential and not to be toyed with. But there are some car owners or consumers who believe they should be able to decide for themselves whether or not to have the AEB activated. Freedom of choice is their mantra.

Suppose a conventional car equipped with AEB comes upon another car that is stopped in the roadway, and the AEB detects the stopped object, applies the brakes, and manages to stop prior to hitting the other car. The AEB is the hero! The human driver for whatever reason wasn’t noticing the stopped car or froze up and failed to hit the brakes, and so the AEB stepped in like Superman and applied the brakes.

Imagine if the AEB had not been engaged. As far as we can discern, the moving car would have plowed into the stopped car, possibly causing injury or death. If you were in the car that got rear-ended, you’d want to know why the AEB had been disengaged on the car that hit you. It would seem irresponsible that the driver had deactivated the AEB, a mechanism that could have possibly saved lives and prevented injuries. This kind of circumstance is exactly why some of the auto makers make the AEB unable to be deactivated by the consumer (often, it can instead be deactivated by a trained car mechanic or by the auto maker).

Besides the ability to possibly disengage the AEB beforehand, there is also often a feature of the AEB that allows for a type of “human action” default override. For example, some AEB subsystems will detect whether the driver of the car is accelerating, and if so the AEB will not apply the brakes, even if the AEB has calculated that there seems to be an emergency and that applying the brakes seems like the better choice. This capability is provided under the assumption that if the human driver is accelerating, perhaps the human has determined that the best course of action involves speeding up to avoid a crash. In that case, the AEB quietly defers to the choice of the human driver.
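Here is a hedged sketch of that “human action” override ordering; the pedal threshold, names, and return values are hypothetical illustrations of the decision logic described above, not a real product’s design.

```python
# Sketch of the "human action" override: if the driver is actively
# accelerating, this hypothetical AEB defers even when its own math says
# there is an emergency. Threshold values and names are assumptions.

def aeb_decision(ttc_s: float, accelerator_pedal_pct: float,
                 ttc_threshold_s: float = 1.5,
                 pedal_override_pct: float = 20.0) -> str:
    emergency = ttc_s < ttc_threshold_s
    driver_accelerating = accelerator_pedal_pct > pedal_override_pct
    if emergency and driver_accelerating:
        return "defer_to_driver"   # assume the human is trying to speed out of trouble
    if emergency:
        return "apply_brakes"
    return "no_action"

print(aeb_decision(ttc_s=1.2, accelerator_pedal_pct=60.0))  # defer_to_driver
print(aeb_decision(ttc_s=1.2, accelerator_pedal_pct=0.0))   # apply_brakes
```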

This brings up the duality of the two brains. You have a human driver who has all the intelligence of a human, and therefore we might assume that for driving the car they know best. We have a “dumb” advanced emergency braking system that relies upon very simplistic mathematical formulas and the sensors of the car to try and figure out whether to apply the brakes in what appear to be emergency situations.

Million Dollar Question About AEB

Should the AEB always go ahead and apply the brakes when it ascertains what it believes to be an emergency situation, or should we allow that a human might be more aware of what’s really happening and therefore in some situations defer to the human?

That’s the million dollar question. Under what circumstances should the AEB apply the brakes? You might say that the AEB should just ask the human driver: hey, you, should I go ahead and apply the brakes right now? But that’s not very practical. By definition, the AEB is generally going to undertake braking at the last moment, when just split seconds are left, and the time it would take to ask a human whether to go ahead and apply the brakes would defeat the purpose of the AEB. By the time it asked and the human responded, the odds are that the car would have already smashed into whatever the AEB was trying to avoid.
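Some rough, assumed numbers make the timing point plain; the speeds and latencies below are illustrative figures, not measurements.

```python
# Back-of-the-envelope arithmetic on why pausing to ask the driver defeats
# the purpose of the AEB. Speeds and latencies are illustrative assumptions.

speed_mps = 29.0               # roughly 65 mph
ask_and_answer_s = 1.5         # prompt the driver and wait for a response
aeb_reaction_s = 0.05          # assumed automated detection-to-braking latency

print(speed_mps * ask_and_answer_s)  # ~43.5 meters traveled while waiting
print(speed_mps * aeb_reaction_s)    # ~1.45 meters traveled before braking begins
```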

We’re kind of back to the predicament about whether to allow a human to deactivate the AEB. Some say that the AEB should be sacrosanct. It should always be on. It should always proceed to apply the brakes when it ascertains that braking is warranted. Period. No further discussion or debate. There are those that point out that the AEB is a simplistic function and might or might not be right in what it chooses to do. For them, either the AEB should be allowed to be deactivated by a human beforehand, prior to an emergency, or during an emergency the acts of the human driver should determine whether or not the AEB acts; for example, if the human driver is accelerating hard, it implies the human is overtly trying to act, and the AEB should not mess with the human driver.

Indeed, this brings up another salient point. Suppose the human driver was rapidly accelerating and genuinely believed that by accelerating they might get themselves out of a jam and avoid crashing. Meanwhile, suppose the AEB was blind to whatever the human driver was doing and figured that the acts of the human driver made no difference whatsoever. All of a sudden, the AEB applies the brakes. The human driver is confused and confounded since they are trying to do the direct opposite. If the AEB and the human driver are at odds, one can assume that the end result will be worse, namely that the car won’t brake in time, nor will it accelerate in time to avoid the incident. Boom. Crash.

The same issue can confront AI self-driving cars.

I know it might seem surprising to consider that the same predicament can face AI self-driving cars. You would likely be under the impression that the AI of a self-driving car would determine all the actions of the self-driving car. But, this is not necessarily the case.

Many of the auto makers and tech firms are taking “conventional cars” and adding AI self-driving car capabilities into those cars. Thus, it is not a from-scratch, bottom-up redesign of a car, but instead the morphing of a somewhat conventional car into an AI self-driving car. As such, there are a wide variety of elements of a conventional car that are still embodied in the arising AI self-driving car.

For my article about kits to convert cars into AI self-driving cars, see: https://aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/

For my framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For those that believe we ought to start-over about cars to make AI self-driving cars, see: https://aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

Those Two Brains Arise And Possible Conflict

In essence, once again we have two brains in a car. For a conventional car, it’s the human driver and the AEB. For a true AI self-driving car, it is the AI and the AEB. Well, actually, there are other semi-autonomous subfunctions of a conventional car that also relate to this whole notion of who’s in charge of the driving, but the AEB is the most prominent and the focus herein. You can apply the same principles underlying the AEB matter to the other kinds of rudimentary autonomous functions on conventional cars.

Just as the human driver is the “real brain” and the AEB is the (shall we say) tiny brain, in the same manner we might say that the AI of the self-driving car is the “real brain” and the AEB is the tiny brain. I want to be careful, though, about somehow implying or suggesting that the AI is the equivalent of the human brain of a human driver; it is not. Therefore, when I refer to the word “brain” as it relates to the AI, I am only using the word in a loose sense and not intending to suggest it is equivalent.

For an AI self-driving car, you’ve got the dilemma of having the AI that is supposed to be doing the driving, and yet there’s also an autonomic nervous system that consists of the AEB (and other subsystems). Which of them is in charge?

You might contend that the AI should be in charge. As such, you would presumably deactivate the AEB beforehand, prior to the AI driving the self-driving car. Thus, there’s no need for the AI to be concerned about the AEB or to fend off any efforts by the AEB that might run counter to whatever the AI is trying to do while driving the car. Matter settled.

But, not so fast! We are still faced with the same concerns as we had with the human driver who deactivates the AEB beforehand. Suppose the AI was faced with a situation that could have been solved via the use of the AEB, but because the AEB was deactivated the AI instead took some action that then led to a crash.

If this seems theoretical, allow me to point out that this very same question arose with the Uber crash in Arizona that killed a pedestrian crossing the street. The Uber self-driving car had the AEB deactivated. The AI that was driving the Uber self-driving car hit the pedestrian. If the AEB had been activated, the question remains whether the Uber self-driving car would have struck the pedestrian at all, or whether, if it had done so nonetheless, it might at least have been going at a slower speed by the time of impact due to the AEB’s braking.

See my analysis of the Uber incident: https://aitrends.com/selfdrivingcars/initial-forensic-analysis/

See my follow-on analysis of the Uber incident: https://aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

For my article about the cognitive timing of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

Does Deactivation Make Sense

We’ve got the issue of whether to deactivate the AEB beforehand, and also the other question of what to do if the AEB is activated and it tries to override the AI of the self-driving car. Uber formally came out after the Arizona crash and pointed out that they had purposely deactivated the AEB since they believed that the self-driving car might otherwise exhibit erratic driving behavior. This explanation fits with the points herein about the difficulties of driving a car when there are “two brains” involved that are not necessarily working in full alignment.

At industry conferences, when I give presentations about AI self-driving cars, I often get asked why the AI of the self-driving car couldn’t do the same thing that the AEB was doing. We all get the idea that the AEB is different from a human in that it is a piece of automation and it can take over for a human by applying the brakes in a sudden and deep manner. But, since it is a piece of automation, and since the AI is a piece of automation, it seems odd and maybe troubling to not have them fully aligned with each other. They should presumably be one and the same.

As mentioned earlier, the AEB is typically already on a conventional car, and when an AI self-driving car capability is grafted onto a conventional car you then end up with these kinds of disparities. It’s almost like a Frankenstein containing multiple body parts from different sources. It’s not easy to make sure they are all integrated together.

For my article about the Frankenstein concerns of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

For my article about the stealing of secrets about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

For my article about the designs of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

You might rightfully wonder why the AI of the self-driving car doesn’t subsume the AEB function and perform the same tasks as the AEB would. In theory, we don’t need an AEB per se if the AI embodies the same capabilities of an AEB. That’s a good point.

The AI of most self-driving cars today, though, is not as well optimized to react in the same manner and with the same speed as an AEB.

You can liken this to humans, in the duality of our minds and the autonomic nervous system. The autonomic nervous system works very fast and offers speed as an advantage for handling certain circumstances. When you put your hand near a hot stove, your hand recoils nearly instantly at the sensation of the heat. Some would say that this is happening in an autonomic fashion. Rather than your hand relaying to the brain that there is something hot, and your brain then figuring out what to do, including possibly sending a signal to the hand telling it to move away from the heat, the autonomic nervous system just makes it so.
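One way to picture this split is a fast “reflex” path that acts within a tight time budget, without waiting for the slower deliberative planner. Everything in this sketch (the names, the thresholds, the sleep standing in for planning latency) is an assumption for illustration, not a description of any production system.

```python
# A toy sketch of a fast "reflex" path versus a slow deliberative planner,
# mirroring the hot-stove reflex described above. Names and timings are
# illustrative assumptions.

import time
from typing import Optional

def slow_deliberative_plan(scene: dict) -> str:
    time.sleep(0.5)            # stand-in for heavy perception/planning latency
    return "planned_maneuver"

def fast_reflex_check(ttc_s: float) -> Optional[str]:
    # Cheap threshold check intended to finish in a few milliseconds.
    return "emergency_brake" if ttc_s < 1.0 else None

def drive_step(scene: dict, ttc_s: float) -> str:
    reflex = fast_reflex_check(ttc_s)
    if reflex is not None:
        return reflex          # the reflex acts; don't wait for deliberation
    return slow_deliberative_plan(scene)

print(drive_step({}, ttc_s=0.8))  # emergency_brake (reflex path)
print(drive_step({}, ttc_s=5.0))  # planned_maneuver (deliberative path)
```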

One possibility in the Uber incident was that the AI might have taken too long to try and ascertain what action to take. If instead the AEB had been activated, it’s conceivable that the AEB would have acted like the reflexive withdrawal of a hand dangling over a hot stove, namely the AEB would have slammed on the brakes, right or wrong, in an autonomic manner.

In this particular case, we can likely surmise that the AEB would have been doing the right thing, based on what is known to date about the circumstance. But remember that for the AEB to have been activated at the time of the incident it would presumably have been activated all of the time, and so you’d have had other tussles between the AEB and the AI, which might have led to other incidents. We don’t know.

And so this takes us to the gamble that most of the auto makers and tech firms are right now taking. Should they leave the AEB activated on their emerging AI self-driving cars, or should they deactivate it? If they deactivate it, there is the later question, when an incident occurs, of whether or not they were right to have deactivated the AEB. If they don’t deactivate the AEB, it could turn out that there are situations where the AI and the AEB fight each other and the result is an incident that might otherwise have been avoided.

Darned if you do, darned if you don’t.

Integrating The Dual Brains Is Preferred

That being said, there are AI developers who say that we need to better integrate the AI and the AEB. We need to design the AI to take the AEB into account. This might also mean that the AEB should be redesigned in light of the advent of AI self-driving cars. The notion is that the AI and the AEB are woven together, integrated as it were, rather than being two different capabilities that happen to be on the same self-driving car. That makes sense, and it’s the path we’re pursuing.
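To show what “woven together” might look like, here is one possible arbitration pattern, sketched under stated assumptions (the names and the 50-millisecond deadline are mine, not any automaker’s design): the AEB proposes an emergency action, the AI gets a tiny window to concur or veto, and if the AI misses the deadline the reflex acts anyway.

```python
# One hypothetical arbitration pattern for integrating the AEB and the AI,
# rather than letting them fight. Names and the 50 ms deadline are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AebProposal:
    action: str     # e.g., "apply_brakes"
    ttc_s: float    # the time-to-collision estimate behind the proposal

def arbitrate(proposal: AebProposal, ai_verdict: str, ai_latency_s: float,
              deadline_s: float = 0.05) -> str:
    if ai_latency_s > deadline_s:
        return proposal.action       # the AI was too slow; the reflex acts
    if ai_verdict == "veto":
        return "follow_ai_plan"      # the AI has a better maneuver in mind
    return proposal.action           # the AI concurs with emergency braking

print(arbitrate(AebProposal("apply_brakes", 1.1), "veto", ai_latency_s=0.02))    # follow_ai_plan
print(arbitrate(AebProposal("apply_brakes", 1.1), "concur", ai_latency_s=0.30))  # apply_brakes
```

The design intent in such a scheme is that neither “brain” blindly fights the other: the reflex retains its speed advantage, while the deliberative AI can overrule it only when it can do so within the reflex’s time budget.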

You might have seen a viral video from 2015 that showed a Volvo purportedly being demonstrated as to its AEB capability. The short clip of just thirty seconds or so became a worldwide sensation because it showed a Volvo being driven forward while a human stood in front of the car, anticipating that the AEB would hit the brakes prior to the Volvo hitting the human. Instead, the Volvo hit the human. Some wisecracks posted with the video included that the feature should be renamed the Auto Leg Breaker, or that it is the safest car in the world but only if you are sitting inside of the car.

It turns out that after the video rose to attention, it was discovered that the particular Volvo shown in the video did not have the AEB pedestrian-detection feature in it. People who were criticizing Volvo did so falsely, assuming that the feature was in the car and that it was engaged. That’s part of the fake news in the AI self-driving car realm and something I’ve warned about many times.

For more about fake news about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/ai-fake-news-about-self-driving-cars/

Conclusion

The point of that story is that when you get into a car, you often don’t really know what it consists of.

Likewise, when standing outside of a car, you don’t know for sure what’s under the hood. Right now, we have a brewing and bubbling issue of the AI versus the AEB in terms of AI self-driving cars. I can predict that we’ll have another incident involving an AI self-driving car that also had AEB, and for which the question will arise of whether using the AEB would have been a life saver. We’ve got to get the autonomic nervous system and the AI to be better integrated, soon.

And that’s nothing to sneeze at, I assure you.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.