Machine Learning Embodying Fear and AI Autonomous Cars

AI self-driving cars should make use of fear within their action plans; you want the self-driving car to be “fearful” of hitting other cars.

By Lance Eliot, the AI Trends Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Fear is considered one of the fundamental elements of emotion.

It seems as though humans and pretty much all animals are prone to fear.

Fear can be based on a real situation, such as standing in front of a hungry lion, where you are naturally bound to be fearful, or it can be based on a perceived danger that is not directly evident, such as walking down a dark alley and suspecting that something bad might happen to you.

Typically, there is a physical response in a human or animal when experiencing fear.

You have likely been on a roller coaster, and in anticipation of that big drop up ahead your heart rate goes up, your body tenses, and your mind becomes laser-focused, unable to think of anything other than the circumstance you are facing.

Humans have an ability to detect fear in others, including via facial expression analysis (someone’s face gets tense), noticing that a person clenches their teeth, makes fists with their hands, etc. Of course, animals can also detect fear; I’m guessing you’ve had a dog sense your fear, maybe by smelling your perspiration, and either take advantage of your fearful state or, in some instances, even try to reduce it.

Responding to fear can be as simple as the classic fight-or-flight kind of response.

If you fear something, you might decide to stand your ground and fight it. Alternatively, you might instead decide to run from whatever is causing the fear. Regrettably, sometimes while in the grip of fear we make bad choices. It could be that you should have chosen to run away from an angry bear rather than trying to confront it. Or maybe sheltering in place would have been a better choice than trying to outrun an approaching ball of fire.

There are other options beyond just fight-or-flight, including one that can be the worst of them all, freezing up.

Sometimes the fear is so overwhelming that trying to ascertain what to do is beyond our mental capacity at the moment, and thus we become frozen in fear. Though it might be possible that being frozen will work out okay in the given situation, generally some response is more likely to be successful than no response at all.

Fear Plausibility

Another twist to fear is that it can be considered plausible or implausible (some would say valid or invalid).

Last year, there was a Chinese space station that was going to fall to earth and supposedly no one could predict where it would ultimately land.

I had a colleague that told me he was fearful it could land on him.

I tried to point out that the vast majority of the globe is water and so the odds were high that it would fall into the water and not strike anyone in particular.

Even if it fell over land, I pointed out that by being inside a structure such as a building, it would seem unlikely he’d get hit and killed.

The odds that he would be outside and struck by it were likely far lower than the odds of, say, winning the multi-state lotto (I realize he’d rather win the lotto than get hit by the space station). I suggested he buy a multi-state lotto ticket, since the payout was around $500 million, and that maybe he’d win the lotto and get hit by the space station at the same time (those are some amazing odds!).

Anyway, sometimes fear is in our minds, but not due to an actual fearful situation per se.

We can convince ourselves to be fearful.

In that sense, fear is definitely a double-edged sword.

Fear provides us with a vital survival technique. When utilized poorly, it can cause us to damage ourselves based on a false belief that something dangerous is going to happen, when there’s really no chance of it happening at all.

Some would refer to this as an unfounded fear.

A fascinating recent study examined fear and described an angle that most would not have thought of.

We all know that you are bound to be fearful of a predator.

The field mouse is fearful of the swooping hawk. The prey is fearful of the predator, and rightfully so.

This particular study pointed out that animals tend to avoid eating feces or munching on a carcass that has gone bad.

Those aren’t predators, so why fear them?

It’s because we are fearful of getting infections or disease, and seem to realize that we need to avoid circumstances that might involve getting infected by some untoward bacteria.

Nature Versus Nurture

How do animals know about this?

In the nature-versus-nurture debate, are we programmed in our DNA to avoid things that might infect us, or do we only learn over time by either watching others, or by being taught, or by getting an infection and surviving it such that we realize not to do that again?

If you see a hawk diving at you, it’s pretty obvious that you should avoid letting it get you. But seeing a juicy carcass when you are starving, and opting to avoid eating it because you somehow know that hours or maybe days later you might get sick and perhaps die, now that’s an interesting aspect of fear.

You need to connect a later-on consequence to something that at the moment seems benign.

The researchers described a landscape of fear.

Animals will avoid drinking contaminated water.

Animals will avoid eating a carcass when it seems too far gone.

Animals will even graze away from an area that had a carcass, as though realizing that whatever is bad about the carcass could be spread locally beyond just the carcass. Animals tend to flee from biting ticks or try to get the ticks off their bodies.

Within the landscape of fear, animals are able to detect infection threats. Either instinctively or in a learned manner, animals weigh the risks associated with the threats and try to achieve various levels of safety.

For any of you interested in population dynamics and ecological aspects, you’d likely find this view of predator avoidance and infection avoidance keenly fascinating.

Fear Landscape And Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing an aspect of AI systems for self-driving cars that involves leveraging a landscape of fear regarding driving cars.

Allow me a moment to elaborate on this somewhat surprising approach.

As a human driver, you presumably already have a fear of hitting another car.

You likely are fearful that you might hit a pedestrian.

You probably also have a fear that other drivers are going to hit your car.

You might have a fear that your car will fail on you, such as being on the freeway and all of a sudden it conks out and you are stranded in the middle of the busy freeway in a stalled car. It is possible you have a fear that the roadway will be unusable or impassable.

The other day I drove up to the local mountains and reached a point where the paved road turned to packed dirt, which then became loose dirt, which then became muddy due to recent rains. My car almost got stuck in the middle of nowhere on an impassable road (I was driving a conventional car, not an off-road vehicle).

All of the above fears as a human driver are plausible.

They are founded on a reasonable belief that those things could happen.

We daily harness those fears while driving our cars. Some drivers, though, make driving mistakes based on a fear that is either unfounded or at least doesn’t actually materialize.

I was in a car one day with a young driver who notably never made a left turn. He went to extremes to avoid making left turns.

Now, we all know that left turns can be dangerous, and some shipping companies such as UPS use routing systems that try to minimize the number of left turns. But this was left-turn paranoia.

In talking with the driver, he shared with me a sad story of his family having gotten into a car crash while making a left turn, so he vowed that it would never happen again, which he figured not making left turns would pretty much guarantee. I did not have the heart to point out that his now heightened frequency of right turns, done to make up for not making left turns, might well have balanced out the risk avoided by making fewer left turns.

His fear of left turns would not have been apparent or visible unless you were observing him, as I had, while a passenger in the car.

If you had asked him about his driving approach, I doubt he would have volunteered that he won’t make left turns. An outside observer might not have noticed it either, unless you were following him like a secret agent.

Our fears then can be hidden from view.

Likewise, when I mentioned that you are fearful of getting into a car crash and fearful of your car faltering, it’s not something that you probably would have voiced if I had asked you about it.

The word “fear” in our society has various connotations, generally being less flattering to the person who embodies the fear. What, you were fearful of riding that roller coaster, you’re a chicken! Society seems to pressure us to hide our fears and not admit to them.

For AI purposes, some believe that if we are to achieve true AI, and be able to make computer systems that can do what humans do, we need to replicate as much as possible whatever humans do.

If humans rely on emotions, we must then incorporate emotions into computer systems to achieve true AI.

There is a counter-argument that maybe we don’t need emotions to have intelligence, and so we can strip away some aspects of humans and yet still arrive at fully intelligent systems.

Others say that our intelligence is intertwined with our emotions and that you cannot separate them out and still have intelligence. An AI system with no emotions would not end up being fully intelligent, they would assert, because it has lost an essential component that is wrapped inextricably into intelligence.

Whether you stand on one side or the other of the debate about emotion and intelligence, I think we could say that fear is something that does make sense for an intelligent being to possess. If you are willing to consider fear as a form of mathematical calculation about the perceived dangers and risks, we certainly should have that same kind of capability built into our AI systems.

As such, an AI self-driving car should make use of fear.

That being said, I am not talking about a “the sky is falling” kind of fear. I am referring to fear as a methodical means of determining risks and dangers, and of seeking actions that reduce those risks and achieve greater chances of safety.
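To make that methodical notion concrete, here is a minimal sketch in Python of treating “fear” as a risk calculation, namely the probability of a hazard times its severity; the hazard names, probabilities, and severity weights below are purely illustrative assumptions, not values from any actual self-driving car system.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    """An illustrative hazard the AI is 'fearful' of (names are hypothetical)."""
    name: str
    probability: float  # estimated chance of occurring under the current plan, 0..1
    severity: float     # estimated harm if it occurs, 0..1

def fear_score(hazards):
    """Treat 'fear' as expected harm: probability times severity, summed."""
    return sum(h.probability * h.severity for h in hazards)

# Two candidate action plans, scored by their "fear" level (made-up numbers).
gradual_exit = [Hazard("miss_exit", 0.30, 0.05), Hazard("rear_ended", 0.01, 0.60)]
dive_to_exit = [Hazard("sideswipe", 0.10, 0.80), Hazard("cut_off_truck", 0.05, 0.95)]

for label, plan in [("gradual exit", gradual_exit), ("dive to exit", dive_to_exit)]:
    print(f"{label}: fear score = {fear_score(plan):.3f}")
```

Under these made-up numbers, the dive to the exit carries roughly six times the expected harm of the gradual approach, which is one way an AI could “fear” the riskier plan without experiencing anything at all.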

Example Of No Fear Producing Dangers

I was in a car with a colleague that likes expensive cars and loves to drive fast (I would say recklessly, while he would say just fast).

We were on the freeway in the leftmost lane, the fast lane.

Our exit to get off the freeway was fast approaching. He gunned his engine and at the last moment darted across all of the lanes of traffic, having lined up small gaps in each lane, including darting in front of a very large truck hauling a tanker of gasoline.

Did we make it to the exit ramp? Yes.

Did we hit any cars or trucks? No.

In my mind, I was quite fearful when I realized what he was going to try and do. He said that he had no fear because he had done this action many times and he “knew” that he could pull this one off.

For an AI self-driving car, suppose it found itself in a similar situation.

You might argue with me that the self-driving car would have been better prepared and would have gradually made its way over to the exit and not needed to leap toward it. But, suppose I told you that the occupant in the self-driving car had suddenly told the self-driving car that they wanted it to make that next exit.

Thus, the self-driving car had little time to take the more gradual path to get to the exit.

You could say that the AI should have refused to make the exit.

The AI should have said that the occupant had been late in asking and so it was tough luck; instead, the AI would route the self-driving car to the next exit and then take side streets back to where the earlier exit had been.

This brings up an important aspect about AI self-driving cars, namely, what is the nature of the driving approach that we want our self-driving cars to have?

You might want the AI to do exactly what the “reckless” human driver had done, and go for it in terms of making a last-gasp dive to the freeway exit. Why is the gradual approach better than the dive-for-it approach? You might assert that the gradual approach is certainly safer. By what proof do you claim this?

In fact, consider those who believe we will have a Utopian world of all self-driving cars (which I’ve pointed out is unlikely, since for at least many decades we will have a mix of both human-driven cars and self-driving cars). If we did have all self-driving cars, then presumably the dive to the exit would be as safe as any other maneuver.

The self-driving car that wanted to dive to the exit could alert all the other self-driving cars nearby, via V2V (vehicle-to-vehicle communications), and the pathway that otherwise randomly had formed for the human driver might now become a designed path instead (based on the cooperation of the other self-driving cars).
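As a rough illustration of that cooperation, the maneuver might be negotiated via a V2V message exchange along these lines; the message fields, lane numbering, and handshake are hypothetical assumptions for the sketch, not an actual V2V standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class LaneChangeRequest:
    """Hypothetical V2V message asking nearby cars to hold gaps open."""
    vehicle_id: str
    current_lane: int   # 0 = leftmost (fast) lane
    target_lane: int    # lane holding the exit ramp
    eta_seconds: float  # when the requester expects to start crossing

def broadcast(msg: LaneChangeRequest) -> str:
    """Serialize the request for a (hypothetical) V2V radio broadcast."""
    return json.dumps({"type": "lane_change_request", **asdict(msg)})

def respond(msg: LaneChangeRequest, my_lane: int, can_yield: bool) -> str:
    """A nearby self-driving car agrees (or declines) to hold a gap in its lane."""
    affected = msg.current_lane <= my_lane <= msg.target_lane
    return json.dumps({"type": "gap_ack", "lane": my_lane,
                       "will_yield": affected and can_yield})

request = LaneChangeRequest("car-42", current_lane=0, target_lane=4, eta_seconds=3.5)
print(broadcast(request))
print(respond(request, my_lane=2, can_yield=True))
```

The point of the sketch is simply that the gaps would no longer be luck; each car in the path could explicitly agree to hold one open.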

We could end-up with extremely aggressive AI self-driving cars.

It all depends on how we program the AI and also what the AI is learning.

Machine Learning Subtly Captures Fear

Let’s consider the Machine Learning aspects of fear.

Suppose you have an AI self-driving car that is learning about driving by observing traffic situations and trying to find patterns in the driving behavior, patterns that the AI will then adopt as its own driving behavior.

In a traffic environment of reasonable human drivers who give proper way to other drivers and abide by legal speeds, the machine learning would find those patterns and presumably perform driving in the same monkey-see, monkey-do manner. We have artificial neural networks that indeed do this.
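A minimal sketch of that monkey-see, monkey-do learning, framed as behavioral cloning with a tiny neural network; the features, the demonstrator policy, and the network size are assumptions invented for illustration, not a real self-driving pipeline.

```python
import numpy as np

# Observed (state, action) pairs from human drivers: the "demonstrations".
# State: [gap_ahead_m, relative_speed_mps]; action: acceleration command (m/s^2).
# The demonstrator below is a cautious, made-up policy: brake when the gap
# is small, otherwise match speed.
rng = np.random.default_rng(0)
states = rng.uniform([5.0, -5.0], [60.0, 5.0], size=(500, 2))
actions = np.clip(0.1 * (states[:, 0] - 20.0) + 0.5 * states[:, 1], -3.0, 2.0)

# Tiny one-hidden-layer network trained by gradient descent (behavioral cloning).
W1 = rng.normal(0, 0.1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.05
X = (states - states.mean(0)) / states.std(0)  # normalize features
y = actions.reshape(-1, 1)
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    err = (h @ W2 + b2) - y           # prediction error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# The cloned policy implicitly absorbs the demonstrator's caution ("fear"):
test = (np.array([[8.0, -2.0]]) - states.mean(0)) / states.std(0)
print("predicted accel for small gap, closing fast:",
      float(np.tanh(test @ W1 + b1) @ W2 + b2))
```

Nothing in the code mentions fear, yet the network learns to brake for small gaps because the cautious demonstrators always did.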

Imagine driving in the chaotic streets of New York City at rush hour. Cars cut each other off. Cars drive within inches of other cars. Cars won’t let other cars into their lanes. It’s a dog-eat-dog world there. Without knowing the drivers themselves, and by only looking at the outcomes of their driving, we get a different picture of what driving is all about.

Deriving a pattern from that driving behavior would yield quite a contrast to a traffic environment of more safety-conscious, wider-margins-for-error kinds of drivers.

Thus, a neural network or other kind of machine learning will indirectly embody “fear” to the extent that it is embodied in the driving behavior of those the system is learning from. In this machine learning approach, we are not explicitly calling out fear and making it a separate component of the AI system; instead, fear is being captured implicitly via the driving behavior that is being patterned after.

In one case, the fear of the drivers has led to more collegial driving outcomes, while in the other case a lesser sense of fear leads to cars that have near misses or actual fender benders.

We could though be more explicit about the fear aspects.

The AI self-driving car has sensors that collect data for purposes of sensing the world around the self-driving car, and that data is then fed into the sensor fusion. The sensor fusion tries to figure out from the sensor data what is usable and what might not be, such as when a camera lens is obscured by dirt and the system needs to rely instead on a radar that covers the same area the camera would. The sensor fusion then feeds into a virtual world model that depicts the existing and ongoing state of the surroundings and of the self-driving car too.

Based on the virtual world model, the AI needs to derive an action plan of what to do next with the self-driving car. If the situation involves accelerating to get between cars that are to the right of the self-driving car, this is then issued as commands to the controls of the self-driving car. As is the steering command to direct the self-driving car over into the next lane. And so on.
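Here is a minimal sketch of that sensing-to-action pipeline; the stage names mirror the description above, but the specific classes, fields, and commands are assumptions made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    sensor: str          # e.g., "front_camera", "front_radar"
    usable: bool         # False if, say, the camera lens is obscured by dirt
    detections: list     # objects this sensor reported

@dataclass
class WorldModel:
    """Virtual world model: ongoing state of the surroundings and the car."""
    nearby_objects: list = field(default_factory=list)

def sensor_fusion(readings: list) -> WorldModel:
    """Keep only usable sensor data; overlapping sensors cover for each other."""
    model = WorldModel()
    for r in readings:
        if r.usable:
            model.nearby_objects.extend(r.detections)
    return model

def derive_action_plan(model: WorldModel) -> list:
    """Turn the world model into control commands (grossly simplified)."""
    if "gap_in_right_lane" in model.nearby_objects:
        return ["accelerate", "steer_right_one_lane"]
    return ["hold_speed", "hold_lane"]

readings = [
    SensorReading("front_camera", usable=False, detections=[]),  # dirty lens
    SensorReading("front_radar", usable=True, detections=["gap_in_right_lane"]),
]
print(derive_action_plan(sensor_fusion(readings)))
```

Even in this toy form, you can see the flow the article describes: sensors feed fusion, fusion feeds the world model, and the world model feeds the action plan that issues commands to the controls.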

It is within these AI action plans that we are immersing a healthy dose of fear.

You want the self-driving car to be “fearful” of hitting other cars.

You want it to be “fearful” of having other drivers hit the self-driving car.

These are part of the algorithms of deriving the action plans.
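As a toy illustration of what being “part of the algorithms” could mean, here is a hedged sketch that scores candidate action plans with explicit penalty terms for those two fears; the weights and feature names are invented for the example.

```python
def plan_cost(progress: float, min_gap_to_others_m: float,
              closing_speed_mps: float) -> float:
    """Lower cost is better; the two penalty terms encode the two 'fears'."""
    fear_of_hitting = 50.0 / max(min_gap_to_others_m, 0.1)  # we hit them
    fear_of_being_hit = 5.0 * max(closing_speed_mps, 0.0)   # they hit us
    return -progress + fear_of_hitting + fear_of_being_hit

candidates = {
    "gradual_merge": plan_cost(progress=8.0, min_gap_to_others_m=12.0,
                               closing_speed_mps=1.0),
    "dive_to_exit":  plan_cost(progress=10.0, min_gap_to_others_m=1.5,
                               closing_speed_mps=6.0),
}
print(min(candidates, key=candidates.get), candidates)
```

With these invented weights, the dive to the exit is heavily penalized for its tiny gaps and high closing speed, so the planner picks the gradual merge, which is the “fear” doing its job.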

If the AI isn’t instructed, or hasn’t learned, not to hit other cars, it would likely come up with action plans that inevitably involve hitting other cars. Indeed, if you have ever watched a simulation used to train self-driving cars, you’ll see that the self-driving car’s action plans at first involve hitting other cars, but there is a points mechanism that helps the AI realize that hitting other cars is not a good thing to do.
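That points mechanism is essentially reward shaping, in reinforcement learning terms. A minimal sketch follows; the point values and event names are invented for illustration and are not drawn from any particular training simulator.

```python
# Hypothetical per-step points for a driving simulator: collisions are heavily
# penalized, so the learner's policy gradually becomes "fearful" of them.
REWARDS = {
    "progress_toward_goal": +1.0,
    "near_miss":            -5.0,   # got uncomfortably close to another car
    "collision":          -100.0,   # the event the AI should "fear" most
    "reached_exit":        +50.0,
}

def score_step(events: list) -> float:
    """Sum the points for everything that happened on this simulation step."""
    return sum(REWARDS.get(e, 0.0) for e in events)

# Early in training, plans that dart across lanes rack up collisions...
print(score_step(["progress_toward_goal", "collision"]))     # -99.0
# ...so the learner shifts toward plans that keep safer margins.
print(score_step(["progress_toward_goal", "reached_exit"]))  # 51.0
```

The AI never feels anything, of course; the large collision penalty simply makes collision-prone plans score poorly, which behaves outwardly like fear.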

By the use of Machine Learning, we are putting an “instinctive” landscape of fear into the AI of the self-driving car, and this is augmented by an explicitly taught landscape of fear by programming the AI code accordingly.

Since we are on the topic of fear and AI self-driving cars, I should take a moment to also discuss a whole different aspect about fears and AI self-driving cars.

There are humans that are fearful of being occupants in AI self-driving cars.

I’ve discussed this at length in various forums and pointed out that though the media at times makes it seem that these are unfounded fears, I assert that people are right to have a healthy dose of fear about riding in today’s AI self-driving cars. Notice that I use the word “today’s” because I don’t want to suggest that we will always be fearful of riding in self-driving cars; rather, the existing crop of self-driving cars has yet to earn a low level of fear from its occupants.

In a similar vein, some humans are fearful about having AI self-driving cars on our roadways.

This is due to a concern that the self-driving cars might hit other cars and strike pedestrians. Once again, I say these people are well justified in such a fear today. AI self-driving cars have yet to provide ample evidence to warrant our being fearless about how these self-driving cars might behave. I don’t believe this will be forever and just want to emphasize that it’s a condition of the state-of-the-art of what exists today.

Returning to my mainstay point about including fear in AI self-driving cars: I would want any self-driving car to have a reasonable fear of human drivers.

Yes, that’s right, be fearful of human drivers. In the same manner that you or I are watching out for other human drivers, and we are leveraging our “fear” to gauge how we drive, it stands to reason that we want the AI self-driving cars to do the same. It needs to be a reasoned fear, and not an unfounded fear.

As they say, once the AI has mastered the landscape of fear, the only fear it should have, will be fear itself.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.