AI Empathetic Computing: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

A friend of mine in college was known for being very stoic. You could tell him that you had broken your leg skiing and he’d show no emotion. He’d just sit there and stare at you. No words came forth. No expression on his face. You might tell him that your dog got run over, and he’d continue to be without any kind of emotional response. I believe that if you told him that his dog got run over, he’d have the same kind of non-reaction, though I suppose he might be curious enough to ask how it happened.

Some of us thought that he had watched way too many Star Trek TV shows and movies. He had become our version of Mr. Spock, the fictional character that generally showed little or no emotion.

In case you’ve been living in a cave and aren’t familiar with Star Trek, Spock was the ship’s science officer and first officer. To some degree, it was implied that his Vulcan lineage allowed him, through training and DNA, to remain impartial and detached, shedding any emotion, though this was not entirely the case, since his mixed blood from a human mother “did him in” in terms of having to fight back emotions bursting forth. At times, in some of the stories, he did show emotion, typically briefly and in a muted way.

I’d like to remind us all that Spock was a fictional character in a TV show and not an actual person. We tried to emphasize this crucial aspect to our friend. Our friend seemed to believe that Spock was real, or that even if not, it was somehow possible to be like Spock. I knew my friend’s parents and I assure you they were not Vulcan, neither of them. He was therefore already one step behind in being so unemotional, presumably because it wasn’t already baked into his DNA, as Spock’s was.

Our friend eventually had a girlfriend. We assumed that he’d come out of his impenetrable non-emotion bubble and certainly be at least emotional with regard to his girlfriend. No dice. At first, we assumed he was keeping up the pretense only with us, his male friends (his buddies), and undoubtedly, he was emotional when behind-the-scenes with his girlfriend. A macho kind of thing, hiding his emotions around the guys. Whenever he insisted that he was acting toward us in the same manner as he acted toward his girlfriend, we simply nodded our heads as though we agreed with this obviously preposterous claim.

Turns out that his girlfriend confided in me that he was indeed a cold calculating machine and seemed not to express any emotions. He was this way all the time, according to her reports. For example, they had once gone to a great sorority party and she was having a wonderful time, while he barely smiled and acted unimpressed. They had gone hiking in the mountains and nearly fell from a cliff, yet he remained unfazed and cool as a cucumber. She assumed that he’d eventually come “out of his shell” if she just kept dating him (I believe it almost became an attraction, a type of challenge!).

Maybe he really was an early version of Mr. Spock? Note that the original Star Trek series took place probably around the year 2200 or so, and perhaps my friend became the basis for the future Mr. Spock. It’s a time travel deal.

Anyway, I’d wager that most of us do express our emotions. Furthermore, we express our emotions at times as a response to someone else. The other person might tell you something in an unemotional way, and you might respond in an emotional way. Or, the other person might tell you something in an emotional way, and you might respond in an emotional way.

Emotions Spark Emotions, Or So We Expect

Thus, it can be that emotion begets emotion, stoking it from another person. That doesn’t have to be the case and you can be conversing with someone on a seemingly unemotional basis and then opt to suddenly become emotional. There doesn’t necessarily need to be a trigger by the other person. Nor does it necessarily need to be a tit-for-tat.

That being said, usually when a person is emotional toward you, the odds are they will be expecting an emotionally laden response in return. When my friend was told that a mutual close friend had broken their leg skiing, and told so by someone who was crying and quite upset about the pain and suffering involved, it would likely be anticipated that his response would be one of great concern, sadness, and a flurry of aligned emotional evocations.

A lack of an emotional response in the broken leg instance would tend to signal that he didn’t care about the other person. He didn’t care that the other person had suffered an injury. What kind of a friend is that? How could he be so uncaring and without sympathy?

When you asked him about these kinds of matters, he would contend that remaining unemotional gave him an added edge in life. He kept his head calm and collected. It would do little good for him to get cloudy and hazy by being emotional. For the friend who had broken a leg, the main logical aspect would be whether there was anything he could do to aid that person. Expressing emotion about it was wasted energy and effort and distracted from considering the logic of the matter.

Sure, that’s what Mr. Spock would say. Watch any episode.

You might be familiar with the words of the famous holistic theorist Alfred Adler, a psychiatrist and philosopher who lived in the late 1800s and early 1900s, who said that we should see with the eyes of another, hear with the ears of another, and feel with the heart of another.

The first two elements, the eyes and the ears, presumably can be done without any emotional attachment involved, if you consider the eyes as merely a collector of visual images and the ears as collectors of abstract sounds and noises. The third element, involving the heart, and the accompanying aspects of feelings, pushes us squarely into the realm of emotions.

Of course, I don’t believe that Adler was suggesting that the eyes and ears are devoid of emotion; rather, the opposite, namely that you can best gain a sense of another person by experiencing the emotions that they express and that arise from what they see and what they hear, along with matters of the heart.

I bring up Adler’s quote because there are many who assert that you cannot really understand and be aligned with another person if you don’t walk in their emotional shoes.

You don’t necessarily need to exhibit the same exact emotions, but you ought to at least have some emotions that come forth and be able to understand and comprehend their emotions. If the other person is crying in despair, it does not mean you can only respond by crying in despair too. Instead, perhaps you break out into wild laughter and this might spark the other person out of their despair and join you in the laughter. It’s not a simple matching of one emotion echoed by the same emotion in the other.

Wearing Emotional Shoes, The Empath

Let’s then postulate a simple model about emotion.

One aspect is the ability to detect emotion of others.

The other aspect is for you to emit emotion.

So, you are talking with someone, and you detect their emotion, and you might then respond with emotion. As mentioned before, it is not necessarily the case that you would always do the detection and a corresponding emission of emotion. It is more complex than that.
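To make this two-part model concrete, here is a minimal sketch in Python of an affective empathy loop: one function detects the other party’s emotion, another decides what (if any) emotion to emit in response. The emotion labels, the keyword-based detector, and the response table are all hypothetical illustrations for this sketch, not any real system’s API.

```python
# Minimal sketch of the two-part empathy model: (1) detect the other
# party's emotion, (2) optionally emit an emotion in response.
# The labels, detector, and response table are illustrative assumptions.

RESPONSE_TABLE = {
    "sadness": "sympathy",     # respond to despair with concern
    "happiness": "happiness",  # mirror joy
    "anger": None,             # stay neutral rather than escalate
}

def detect_emotion(utterance):
    """Hypothetical keyword detector; a real system would use trained models."""
    if "crying" in utterance or "broke" in utterance:
        return "sadness"
    if "great" in utterance or "wonderful" in utterance:
        return "happiness"
    return "neutral"

def choose_emission(detected):
    """Emission need not mirror detection, and may be withheld entirely."""
    return RESPONSE_TABLE.get(detected)  # None means remain unemotional

detected = detect_emotion("My friend broke a leg and I have been crying")
print(detected, "->", choose_emission(detected))  # prints: sadness -> sympathy
```

Note that the table deliberately allows detection without emission (the `None` entry), mirroring my friend’s stance that recognizing emotion and embodying it are separable.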

For example, we all wondered whether my friend was perhaps detecting emotion and then storing up his own emotion. If that was the case, we wondered what would happen one day if suddenly all of that pent-up emotion was unleashed, all at once. A cavalcade of emotion might emerge. A tsunami of emotion. A bursting dam of emotion.

Being empathetic is considered the capability to exhibit a high degree of understanding of other people’s emotions, both their exhibited and their hidden emotions. Per Adler, this implies that you need to be like a sponge and soak in the other person’s emotions. Only once you’ve become immersed in those emotions can you truly be empathetic, or an empath, some would say.

Can you be empathetic without also exhibiting emotion? In other words, can you do a tremendous job of detecting the emotion of others, and yet be like my friend in terms of never emitting emotions yourself?

That’s an age-old question and takes us down a bit of a rabbit hole. Some claim that if you don’t emit emotion, you can never prove that you felt the emotion of another, nor can you get on the same plane or mental emotional level as the other. I assure you my friend would say that’s hogwash and that he separated (or thought he did) the ability of emotion recognition from the personal embodiment of emotion.

One danger that some suggest can occur if you are emitting emotion is that you might get caught up in an emotion contagion. That’s when you detect the emotion of another and in an almost autonomic way you immediately exhibit that same emotion. You can see this sometimes in action. Suppose you have a room of close friends and one suddenly starts crying, others can also start to cry, even though maybe they don’t exactly know why the other people are crying. It becomes an infectious emotion. Crying can be like that. Laughing can be like that.

I recall a joke that was told one time while I was on a hike with the Boy Scouts (I was an Assistant Scout Master at the time). We had been hiking for miles upon miles. The day was long. We were exhausted and looking forward to reaching camp. One of the younger Scouts told a joke about a turtle and a hare, for which I don’t remember the details as it was utterly without any sense and a completely jumbled-up joke. Though at first, I was trying to figure out the nature of the joke, and hoped that I could “repair” the joke into whatever it was supposed to be, suddenly an older Scout nearby started laughing.

Then, another Scout started laughing. Then another. And so on. We were stretched out on this hike over a distance of maybe a football field size line, each Scout trudging along and following the footsteps of the Scout ahead of them. Within moments, every single Scout and all of the adult Scout leaders were all laughing. It was an amazing sight to see.

Later on, at the evening campfire, I asked the other adult Scout leaders if they could make sense of the botched joke. I had assumed that they had heard the joke and either already knew what the young Scout was attempting to say, or found it funny because it was perhaps an entirely nonsensical joke. Well, none of them had heard the actual joke. They were too far away. They had laughed because everyone else was laughing, and partially I’d guess due to the exhaustion of the hike. It was an infectious spread of laughter.

Sometimes when you exhibit emotion it can come across as a form of pity. This might not be what you intended. I knew an adult volunteer that aided us with the Scouts and every time a Scout said they had been either physically hurt during a hike or even mentally anguished, this adult responded with laughter. It was kind of weird at first. The reaction by the Scout telling about their hardship was to recoil from this response. It seemed like the adult was mocking the Scout or maybe trying to show a sense of feeling sorry for them, but it didn’t come across very well.

There is ongoing research trying to figure out how the brain incorporates emotions. Can we somehow separate out a portion of the brain that is solely about emotions and parse it away from the logical side of the brain? Or are emotions and logic interwoven in the neurons and neuronal connections such that they are not separable? In spite of Adler’s indication about the heart, modern day science would say the physical heart has nothing to do with emotions and it’s all in your head. The brain, whose exact manner of functioning is still unknown, is nonetheless the engine that manifests emotion for us.

Sometimes empathy is coupled with the word affective. This is usually done to clarify that the type of empathy has to do with emotions, since presumably you could have other kinds of empathy. For example, some assert that cognitive empathy is being able to detect another person’s mental state, which might or might not be infused with emotion. Herein, I’m going to refer to empathy as affective empathy, which I am intending to suggest is emotional empathy, namely empathy shaped around emotions.

I’ve previously written and spoken about emotion recognition in the context of computers that are programmed to be able to detect the emotion of humans. This is a budding area of Artificial Intelligence (AI). I’m going to augment my prior discussions about emotion recognition by now including the emitting of emotions.

For my article about emotion recognition and AI, see:

Emotion Emissions Are The Focus Here

Recall that I said earlier that we should consider emotional empathy, or as I’ll now call it, affective empathy, as consisting of two distinct constructs: the act of emotion recognition and the act of emotion emission.

I want to mainly explore the emotion emission aspects herein. The notion is that we might want to build AI that can recognize emotion, along with being able to exhibit emotion. That’s right, I’m suggesting that the AI would emit emotion.

This seems contrary to what we consider AI to be. Most people would assert that AI is supposed to be like Mr. Spock, or more properly another fictional character in the Star Trek series known as Data. Data was an android of a futuristic nature that was continually trying to grasp what human emotions are all about and craved that someday “it” would have emotions too.

There might be some handy reasons to have the AI exhibit emotion, which I’ll be covering shortly. First, let’s take a quick look at what we mean by the notion of emotions.

When referring to emotions, there are lots of varied definitions of what kinds of emotions exist. Some try to say that similar to how colors have a base set and you can then mix-and-match those base colors to render additional colors, so the same applies to emotions. They assert that there are some fundamental emotions and we then mix-and-match those to get other emotions. But, there is much disagreement about what are the core or fundamental emotions and it’s generally an unsettled debate.

One viewpoint has been that there are six core emotions:

  • Anger
  • Disgust
  • Fear
  • Happiness
  • Sadness
  • Surprise

I’m guessing that if you closely consider those six, you’ll right away start to question how those six are the core. Aren’t there other emotions that could also be considered core? How would those six be combined to make all of the other emotions that we seemingly have? And so on. This highlights my point about there being quite a debate on this matter.

Some claim that these emotions are also to be considered core:

  • Amusement
  • Awe
  • Contentment
  • Desire
  • Embarrassment
  • Pain
  • Relief
  • Sympathy

Some further claim these are also considered core:

  • Boredom
  • Confusion
  • Interest
  • Pride
  • Shame
  • Contempt
  • Relief
  • Triumph

For purposes herein, we’ll go ahead and assume that any of those aforementioned emotions are fair game as emotional states. There’s no need to belabor the point just now.
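For illustration, the candidate emotional states listed above could be represented as a simple shared enumeration that both a recognition component and an emission component could draw upon. This is a sketch only; the set of members reflects the unsettled lists above, not any standard taxonomy.

```python
from enum import Enum

class Emotion(Enum):
    """Candidate emotional states drawn from the lists above.
    The taxonomy is unsettled; this is an illustrative set, not a standard."""
    ANGER = "anger"
    DISGUST = "disgust"
    FEAR = "fear"
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    SURPRISE = "surprise"
    AMUSEMENT = "amusement"
    AWE = "awe"
    CONTENTMENT = "contentment"
    DESIRE = "desire"
    EMBARRASSMENT = "embarrassment"
    PAIN = "pain"
    RELIEF = "relief"
    SYMPATHY = "sympathy"
    BOREDOM = "boredom"
    CONFUSION = "confusion"
    INTEREST = "interest"
    PRIDE = "pride"
    SHAME = "shame"
    CONTEMPT = "contempt"
    TRIUMPH = "triumph"

# The often-cited six "core" emotions, as a subset of the full set:
CORE_SIX = {Emotion.ANGER, Emotion.DISGUST, Emotion.FEAR,
            Emotion.HAPPINESS, Emotion.SADNESS, Emotion.SURPRISE}
print(len(Emotion), len(CORE_SIX))  # prints: 21 6
```

An enumeration like this makes the debate tangible: any system claiming affective empathy has to commit, at least provisionally, to some such list of states it can recognize or emit.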

Affective empathetic computing, also known as affective empathetic AI, is the effort to get a machine to recognize emotions in others, which has been the mainstay so far, and, I would add, it also includes the emission of emotions by the machine.

That last addition is a bit controversial.

The first part, recognizing the emotions of others, seems to have a clear-cut use case. If the AI can figure out that you are crying, for example, it might be able to adjust whatever interaction you are having with the AI to take into account that you are indeed crying.

Suppose you are crying hysterically. This likely implies that no matter what the AI system might be saying to you, some or maybe even none of what you are being told might register with you. You could be so emotionally overwhelmed that you aren’t making any sense of what the AI is telling you. I’m sure you’ve seen people get caught up in a crying fit, and it is often impossible to ferret out why, or to get them into a useful conversation.

I remember one young Scout that came running up to me and he was crying uncontrollably. I was worried that he was physically hurt in some non-apparent manner (I looked of course to see whether he was bleeding or maybe had a wound or had any other obvious signs of something broken). I asked him what was wrong. He kept crying. I urged him to use his words. He kept crying. I told him that I had no idea why he was crying and that for me to help him, I needed him to either point at what was wrong or show me what was wrong or tell me what was wrong. Something, anything, more so than crying.

He kept crying. This now was getting me distressed since he was essentially incommunicado. The crying was rather worrisome. Uncontrollable crying could mean that he might be entering into shock. I got down on one knee, looked him straight in the eye, reached out and held him with my arms, and in a soothing and direct voice, I asked him to tell me his name. He blurted out his name. We were now getting somewhere. Anyway, the end of the story was that he had seen another Scout get cut by a pocket knife and there had been blood, and it had spooked him to no end. Everyone it turns out was okay, after the dust settled on the matter.

The point of the story is that the Scout was so consumed by emotion that nothing I was saying seemed to register with him.

That’s why it would be handy for AI to be able to recognize emotion in humans. Doing so would allow the AI to adjust whatever actions or efforts it is undertaking, based on the perceived emotional state of the human. Maybe the AI would be better off not trying to offer a logical explanation to someone hysterically crying and wait until the crying subsides. Or, maybe take another tack, such as my example of asking the person’s name, shifting attention away from whatever the matter is at hand, and instead helping the person onto more familiar and less emotional ground.
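The adjustment logic just described can be sketched as a simple policy: if the detected emotional state is overwhelming, defer logical explanation and ground the person first, much as I did with the Scout. The state names, intensity scale, and thresholds below are illustrative assumptions, not a deployed policy.

```python
# Sketch of emotion-aware response adjustment, per the Scout anecdote:
# defer explanation while the person is overwhelmed, and shift instead
# to grounding questions (e.g. asking their name). Illustrative only.

def choose_strategy(emotion, intensity):
    """intensity in [0, 1]; the thresholds here are arbitrary assumptions."""
    if emotion == "sadness" and intensity > 0.8:
        # Hysterical crying: explanations won't register yet.
        return "ground"       # e.g., "Can you tell me your name?"
    if emotion == "sadness":
        return "console"      # acknowledge the emotion, then assist
    if emotion == "anger" and intensity > 0.5:
        return "de-escalate"  # calm tone, no debate
    return "inform"           # a normal logical explanation is fine

print(choose_strategy("sadness", 0.95))  # prints: ground
print(choose_strategy("sadness", 0.40))  # prints: console
```

The key design point is that the policy output is a dialogue strategy, not a reply: the same detected emotion leads to different handling depending on how overwhelming it appears to be.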

Empathetic Emotion And AI Self-Driving Cars

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. The use of emotional recognition for AI self-driving cars is an emerging area of interest and will likely be crucial for interactions between the AI and human drivers and passengers (and others). I would also assert that affective empathetic AI or computing involving emotional emissions is vital too.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
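The steps above form a repeating processing loop, which could be sketched as follows. The function names, data shapes, and stub logic are hypothetical placeholders for illustration, not any auto maker’s actual architecture.

```python
# Sketch of the AI driving-task loop listed above. Each stage is a
# placeholder stub; real systems involve vastly more machinery.

def collect_and_interpret_sensors():
    # Stage 1: sensor data collection and interpretation.
    return {"camera": "clear_road", "radar": "no_obstacles"}

def fuse_sensors(readings):
    # Stage 2: reconcile possibly conflicting sensor interpretations.
    return {"scene": list(readings.values())}

def update_virtual_world(model, fused):
    # Stage 3: update the internal model of the surroundings.
    model["scene"] = fused["scene"]
    return model

def plan_actions(model):
    # Stage 4: decide what the car should do next.
    return ["maintain_speed"] if "no_obstacles" in model["scene"] else ["brake"]

def issue_car_controls(actions):
    # Stage 5: translate planned actions into control commands.
    return [f"cmd:{a}" for a in actions]

world_model = {"scene": []}
readings = collect_and_interpret_sensors()
fused = fuse_sensors(readings)
world_model = update_virtual_world(world_model, fused)
actions = plan_actions(world_model)
print(issue_car_controls(actions))  # prints: ['cmd:maintain_speed']
```

An emotion recognition or emission capability would sit alongside this loop, interacting with the human occupants without disrupting the core driving pipeline.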

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Returning to the topic of affective empathetic computing or AI, I’m going to primarily focus herein on emotion emissions and less so on emotion recognition.

Let’s assume that we’ve been able to get an AI system to do a pretty good job of detecting emotions of others. This is not so easy, and I don’t want to imply it is. Nonetheless, I’d bet it is something that we’ll gradually be able to do a better and better job of having the AI do.

Should the AI also exhibit emotion?

As already mentioned, some believe that the AI should be like Mr. Spock or Data and never exhibit emotion. Like they say, it should be just the facts, and only the facts, all of the time.

One good reason to not have the AI showcase emotion is because “it doesn’t mean it.” Some would argue that it is a false front to have AI seem to cry, or laugh, or get angry, and so on. There is no there, there, in the sense that it’s not as though the AI is indeed actually happy or sad. The emission of emotions would be no different than the AI emitting the numbers 1, 2, and 3. It is simply programmed in a manner to exhibit what we humans consider to be emotions.

Emotions emission would be a con. It would be a scam.

Besides the criticism that the AI doesn’t mean it, there is also the concern that it implies to the person receiving the emotion emission that the AI does mean it. This falsely adds to the anthropomorphizing of the AI. If a person begins to believe that the AI is “real” in terms of having human-like characteristics, the person might ascribe abilities to the AI that it doesn’t have. This could get the person into a dire state since they are making assumptions that could backfire.

Suppose a human is a passenger in a true Level 5 AI self-driving car. The person is giving commands to the AI system as to where the person wants to be driven. Rather than simplistic one-word commands, let’s assume the AI is using a more fluent and fluid Natural Language Processing (NLP) capability. This allows some dialogue with the human occupant, akin to what a Siri or Alexa might do, though we soon will have much greater NLP than the stuff we experience today.

The person says that they’ve had a rough day. Troubles at work. Troubles at home. Troubles everywhere. In terms of where to drive, the person tells the AI that it might as well drive him to the pier and drive off the edge of it.

What should the AI do?

If this was a ridesharing service and the driver was a human, what would the human driver do?

I doubt that the human driver would dutifully start the engine and drive to the end of the pier. Presumably, the human driver would at least ignore the suggestion or request. Better still, there might be some affective empathy expressed. The driver, sensing the distraught emotional state of the passenger, might offer a shoulder to cry on (not literally!), and engage in a dialogue about how bad the person’s day is and whether there is someplace to drive the person that might cheer them up.

It’s conceivable that the human driver might try to lighten the mood. Maybe the human driver tells the passenger that life is worth living. He might tell the passenger that in his own life, he’d had some really down periods, and in fact his parents just recently passed away. The driver and the passenger now commiserate together. The passenger begins to tear up. The driver begins to tear up. They share a moment of togetherness, both of them reflecting on the unfairness of life.

Is that what the AI should do?

I realize you can quibble with my story about the human driver and point out that there are a myriad of ways in which the human driver might respond to the passenger. I admit that, but I’d also like to point out that my scenario is pretty realistic. I know this because a ridesharing driver told me a similar story the other day about the passenger who had just been in his car before I got in. I believe the story he told me to be true and it certainly seems reasonably realistic.

Back to my question, would we want the AI to do the same thing that the human driver did? This would consist of the AI attempting to be affectively empathetic and, besides detecting the emotional state of the passenger, also emitting emotion paired to the situation. In this case, the AI would presumably “cry” or do the equivalent of whatever we’ve set up the AI to showcase, creating that moment of bonding that the human driver had created with the distraught passenger.

As an aside, if you are wondering how the AI of a self-driving car would do the equivalent of “crying,” given that it is not going to be a robotic head and body sitting in the driver’s seat (quite unlikely), nor have liquid tear ducts embedded into a robotic head, the easy answer is that we might have a screen displaying a cartoonish mouth and eyes, shown on an LED display inside the AI self-driving car. The crying could consist of the cartoonish face having animated tear drops that go down the face.

You might debate whether that is the same as a human driver that has tears, and maybe it isn’t in the sense that the passenger might not be heart struck by the animated crying, but there is ongoing research that suggests that people do indeed react emotionally to such simple animated renderings.
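The rendering side of such an emission could be as simple as a lookup from an emotion label to an in-cabin display animation, per the cartoonish-face idea above. The animation names and fallback behavior here are made up for this sketch.

```python
# Sketch of mapping an emitted emotion to an in-cabin display animation,
# per the cartoonish-face idea above. Animation names are illustrative.

ANIMATIONS = {
    "sadness": "face_tears",      # animated tear drops down the face
    "happiness": "face_smile",
    "surprise": "face_wide_eyes",
}

def render_emission(emotion):
    # Fall back to a neutral face for emotions lacking an animation.
    return ANIMATIONS.get(emotion, "face_neutral")

print(render_emission("sadness"), render_emission("contempt"))
# prints: face_tears face_neutral
```

Even this trivially simple rendering path raises the design question of which emotions the system should be permitted to display at all, which leads directly into the ethical issues discussed next.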

The overarching theme is that the AI is emitting emotions.

For more about AI and human conversations and AI self-driving cars, see my article:

For voice NLP and AI self-driving cars, see my article:

For key safety aspects, see my article:

For my article about key trends, see:

Range Of Emotions Shown

I’ve used this example of crying, but we could have the AI appear to be laughing, or appear to be angry, or appear to have any of a number of emotions. I’m sure too that with added research, we’ll be able to get better and better at how to “best” display these emotions, attempting to get as realistic a response as feasible.

Some people would say this is outrageous and a complete distortion of human emotions. It undercuts the truthfulness of emotions, they would say. I don’t want to burst that bubble, but I would like to point out that actors do this same thing every day. Aren’t they “artificially” creating emotions to try and get us to respond? Seems to me that’s part of their normal job description.

Does an actor up on the big screen that is crying during a tender scene in the movie have to be actually experiencing that emotion and doing so as a real element of life? Or, can they be putting on the emotion as a pretend? I ask you how you would even know the difference. A really good actor can look utterly sincere in their crying or laughing or anger, and you would assume they must be “experiencing” it, and yet when you ask them how they did it, they might say that’s what they do.

Here’s something that will get your goat, if you are in the camp about the sincerity and sanctity of emotions. I nearly hesitate to tell you.

When I talked with the ridesharing driver and he told me the story of what had just happened in his car, I offered my concern on his behalf about the bad turns in his life and the recent loss of his parents. He seemed slightly taken aback. He told me that his parents had passed away years ago. What, I asked? Yep, he told me that he had said that it was recent in hopes of being more empathetic with the passenger. When I mildly questioned the ethics of that approach, he insisted that it was all true that his parents were no longer alive, and the part about the timing was inconsequential to the significance of the matter.

If we are willing to put aside for the moment the aspect that the AI doesn’t mean it when it emits emotion, and if we agree that the emitting of emotion can potentially create a greater bond with a human, and if the bonding can aid the human, would we then be okay in terms of emitting the emotions?

This certainly takes us onto ethical matters about the nature of mankind and machines. For AI self-driving cars in particular, are we willing as a society to have the AI “pretend” to get emotional, assuming that it is being done for the betterment of mankind? Of course, there is going to be quite a debate about how we’ll be able to judge that the AI emotion emissions are indeed for the betterment of humans.

Let’s pretend that the AI did the same thing as the human driver and appeared to cry a tear with the passenger. Suppose this becomes a man-machine bonding moment. The passenger has found a friend. Maybe the AI then prods the passenger to consider driving to a bar that’s about a half hour drive away and suggests that the passenger would likely get into a happier mood at the bar. What a great and friendly suggestion. Nice!

Meanwhile, suppose unbeknownst to the passenger, the bar has already established a deal with the ridesharing firm and paid the ridesharing firm to try and get people to go there. The ridesharing service runs ads about the bar and whenever possible attempts to get passengers to visit that particular bar. Plus, the ridesharing company makes more money for longer trips, and though there’s a bar just two blocks away, this bar is a hefty trip of a half hour away and will be a better money-making trip.

Ouch! Did the AI emotion emission make the passenger feel better, and if so, what about the motives for doing so, along with the rather self-serving “manipulation” of the human passenger for the gain of the ridesharing firm?

We’re going to have a difficult time trying to discern when the affective empathetic AI is for “good” versus for other purposes (I’m sure the ridesharing firm would say that it was for the good, since it was better for the passenger to go to a known bar than a randomly chosen one two blocks away!).

For the potential use of ethics review boards for AI self-driving cars, see my article:

For overall ethics issues about AI self-driving cars, see my article:

For my article about human irrationality, see:

For my article about ridesharing services, see:

Healthy For Humans Or Maybe Not

Some would say that the affective empathetic AI could be a tremendous boon to the mental health of our society. If people are going to be riding in true Level 5 AI self-driving cars and perhaps doing a lot more traveling via cars because of the AI advances, this means that we humans will have lots of dedicated time with the AI of our AI self-driving cars.

Right now, I commute to work each morning and afternoon, spending around three to maybe four hours a day in my car. I watch the traffic around me. I listen to the news on the radio. I make some phone calls. I while away the time by blending my driving efforts with doing things that hopefully don’t distract from the driving, and yet help overcome the tedium of the driving. Plus, these other activities make me additionally productive in those otherwise mundane several hours, or at least enrich me beyond just driving my car.

When I commute to work in a true Level 5 AI self-driving car, I will then have those three to four hours for whatever purpose I’d like to use them. I am not driving the car. The AI is driving the car. I might take a snooze and sleep in the self-driving car as it is whisking me to work or from work. I might watch videos that are streamed into my self-driving car. And so on.

Suppose that the AI of my self-driving car opted to try and interact with me, doing so beyond the sole purpose of getting an indication of where I wanted to have the AI drive the self-driving car. Using its emotion recognition, it detects whether I’m doing okay and headed to work in a happy mood or not. Maybe on this day I seem to be upset and concerned. The AI asks me what’s going on.

I mention that I was playing poker at a friend’s house last night and lost $500 at the table. I was going to use that money for other purposes. Darn it, I should not have kept betting on the game. The AI interprets this and responds with a variation of Alfred Lord Tennyson’s famous quote, it is better to have played and lost than to never have played at all. The AI then offers a short chortle of laughter. It gets me into a good mood and I laugh too.

Over time, the AI is collecting my emotional states. These aspects are routinely being uploaded to the cloud, via the Over-The-Air (OTA) electronic capability of the self-driving car and with a connection to the auto maker or tech firm that made the system.

Turns out that I nearly always play poker on Monday nights and I seem to nearly always lose, and on Tuesday mornings I’m usually in a bad mood. The AI gradually catches onto this pattern, using a variant of Machine Learning and Deep Learning in analyzing the collected data of the interactions with me while I am in the AI self-driving car. This allows the AI to greet me on Tuesday mornings by personalizing the greeting, mentioning that hopefully I came out ahead at the table last night.
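As a rough illustration of the kind of pattern detection described above, here is a minimal sketch in Python. The data, the function name, and the thresholds are all hypothetical, and a production system would use a genuine Machine Learning model rather than simple frequency counting, but the counting captures the gist of spotting that Tuesday mornings tend to be gloomy:

```python
from collections import defaultdict

# Hypothetical logged observations of (weekday, detected mood) gathered
# from in-car interactions and uploaded via OTA; illustrative data only.
observations = [
    ("Tue", "upset"), ("Wed", "neutral"), ("Tue", "upset"),
    ("Thu", "happy"), ("Tue", "upset"), ("Fri", "happy"),
]

def mood_pattern(obs, threshold=0.75, min_count=2):
    """Return weekdays where one mood dominates, given enough samples."""
    counts = defaultdict(lambda: defaultdict(int))
    for day, mood in obs:
        counts[day][mood] += 1
    patterns = {}
    for day, moods in counts.items():
        total = sum(moods.values())
        top_mood, n = max(moods.items(), key=lambda kv: kv[1])
        # Only report a pattern when it is both frequent and consistent.
        if total >= min_count and n / total >= threshold:
            patterns[day] = top_mood
    return patterns

print(mood_pattern(observations))  # {'Tue': 'upset'}
```

With a pattern like `{'Tue': 'upset'}` in hand, the AI could tailor its Tuesday-morning greeting accordingly.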

The AI of your self-driving car could eventually “know” you better than other humans might know you, in the sense that you will be spending a vast amount of time inside the AI self-driving car, taking many more journeys than you would as a driver, while the AI collects and interprets the data. This data includes the emotion recognition aspects and the emotion emission aspects.

Creepy? Scary? Maybe so. There is nothing about this that is beyond the expectation of where AI is heading. Notice that I am not suggesting that the AI is sentient. Nope. I am not going to get bogged down in that one. For those of you that might try to argue that the AI as I have described it would need to be sentient, I don’t think so. What I have described could be done with pretty much today’s capability of AI.

For machine learning and deep learning, see my article:

For OTA, see my article:

For the singularity that some believe will occur, see my article:

For my article about the Turing Test and AI self-driving cars, see:

For my article about the non-stop use of AI self-driving cars, see:


Affective empathetic AI is a combination of emotion recognition and emotion emissions. Some say that we should skip the emotion emissions part of things. It’s bad, real bad. Others would say that if we are going to have AI systems interacting with humans, it will be important to interact in a manner that humans are most accustomed to, which includes that other beings have emotions (in this case, the AI, though I am not suggesting it is a “being” in any living manner).

I’ve not said much about how the AI is going to deliberate about emotions. The emotion recognition involves seeing a person and hearing a person, and then gauging their emotional state. Like I said about Adler, there is more to emotion detection than merely visual images and sounds. The AI will need to interpret the images and sounds, doing so either in a programmed way or via some kind of Machine Learning approach, and ascertain what to do next.
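One common way to combine the visual and auditory cues is so-called late fusion, in which each modality produces per-emotion confidence scores and the scores are blended. This is a hedged sketch only; the function name, weights, and scores are assumptions for illustration, and a real system would learn the fusion rather than use fixed weights:

```python
# Hypothetical late-fusion of per-emotion confidence scores from a
# vision model and an audio model; weights are illustrative only.
def fuse_emotion(visual_scores, audio_scores, w_visual=0.6, w_audio=0.4):
    """Pick the emotion with the highest weighted combined score."""
    emotions = set(visual_scores) | set(audio_scores)
    fused = {
        e: w_visual * visual_scores.get(e, 0.0)
           + w_audio * audio_scores.get(e, 0.0)
        for e in emotions
    }
    return max(fused, key=fused.get)

# Face looks upset, voice sounds upset: fused label is "upset".
label = fuse_emotion({"happy": 0.2, "upset": 0.7},
                     {"happy": 0.1, "upset": 0.8})
print(label)  # upset
```

The fused label would then feed the AI’s decision about whether, and how, to respond.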

Similarly, the AI needs to calculate when best to emit emotions. If it does so randomly, the human would certainly catch onto the “pretend” nature of the emotions. You could even say that if the AI offers emotion emissions of the wrong kind at the wrong time, it might enrage the human. Probably not the right way to proceed, though there are certainly circumstances wherein humans purposely desire to have someone else get enraged.

What about Adler’s indication that you need to get into the heart of the other person? That’s murky from an AI perspective. The question is whether or not the AI can skip the heart part and still come across as a seemingly emotionally astute entity that also expresses emotion.

I think that’s a pretty easy challenge, far easier than the intellectual challenge of being able to exhibit intelligence (aka the Turing Test). My answer is that yes, the AI will be able to convince people that it “understands” their emotion and that it appears to also experience and emit emotion.

Maybe not all of the people, and maybe not all of the time, but for a lot of the time and for a lot of the people.

I’ve altered Lincoln’s famous saying and omitted the word “fool” in terms of fooling people. Is the AI, which was developed by humans (I mention this so that you won’t believe the AI somehow concocted things on its own), fooling people? And if so, is it wrong and should it be banned? Or is it a good thing and will it be a boon? Time will tell. Or maybe we should ask the affective empathetic AI and see what it says and does.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.