Cognitive Mental Disorders and AI Ramifications: The Case of AI Autonomous Cars


By Lance Eliot, the AI Trends Insider

An estimated 1 in 5 adults will experience a mental illness or mental disorder in a given year (based on U.S. statistics, that's about 20%, or around 44 million adults). Generally, those adults are still able to function sufficiently and continue to operate seemingly “normally” in society. A more serious and life-altering mental disorder or mental illness, one that is substantively debilitating, will occur in about 1 in 25 American adults during their lifetime (that’s about 4%, or nearly 10 million adults).

That is a lot of people.

These are rather staggering numbers when you consider the sheer magnitude of the matter and how many humans are being impacted. Not only are those individuals themselves affected, so too are the people around them. The odds are that a particular individual’s mental disorder or mental illness has a sizable spillover, impacting loved ones and even strangers.

There’s a well-known guide that describes various mental disorders and mental illnesses, known as the DSM (Diagnostic and Statistical Manual of Mental Disorders). I mention the DSM because I sometimes get a reaction from people who seem to think the topic of mental illness or mental disorder is merely when you don’t feel like going to work that day or are maybe in a foul mood. It’s a lot more than that.

The types of mental disorders or mental illnesses that I’m referring to include schizophrenia, dementia, bipolar disorder, PTSD (Posttraumatic Stress Disorder), anorexia nervosa, autism spectrum disorder, and so on. These are all ailments that can dramatically impact your cognitive capabilities. In some instances the illness or disorder might be relatively mild, while in other cases it can be quite severe. You can also at times swing into and out of some of these disorders, appearing to have gotten over one and yet it still lingers and can resurface.

Evolutionary Psychologists Help Trace The History Of Human Minds

Evolutionary psychologists ask a fundamental and intriguing question about these mental disorders and mental illnesses, namely, why do they exist?

An evolutionary psychologist specializes in the study of how the mind has evolved over time. As with other fields that consider the role of evolution, it is interesting and useful to consider how the brain and the mind have evolved. We know from Darwin’s theory of evolution that humans and animals have presumably evolved based on the notion of survival of the fittest.

For whatever traits you might have, if they give you a leg up on survival, you will tend to procreate and pass along those traits, while those who aren’t as strong a fit to the environment will tend to die off and thus not pass along theirs. It is not necessarily the physically strongest people per se that will survive; rather, how good a fit they have to the environment they confront dictates survival.

This aspect about fit involves not just the physical matters of your body and limbs, but also includes your mental capacities too.

Someone who is very physically strong could be a poor fit for an environment where being cunning is a crucial element of survival. Suppose I am able to figure out how to make an igloo and can withstand harsh cold weather, while someone much physically stronger is not as clever and tries to live off the snowy landscape without any protective cover or housing. The physically stronger people are likely to die off, while the clever igloo makers won’t, and therefore those traits of cleverness would be passed along from generation to generation.

You can be a student of evolution and aim at understanding how the human body and brain have physically evolved over time. Did we at an earlier time period have a body that was fatter or thinner, maybe shorter or taller, perhaps fingers with more or less dexterity? Did we have a brain that was larger or smaller, did it have more or fewer neurons, and was it physically the same shape as our brains today or a different one? These are primarily physical manifestations of evolution.

What about our minds?

Did we think the same way in the past as we do today? Were we able to think faster or slower? Could we mentally conjure up the complex thoughts that we can today, such as the mental efforts needed for Einstein’s theory of relativity or were our predecessors not able to think such in-depth thoughts?

Trying to study the physical elements of human and animal evolution is somewhat straightforward due to the physical evidence of our past. You can generally find the bones of our predecessors and deduce their physical characteristics. You can look at the huts they made and other tools they crafted, providing an indication of what their physical size and condition might have been.

It is a bit more challenging to figure out how our minds have evolved. The emergence of writing and the written record provide a significant clue to our mental capacities, though some would argue that it is not an entirely revealing form of evolutionary evidence. You could also look at the kinds of structures we have built and perhaps use that to guess at how our minds were working at the time, though we would have been limited too by the resources available.

Could you have written a computer program in the 1600s or 1700s? Well, that would be kind of hard to do since there weren’t the computer systems that we have today. Would the minds of those living in that age have been able to write the programs that we can write today? You might assume that of course they could have, and argue that all they needed was a Mac or PC, or maybe Python or Java, to do so.

We know that the abacus seems to have existed in the time of Babylon, and so you could infer that we had a mental capacity at that time for computing of a kind. There are historians who say the Greeks had a mechanical analog device, perhaps we’ll call it a computer, known as the Antikythera mechanism. This Greek “computer” was used to track calendar cycles and to improve astronomical predictions such as the appearance of eclipses.

In any case, you might have always assumed that the thinking that we do today is the same as the thinking of earlier humans, but we don’t know for sure that’s the case. Some people say that our minds are like vessels and the vessels have always been the same, while it is just the content that differs. In modern times, we have different content than was available in Babylon or to the Greeks. Nonetheless, you might argue that they still had the same thinking and mental capabilities as we do today.

This might not be the case. It could be that our mental capabilities have evolved over time. Perhaps our mental processing was of a more limited nature in the past. It could be that our ability to think has gotten better and better.

One also needs to be careful not to unnecessarily separate the physical aspects from the mental aspects of thinking. In other words, the size and shape of the brain, its physical characteristics, might have something to do with our capacity to think. As such, as the brain has physically changed over time, which is relatively easier to document and detect, so too presumably has our ability to think.

You might try to argue that no matter what the physical characteristics of the human brain are, we are still able to think the same way and come up with the same thoughts. This seems like a doubtful theory. If we take a look at what we know of ancient cave dwellers, and the nature of their physical brains, it sure seems unlikely they could have had the same kind of thinking powers that we have today.

I am dragging you through this discussion about the brain versus the mind in order to get us to the question posed by evolutionary psychologists.

Explaining The Basis For Mental Disorders

Why do we have mental disorders or mental illnesses?

Tying this to the aspects of evolution, one might assert that if mental illnesses and mental disorders are a bad thing, which I would guess most people would agree is likely the case, shouldn’t we have mentally evolved in a manner that those mental disorders or mental illnesses would no longer exist today?

Going back to my earlier example about the igloo, let’s recast the matter into the case of those who are prone to mental disorders versus those who are not. If we had a population of people in which one segment tended to have mental disorders and another segment tended not to, then over time, through the gradual winnowing of survival of the fittest, it would seem that we’d expect those with mental disorders to not survive. They should no longer be passing along their mental disorder genes. Meanwhile, those who aren’t prone to mental disorders should be surviving and passing along their “no mental disorders” genes.

Gradually, the population should no longer exhibit mental disorders, one would theorize. It’s an evolutionary psychological phenomenon, we might suppose. Yet, as I mentioned earlier, around 20% of adults will have a mental disorder in a given year, and around 4% will have a debilitating and substantive mental disorder in their lifetime. Doesn’t seem like evolution has led to the eradication of mental disorders.

One argument is that those 20% and 4% numbers are perhaps pretty good. Maybe hundreds of years ago it was more like 50% and 10%, and we’ve gradually had evolution winding down on those percentages. Perhaps we should be pleased to see that it is “only” the 20% and 4% today, and we might also then anticipate or predict that in a few more hundreds of years it will continue to winnow.

Another argument is that maybe we will always see numbers of around 20% and 4% respectively. It could be that our mental processing is going to have mental disorders, no matter what else happens. In a sense, the advent of mental disorders is a kind of rounding error. If you want to have our grandiose capabilities of thinking, you need to accept that a certain percentage of the time there are going to be mental disorders. It is the yin and yang of having mental capacities.

Yet another argument is that we are still in the midst of mental evolution and we don’t really know what is yet going to happen with our mental capacities. Maybe, in some weird way, we are going to evolve toward having even higher percentages of mental disorders. It could be that those with mental disorders are tending toward survival, while those without are not. In this kind of bizarro world order, the 20% and 4% will someday become 90% and 70% (or other overwhelming counts).

You could tag along on this rising tide of mental disorders by theorizing that if mental disorders are the rounding error of having highly tuned mental capacities, then the smarter we get, the larger that rounding error becomes. That’s another vote for the potential of having more mental disorders rather than fewer.

We might need to also add into this evolutionary equation our own efforts regarding mental disorders.

I’ve so far acted as though evolution just happens and there isn’t any kind of human led impact on how things might evolve. Some would argue that we humans can shape to a significant extent how we evolve. For example, there is the couch potato theory that if we aren’t going outside and exercising as much as we used to do, the human body will evolve towards those bodies that are suited for couch potato efforts, apparently playing video games and doing binge watching of online cat videos (hint: we’ll have slovenly bodies!).

There are lots of efforts afoot to try and treat mental disorders. Likewise, there are efforts underway to prevent mental disorders from arising. Could those human led efforts thusly impact the evolutionary elements of mental disorders?

Some say that mental disorders will remain in our DNA and yet will be suppressed by these human led efforts. The potential of having a mental disorder will remain underground, hidden within our minds, and the human led efforts will merely keep it from springing forth. In that sense, we’ll supposedly continue to have the same mental disorder capacities as we do now, but the numbers of those exhibiting it will shrink.

Others would say that we are going to figure out what leads to mental disorders, somewhat akin to finding the source of the Nile. Once we figure out the basis for mental disorders, we’ll be able to switch them off (or, I suppose, on), via specialized drugs or other means. It could be a physical brain aspect that’s involved. Or, it might be a purely “thinking” aspect, and perhaps by a specialized form of meditation you could prevent mental disorders. Someone might discover a universal mantra that when said repeatedly gets the mind to veer away from mental disorder. Who knows?

You could potentially argue that we need to have mental disorders or mental illnesses, since they might be a helpful sign and we just don’t realize it. Perhaps it is like a mental alarm clock. The mental disorder is forewarning that the mind of the person is having difficulties. The mental disorder is like the fever you showcase when your body is starting to get sick. The fever gets your attention and you then take other efforts to help fight a bodily infection.

If we are going to suppress mental disorders, it could knock down our chances of detecting when someone’s overall mind is beginning to tilt. Without the early warning system of an emerging mental disorder, perhaps their entire mind is going to break like an egg. If you suppress a fever and don’t know that a fever exists, you aren’t able to take other measures to ready the body for the infection or illness that’s trying to take over it. The same might be said about the mind.

Implications Of Mental Disorders As a Mind Sign

Does a mental disorder imply that our minds are fragile and brittle?

Some would say that it is such a sign. Others might claim that it is actually a robust kind of signal, allowing the mind to let us know when something is amiss. We just don’t know today that it is that kind of signal, nor what to do about it. Down the road, once we’ve cracked the enigma of thinking, perhaps we’ll realize that mental disorders were a means to ascertain when a mind needed tuning. We just didn’t have the wherewithal to know what the sign meant, nor the tuning forks in hand to deal with it.

There’s also the aggregate versus individual perspective.

Perhaps as a population, as a society, we need to have some percentage of humans that have a mental disorder. This seems at first glance nonsensical. We assume that all mental disorders should be erased or removed from society.

We don’t know what society would be like if we did so. You could claim that society would be better off, and we’d no longer have members of the population that are seemingly abnormal in comparison to the rest of the mental status of the population. Maybe we need to have a certain proportion of the society that has a mental disorder or mental illness. Without it, the society perhaps becomes worse off. Our societal capacity might be undermined if we eliminated all mental disorders, some might argue.

I’d like to leave you there for the moment, regarding the matter of mental disorders as it relates to evolutionary psychology, and let you ruminate about it.

Let’s now shift our attention to Artificial Intelligence (AI).

Should AI Embody Mental Disorders?

Here’s why this matters for AI. If you believe that mental disorders or mental illnesses are an essential ingredient of thinking, and if AI is hoping to create a form of automation that is the equivalent of human thinking, should AI be incorporating “mental disorders” into AI systems?

When I pose this question, there are some AI developers that immediately gag and start to upchuck their lunch or midday snacks. Say, what? Are you serious, they ask?

These AI developers are striving mightily to make their AI systems as “perfect” as possible. Their vaunted goal is flawlessness. That’s the sacred quest for nearly every AI developer and software engineer on this planet. The systems they develop need to work without errors. That isn’t easy to achieve. It is very hard to achieve. We don’t even know if it is possible to have flawless AI systems.

The radical notion that AI systems should intentionally have “mental disorders” is a kind of high-treason statement. It is the antithesis of what developers are trying to do. Oh, so we can not only allow errors to accidentally creep into our systems, they say, but we are now supposed to actually build into those systems an on-purpose dysfunctional aspect? It is truly a sign of the apocalypse, some AI developers would lament.

Well, not so fast with those cries of foul.

Perhaps to reach true intelligence we might need to mix both the good and the bad of human mental processing. Suppose those two are inextricably linked. You might not be able to have the good, if you don’t also have the bad.

In that case, all of these AI efforts are doomed to not actually reach true intelligence, since they are intentionally avoiding and trying to prevent the bad. Simply stated, no bad, then ultimately no true emergence of the good aspects of intelligence. You might hit a barrier above which automated AI systems will never get any higher up the intelligence spectrum.

Notice too that I’ve fallen somewhat into the trap of labeling the mental disorders or mental illnesses as “bad,” which might be an inappropriate categorization. As mentioned earlier, it could be that mental disorders or mental illnesses serve a useful and “good” purpose, but we just don’t yet realize this to be the case. By taking the simplistic route of labeling them as bad, we are lulled into wanting to disregard them and expunge them.

This seems to be an advocacy for intentional imperfection, assuming you are tossing mental disorders into the strictly “bad” classification.

Let’s pursue this logic about the potential need for “mental disorders” in AI systems. If you are interacting with an AI system that is using Natural Language Processing (NLP), you would presumably want the AI to interact with you in a completely fluent and mentally stable way. Suppose it suddenly sparked a moment of schizophrenia during the dialogue with a human. Most of us are familiar with paranoid schizophrenia, often depicted in movies and TV shows, so we’ll use that type for this example.

You are using the AI NLP to place an order for your baseball team via an online sports products catalog. After looking at various baseball bats and interacting with the NLP about which bats might be best to order, the AI unexpectedly drops into a paranoid schizophrenia episode. Are you getting that bat to hurt someone, it asks? Maybe to come and hurt me, it queries of the human. I’d guess that you might be disturbed by this line of questioning and opt to order your baseball gear from another website that doesn’t have an AI system with paranoid tendencies.

Okay, so that seems to showcase that maybe we don’t want AI to embody mental disorders.

I’ll return, though, to the earlier point that maybe we won’t be able to achieve true AI systems without the potential for mental disorders also being present. In that case, it becomes an added factor to make sure that the AI system is able to self-check and catch the mental disorder before it emerges in a manner that is unsettling or creates problems. In the baseball bat example, there might be a self-check that catches the NLP as it attempts to ask the paranoid-like questions, and stops the AI from doing so, avoiding the rather disturbing impact it might have on the interacting human.
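To make the self-check idea concrete, here’s a minimal sketch in Python. Everything in it is hypothetical and invented for illustration: the `check_reply` function, the disallowed phrases, and the topic keywords do not come from any real NLP product; a production system would use far more sophisticated classifiers. The point is only the architecture, a gate between what the dialogue system generates and what the user actually sees.

```python
# Illustrative sketch: a hypothetical self-check layer that screens an AI
# dialogue system's candidate reply before it reaches the user. All names
# and patterns here are invented for this example.

DISALLOWED_PATTERNS = [
    "hurt someone",
    "hurt me",
    "coming for me",
]

def is_on_topic(reply: str, allowed_topics: list) -> bool:
    """Crude relevance check: the reply should mention the task at hand."""
    reply_lower = reply.lower()
    return any(topic in reply_lower for topic in allowed_topics)

def check_reply(reply: str, allowed_topics: list) -> str:
    """Return the reply if it passes the self-check, else a safe fallback."""
    reply_lower = reply.lower()
    # First screen for patterns resembling the paranoid-style questions.
    if any(pattern in reply_lower for pattern in DISALLOWED_PATTERNS):
        return "Let me get back to your order."
    # Then make sure the reply stays on the ordering task.
    if not is_on_topic(reply, allowed_topics):
        return "Let me get back to your order."
    return reply

topics = ["bat", "order", "catalog"]
print(check_reply("This bat has a maple barrel and ships in two days.", topics))
print(check_reply("Are you getting that bat to hurt someone?", topics))
```

In the baseball bat scenario, the second call would be intercepted and replaced with the innocuous fallback, which is exactly the kind of “catch it before it emerges” behavior described above.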

For my article about debugging of AI systems, see:

For ghosts or bugs in AI systems, see my article:

For reverse engineering of AI systems, see my article:

For my article about the aspects of one-shot Machine Learning, see:

Mental Disorders As Highlighting AI Error Handling

I’ll try to make this even more seemingly “sensible” by going the route of error handling in AI systems.

Do you believe that your AI system is utterly error free? If you say yes, I’d like to suggest you either have a toy-sized AI system that has no real complexity, or you are delusional (mental disorder!) about what your AI system is or might do.

Hopefully, most reasonable AI developers would acknowledge that there is a chance that an error exists within their AI system. A reasonable chance and not a zero chance. It might be entirely there by accident. It might be there by some intentional act. In any case, yes, there’s a chance or probability that an error or errors exist in the AI system.

Sadly, many AI developers don’t do much toward catching errors at run time. They focus most of their attention on debugging their systems before release, and once they’ve finished the debugging, they release the AI system and hope that there aren’t errors as yet unfound. They tend not to build into the executing system itself much in the way of catching errors as they arise at run time.

In theory, there should be a robust error detecting capability of any well-built and well-engineered AI system.

This is especially needed for AI systems that might involve serious consequences due to any hidden errors that might be encountered. An AI robotic arm in a manufacturing plant might go awry due to a hidden error or bug, and could potentially harm humans that are nearby, or cause destruction to the facilities of the manufacturing plant.
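As a concrete illustration of run-time error detection for a safety-critical actuator, here is a small Python sketch. It assumes a hypothetical robotic arm that accepts joint-angle commands; the joint names, limits, and the `ArmCommandError` exception are all invented for the example, and a real system would derive such bounds from the arm’s actual specification.

```python
# A minimal sketch of run-time command validation for a hypothetical
# robotic arm. Limits and names are illustrative, not from a real robot.

JOINT_LIMITS_DEG = {"shoulder": (0.0, 180.0), "elbow": (0.0, 150.0)}
MAX_STEP_DEG = 15.0  # largest safe change in one control cycle

class ArmCommandError(Exception):
    """Raised when a command would violate a safety constraint."""
    pass

def validate_command(joint: str, current_deg: float, target_deg: float) -> float:
    """Reject commands that exceed joint limits or move the arm too fast."""
    low, high = JOINT_LIMITS_DEG[joint]
    if not (low <= target_deg <= high):
        raise ArmCommandError(
            f"{joint} target {target_deg} outside [{low}, {high}]")
    if abs(target_deg - current_deg) > MAX_STEP_DEG:
        raise ArmCommandError(
            f"{joint} step of {abs(target_deg - current_deg)} exceeds {MAX_STEP_DEG}")
    return target_deg
```

The key design point is that the check runs on every command at run time, regardless of how thoroughly the planning code was debugged beforehand, so a hidden bug upstream gets caught before it can physically move the arm into a dangerous state.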

So, here’s where I am taking you. If we can agree that an AI system ought to have some definitive and robust error detection capabilities, we might dovetail into this notion and say that if “mental disorders” are needed to achieve truly intelligent systems, we can abide by that assertion, and still hopefully be protected by ensuring that the already-needed error detection capability can cover for whatever untoward action the “mental disorder” portion might cause.

Admittedly, at this stage of our collective understanding of the purpose of mental disorders or mental illnesses in humans, and the role they play in intelligence, I’d be quite hesitant to say that you ought to willy-nilly add such aspects into your AI system while simultaneously trying to curtail or remedy those mental disorders or mental illnesses via an enhanced error processing capability.

Perhaps this is more a future looking kind of approach. Down the road, assume we get stuck trying to achieve true AI, and are unsure of why. We scratch our heads, baffled because we’ve seemingly tried everything that would make “sense” to try and do. Counter-intuitively, the secret sauce it turns out is that we forgot to include mental disorders (well, perhaps we didn’t forget to do so, and instead intentionally avoided doing so), and so now to get to the final level of intelligence we need to add those into our AI systems.

For the nuances of the Turing Test for AI, see my article:

For my article about the potential of a Frankenstein of AI, see:

For the possible rise of super-intelligence, see my article:

For my article about the concerns of an AI singularity, see:

Revealing Of Tops-Down Versus Bottoms-Up AI Approaches

Here’s another twist for you.

First, be aware that there are two major camps of how we’ll achieve true AI.

One camp is the bottoms-up approach that tends to emphasize the Machine Learning or Deep Learning ways of developing an AI system. Typically using a large-scale or deep artificial neural network, this approach is essentially trying to mimic how the brain physically seems to be composed. We don’t yet really know the manner in which thinking arises from the tens of billions of neurons and the hundreds of trillions of synapses in the human brain, but maybe we’ll get lucky in that the efforts to simulate the brain via computational power and artificial neural networks will get us to true AI.

For the other camp, referred to often as the tops-down or symbolist group, the approach consists of pretty much programming our way toward true AI. Rather than trying to mimic the physical attributes of the human brain, we might be able to logically figure out what thinking consists of, and then create it in automation without having to essentially duplicate a brain structure per se.

The top-down camp would likely decry the bottoms-up approach and suggest that it might or might not lead to true AI, but if it does reach true AI, we might not know how it did so. We are only creating another black box and won’t have cracked open its secrets. Fine, say the bottoms-up proponents, since at least we’ll be able to use computational power to do what human intelligence can do, and maybe we don’t need to know how or why it happens but we achieved true AI (plus, there is the chance that during the journey to the black box we might actually unlock its secrets).

The bottoms-up camp might likely decry that the tops-down approach might not ever logically deduce how intelligence arises and could be adrift forever trying to figure it out. It could be something that is not explainable in any manner that we can devise. Perhaps it is always going to be a black box. Rather than fruitlessly guessing at the myriad ways in which intelligence might be invented, let’s lean on the one thing we have that demonstrably has intelligence, the actual human brain.

Ahem, excuse me if I’ve somewhat overstated the extremity of the camp positions herein, which I do just for illustrative purposes. I’ll also offer that these are not necessarily mutually exclusive camps at dire and acrimonious loggerheads (though some are!), and they can and do often work together (yes, they do). Happy campers at times, one might say.

For more about Machine Learning, see my article:

For my article about convolutional neural networks aspects, see:

For the role of probabilities in AI systems, see:

For my article about plasticity in neuroanatomy and Deep Learning, see:

I’m now getting to the twist that I wanted to share with you, showing how the matter of the camps ties to the topic of mental disorders and mental illnesses.

As stated, we have two overarching AI-aiming camps, one that is trying to build true AI from the bottoms-up, while the other camp is trying to go the route of top-down.

Suppose the bottoms-up camp discovers that mental disorders or mental illnesses emerge as part of the Machine Learning or Deep Learning neural networks approach. It just happens. Not because the camp made it so. Instead, once the large-scale Machine Learning or Deep Learning gets large enough, perhaps various forms of mental disorders and mental illnesses begin to appear as an outcrop of massively sized artificial neural networks.

This goes along with the notion that possibly our mental processing involving the “good” is inextricably connected with the “bad” (if we are going to label mental disorders as such).

If that “surprising” emergence happens, it would be quite interesting and would force us to reconsider what to do about the mental disorders and mental illnesses, which would then be ascribed as artificial mental disorders and artificial mental illnesses (artificial meaning as arising in the AI).

Meanwhile, let’s assume that the other camp, the tops-down advocates, either stumble upon the use of artificial mental disorders, perhaps inadvertently arising from the logics of their AI systems, or decide to purposely include mental disorders, in hopes of seeing whether it boosts overall the true AI attainment. They too might need to cope with the nuances of artificial mental disorders and artificial mental illnesses.

That’s some food for thought about the evolution of AI. Whoa, evolution, it’s all around us.

An entirely different perspective on this topic overall is that it at least highlights the importance of thinking about how mental disorders and mental illnesses arise in the manner of how we think. Not many in the AI field are giving this much due. As stated earlier, when your goal is aiming at perfection, you might not be carefully studying the nature of “imperfection,” yet doing so might help you toward getting to the perfection that you seek. The yin and the yang, as it were.

Likewise, it is useful to consider what we can learn or glean from human mental disorders and mental illnesses for purposes of building AI systems from an error processing perspective. I’d dare say that the more we put error processing at the forefront of AI development, the better we will all be.

I mention this too because oftentimes it seems that error detection is shouldered solely by an individual AI developer. In my book, it takes a village to properly fight the error detection battle. By this I mean that if you are an individual AI developer and the only one of your team that seems to be devoted to error detection aspects, it is going to be an uphill battle.

You need to have AI leadership and management that embraces the error detection aspects. If the top leaders are only focused on error prevention, they will miss the aspects of error detection, a crucial fail-safe layer to any properly engineered AI system. An individual AI developer might not be provided with the resources, nor the time and rewards, needed to appropriately deal with error detection. In that case, the culture and leadership of the AI team has undermined a vital element of the AI system, and it is oversimplifying to put your gaze solely on the individual AI developer.

For the possibility of noble cause corruption by AI teams, see my article:

For my article about the burnout of AI developers, see:

For the dangers of groupthink in AI teams, see my article:

For the importance of AI internal naysayers, see my article:

For my article about potential egocentric AI developers, see:

Mental Disorders And Aspects Of AI Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. Auto makers and tech firms need to be wise to error detection for AI self-driving cars, particularly since the safety of self-driving cars and of the humans involved is at stake. Perhaps mulling over the nature of AI and artificial mental disorders will spark such attention.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Many of these comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
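As a rough illustration, these steps can be strung together as a single processing loop. The following Python sketch is a hypothetical toy, not a real autonomous-driving architecture; every function name, data format, and the two-sensor agreement rule are invented solely for the example.

```python
# A toy sketch of the driving-task cycle described above. All names and
# data formats are hypothetical, invented purely for illustration.

def interpret_sensors(raw_readings):
    """Sensor data collection and interpretation: label each raw reading."""
    return [{"source": r["source"], "object": r["object"]} for r in raw_readings]

def fuse_sensors(detections):
    """Sensor fusion: keep only objects that at least two sensors agree on."""
    counts = {}
    for d in detections:
        counts[d["object"]] = counts.get(d["object"], 0) + 1
    return sorted(obj for obj, n in counts.items() if n >= 2)

def update_world_model(model, fused_objects):
    """Virtual world model updating: record the confirmed objects."""
    model["objects"] = fused_objects
    return model

def plan_actions(model):
    """AI action planning: brake if any object was confirmed nearby."""
    return "brake" if model["objects"] else "maintain_speed"

def issue_car_controls(planned_action):
    """Car controls command issuance: turn the plan into a command."""
    return {"command": planned_action}

def driving_cycle(raw_readings, model):
    detections = interpret_sensors(raw_readings)
    fused = fuse_sensors(detections)
    model = update_world_model(model, fused)
    return issue_car_controls(plan_actions(model))

raw = [{"source": "camera", "object": "pedestrian"},
       {"source": "radar", "object": "pedestrian"}]
print(driving_cycle(raw, {"objects": []}))   # {'command': 'brake'}
print(driving_cycle([], {"objects": []}))    # {'command': 'maintain_speed'}
```

Each function stands in for an entire subsystem; the point is merely the flow of data from raw sensor readings through to an issued command.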

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars who continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Returning to the topic of mental disorders and mental illnesses, let’s see how a focus on cognitive impairments might be useful when trying to build robust and reliable AI self-driving cars.

I’ll start by reusing my overall framework about AI self-driving cars, which contains the various overarching elements to be considered about AI self-driving cars. Using a core subset of factors, I’ve put together an indicator of how the AI might exhibit a diminished capacity if any of the selected factors goes awry.

Core Of ABCDEFG Comes To Play

I refer to this as the ABCDEFG, based on the one-word indications that are used to describe each of the seven circumstances.

Let’s start with the letter A and the word Amaurotic.

You might not be familiar with the word amaurotic, which means to have lost your vision or from the Greek meaning to be obscured. This is an apt description of an AI self-driving car that might have some kind of “mental disorder” involving the sensors and their data collection.

The sensors of the self-driving car are the means of the AI being able to detect what is taking place surrounding the AI self-driving car. If those sensors aren’t working properly, the AI would have an inadequate indication of what is taking place around the self-driving car. A pedestrian might not be spotted that is precariously close to where the self-driving car is currently headed. A car ahead of the self-driving car might be misjudged as accelerating forward when it is actually starting to hit the brakes.

An artificial mental disorder or artificial mental illness, which I’m appending the word “artificial” to connote is it something happening within the automation, could cause the sensors to act incorrectly or be interpreted incorrectly.

Suppose the camera is capturing excellent images, and yet the portion of the AI subsystem that interprets those images is acting incorrectly. You or I might look at the images and clearly be able to see a pedestrian, while the AI subsystem interpreting the image might report that the pedestrian is far away or maybe not even there at all.

Why would the AI subsystem falter in such a manner? It could be that there is some kind of error that has arisen within that AI subsystem. Assuming that there is insufficient error checking to catch it, the AI subsystem might pass along its false interpretation to the rest of the AI overall system that is driving the self-driving car.

That’s bad news for the rest of the AI since everything else of the AI self-driving car is taking at face value that the interpretation of the sensory data by the image processing subsystem is working correctly. That’s bad news for any human occupants inside the self-driving car, and bad news for any humans nearby the AI self-driving car, since the odds are that the rest of the AI is going to make poor driving decisions based on the faulty reporting by the sensory “mental disorder” that is occurring.
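One way to keep such faulty reports from being passively accepted is a defensive sanity check between subsystems. Here is a hypothetical Python sketch; the field names, thresholds, and object classes are all invented for illustration.

```python
# Hypothetical sketch: the downstream AI cross-checks an image-interpretation
# report before trusting it, rather than taking it at face value.

def sanity_check_report(report):
    """Return a list of reasons to distrust the report (empty = plausible)."""
    problems = []
    if report["distance_m"] < 0 or report["distance_m"] > 300:
        problems.append("distance out of plausible range")
    if report["confidence"] < 0.5:
        problems.append("low classifier confidence")
    if report["object"] not in {"car", "truck", "motorcycle", "pedestrian", "animal"}:
        problems.append("unknown object class")
    return problems

good = {"object": "pedestrian", "distance_m": 12.0, "confidence": 0.93}
bad = {"object": "pedestrian", "distance_m": -5.0, "confidence": 0.2}
print(sanity_check_report(good))  # []
print(sanity_check_report(bad))   # two problems flagged
```

A report that fails the check would be quarantined or escalated rather than passed along as ground truth.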

If you want to do so, we can play with the mental disorder vocabulary a little bit.

Suppose a car is coming down the street and will pass right by the AI self-driving car, heading in the opposite direction of the self-driving car. This happens all the time when you are driving, and you typically don’t give much attention to a car that is coming toward you in the opposing lane and will presumably go alongside you for a brief instant and then go past you.

When you ponder this for a moment, it is actually remarkable that we allow other cars to zip past us, missing our cars by just a few scant feet, doing so on busy highways and freeways, often with nothing separating us from complete disaster and a head-on collision at frighteningly fast speeds other than a painted line on the street. It should strike terror into us. Instead, we grow numb to the potential for absolute destruction and mayhem.

I recall when my children were first learning to drive that I was at times holding my breath when they drove on busy streets and highways. From the front passenger seat, serving in my role as doting father wanting to help as they became experienced drivers, I couldn’t quite tell how close we were going to be when an opposing car came alongside our car. Often, I was sure that we were going to slam head-on and found myself clenching up at the prospect of it. Fortunately, we did not ram into other cars, nor did other cars ram into us.

Again, nationwide and worldwide, I look at it all as a miracle that we don’t have thousands upon thousands of head-on killer crashes each and every day.

In any case, suppose an AI self-driving car is driving along and another car in the opposing direction is going to eventually come alongside the self-driving car and pass by it. The sensors of the AI self-driving car would normally detect the other car, doing so at some distance prior to the point of nearly crossing each other. The camera would capture images and video streams, from which the image processing AI subsystem would relay to the rest of the AI system that there is an object approaching at a fast speed, it is a car, and it is predicted to pass alongside.

The rest of the AI would likely then have no need to react to this other car. It’s handy to be aware that the other car exists, just in case the AI is trying to determine whether it might be able to use the opposing lane for any upcoming evasive maneuvers that might be otherwise needed. The AI would calculate that the opposing lane is a somewhat risky place now, for the moment, since there’s a car coming along in that lane.

Imagine that the image processing starts to hallucinate or become delusional. I am using those words in a loose manner and don’t necessarily mean those words in a proper clinical psychological way. In the case of the AI subsystem, let’s suppose it has some kind of error or bug and this causes the AI subsystem to categorize the car in the opposing lane as a motorcycle rather than a car. This seems plausible as a result of some internal error.

The error cascades, causing the AI subsystem that is doing the image interpretation to reclassify the “perceived” motorcycle as a dog. This might seem less plausible, but keep in mind that the image processing system likely has lots of classifications for objects that could be detected, including classifying motorized vehicles as cars, trucks, motorcycles, and so on. Likewise, the classifications include types of animals, such as a dog, a cat, a cow, or a horse, any of which could wander onto a road that the self-driving car is driving on.

The AI subsystem that has the error is in a manner of speaking delusional in that it now is reporting that an upcoming car is actually a dog. We can add the hallucination aspect by suggesting that the AI subsystem error also causes it to report that there is a cow and a horse there too, running next to the dog. There isn’t any other moving object adjacent to the upcoming car, but the errors inside the automation are so out-of-whack that it is adding objects into the scene that aren’t actually there at all.

This provides an example of how an artificial mental disorder or artificial mental illness could impact the AI self-driving car.
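A defensive measure against this sort of “delusional” reclassification is a temporal consistency check on a tracked object’s label history. The sketch below is purely illustrative; the category sets and the one-flip rule are invented for the example.

```python
# Illustrative sketch: flag a tracked object whose classification flips
# between incompatible categories across frames (car -> motorcycle -> dog).
# The class names and the single-flip threshold are invented.

VEHICLES = {"car", "truck", "motorcycle"}
ANIMALS = {"dog", "cat", "cow", "horse"}

def labels_suspicious(history):
    """A track whose labels cross between vehicles and animals is suspect."""
    crossed = any((a in VEHICLES) != (b in VEHICLES)
                  for a, b in zip(history, history[1:]))
    seen = set(history)
    return crossed and bool(seen & VEHICLES) and bool(seen & ANIMALS)

print(labels_suspicious(["car", "car", "motorcycle"]))   # False: still a vehicle
print(labels_suspicious(["car", "motorcycle", "dog"]))   # True: implausible flip
```

A real system would weigh confidences and timing, but even a crude check like this can keep a “dog” report from overriding dozens of prior “car” reports unchallenged.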

If you want to consider the role of paranoia, we could say that the image processing has an error but different than the one so far described. Suppose the AI subsystem is able to ascertain that a car is in the opposing lane. Unfortunately, due to an error, the AI subsystem makes a prediction that the car is going to strike head-on to the AI self-driving car.

Maybe the way in which the passing-alongside software routine works is that if there is a clearance of more than 12 inches the flag is set to safe-to-pass, while if the clearance is 12 inches or less it will set the flag to head-on. Even though in this case the car is really going to pass alongside at a “safe” distance of, say, 18 inches, an error in the calculation mistakenly computes the distance to be 8 inches. This then causes the head-on flag to be set. The rest of the AI receives a head-on indication from the image processing interpretation and would presumably react accordingly.

In fact, the routine is now caught up in this error activity. Anything in the opposing lane is going to get flagged as a head-on. That car is flagged as head-on, a bicyclist in the opposing lane is flagged as a head-on, and a pedestrian that is standing at the curb of the opposing lane is flagged as a head-on.

Does the AI seem to now be a bit paranoid? It “thinks” that everyone is out to get it, coming at the self-driving car head-on. Yikes!
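The threshold logic in this scenario can be expressed in just a few lines, using the numbers from the example above; any real clearance routine would of course be far more involved.

```python
# Sketch of the pass-alongside clearance flag described above: more than
# 12 inches of clearance is safe-to-pass, 12 inches or less is head-on.
# The numbers come from the running example; the routine itself is a toy.

SAFE_CLEARANCE_INCHES = 12

def classify_pass(clearance_inches):
    return "safe-to-pass" if clearance_inches > SAFE_CLEARANCE_INCHES else "head-on"

actual_clearance = 18   # the opposing car will really pass at a safe distance
buggy_clearance = 8     # a calculation error reports this value instead

print(classify_pass(actual_clearance))  # safe-to-pass
print(classify_pass(buggy_clearance))   # head-on -- the "paranoid" flag
```

The flag itself is fine; it is the corrupted distance feeding into it that makes everything downstream look like an imminent head-on collision.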

I mentioned that I wanted to use the word “artificial” in front of the phrases of mental disorder and mental illness. Part of the reason to do so is due to the aspect that the manner of how various mental disorders arise in the human mind and the brain is still relatively unknown. We seem to be able to discern the behavioral impacts those mental disorders have, yet we aren’t exactly sure what gives rise to them.

I want to therefore make sure to distinguish that the AI is suffering from a kind of “mental disorder” that does not necessarily arise in the same underlying manner as it does in the human brain and mind. Instead, we’re focusing herein on behavioral results that are similar. By using the word “artificial” I am trying to forewarn that we should not make the logic leap that an AI-based mental disorder necessarily shares the same underlying roots as a human mental disorder; the comparison rests only on the behavioral results.

For my article about what happens when sensors go bad, see:

For the myopic debates about sensors and the cyclops notion, see my article:

For when pedestrians potentially can become roadkill, see my article:

For my article about the importance of AI defensive driving tactics, see:

Sensor Fusion And Mental Disorder Aspects

Let’s now consider what would happen to the AI self-driving car if the sensor fusion portion suffered from an artificial mental disorder.

I’d say that the result would be a Bewildered system. The sensor fusion is intended to bring together the various sensory interpretations and try to determine how they compare with each other. This means that if the image processing is saying there is a car coming along, and yet the radar does not detect a car there, the sensor fusion must ascertain what conclusion to reach. It’s a potentially complex effort to ferret out the consistencies and inconsistencies between the multitude of sensors on the self-driving car and what each is suggesting it has found or not found.

When the sensor fusion is fouled up, it might falsely claim that the sensors are in disagreement, when they actually all agree as to what is outside of the self-driving car. Or, the sensor fusion might falsely claim that all the sensors are in agreement, when in fact the sensors differ in what they have each detected. You might characterize this as a kind of bewilderment, the system being unsure of what the surrounding scene contains.
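The kind of consistency checking that sensor fusion performs can be illustrated with a toy agreement test between two sensors; the sensor names and report formats here are invented for the sketch.

```python
# Toy sketch of sensor-fusion agreement checking. A "bewildered" fusion
# subsystem would report the wrong verdict for these same inputs.

def fusion_verdict(camera_objects, radar_objects):
    """Compare what two sensors report and classify their agreement."""
    camera, radar = set(camera_objects), set(radar_objects)
    if camera == radar:
        return "agree"
    if camera & radar:
        return "partial disagreement"
    return "full disagreement"

print(fusion_verdict({"car"}, {"car"}))         # agree
print(fusion_verdict({"car", "dog"}, {"car"}))  # partial disagreement
print(fusion_verdict({"car"}, set()))           # full disagreement
```

An artificially "bewildered" fusion would be one that returns "agree" for the third case or "full disagreement" for the first, scrambling the verdicts rather than the inputs.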

The next word is Chaotic.

If the virtual world model is suffering from an artificial mental disorder, it won’t be able to properly denote where objects in the real-world are. The model is intended to keep track of where objects exist outside of the self-driving car, along with predictions about where those objects are heading. It is kind of like an air traffic control subsystem, wanting to monitor the status of nearby objects.

Imagine if the virtual world modelling subsystem of the AI were to break down and start putting objects just anywhere. The car that is in the opposing lane might incorrectly be portrayed as being in the same lane as the self-driving car. Or, maybe the pedestrian on the sidewalk is misplaced in the model as though they are standing in the middle of the street.

That would be a chaotic indication.
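One way to guard a virtual world model against such chaotic misplacement is to check each new observation against the model’s own prediction before accepting it. The following sketch uses an invented one-dimensional position, a constant-velocity prediction, and an arbitrary tolerance; it is not a real tracking algorithm.

```python
# Sketch: the virtual world model keeps a constant-velocity track for each
# object; an observation far from the prediction is flagged instead of
# being silently accepted. Positions, units, and tolerance are invented.

def update_track(track, observed_pos, tolerance=5.0):
    """Accept the observation if it is near the prediction; else flag it."""
    predicted = track["pos"] + track["velocity"]
    if abs(observed_pos - predicted) > tolerance:
        return track, "flag: observation far from prediction"
    track["velocity"] = observed_pos - track["pos"]
    track["pos"] = observed_pos
    return track, "ok"

track = {"pos": 100.0, "velocity": -10.0}   # opposing car closing in
print(update_track(dict(track), 90.0))      # matches prediction -> ok
print(update_track(dict(track), 300.0))     # implausible jump -> flagged
```

The flag gives the rest of the AI a chance to treat the model entry as suspect rather than letting a teleporting pedestrian or car pollute the planning stage.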

The word I’d like to cover next is Dysfunctional.

If the AI action planning subsystem of the AI is suffering from an artificial mental disorder, you are going to witness a dysfunctional AI self-driving car. Suppose the sensors are working just fine, the sensor fusion is working just fine, and the virtual world modelling is working just fine. Meanwhile, when the AI action planner inspects the virtual world model, the action planner is messing up and has some form of error in it.

Even though the sensors are reporting that the car in the opposing lane is going to pass alongside safely, and the sensor fusion supports that indication, and the virtual world model clearly states as such, the AI action planner is living in its own dream world. As such, it ignores what those other subsystems have indicated. Thus, maybe the AI action planner decides that it would be best for the AI self-driving car to swerve into the opposing lane, doing so under a false belief that the car in the opposing lane is coming into the existing lane of the AI self-driving car.

This is dysfunctional or worse.
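A possible mitigation is a plan validator that cross-checks a proposed maneuver against the virtual world model before it is allowed through. The sketch below is hypothetical; the plan names and model fields are invented.

```python
# Sketch: validate the action planner's proposal against the world model,
# so a "dysfunctional" swerve into an occupied opposing lane is rejected.

def validate_plan(proposed_plan, world_model):
    """Reject a lane swerve when the model says the lane is occupied."""
    if proposed_plan == "swerve_to_opposing_lane" and world_model["opposing_lane_occupied"]:
        return "stay_straight"   # fall back to a safe default
    return proposed_plan

model = {"opposing_lane_occupied": True}
print(validate_plan("swerve_to_opposing_lane", model))  # stay_straight
print(validate_plan("stay_straight", model))            # stay_straight
```

The validator is deliberately independent of the planner, so a single errant subsystem cannot both propose and approve a dangerous maneuver.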

The next word is Errant.

For the car controls commands issuance, this subsystem of the AI is intended to generate instructions to the car as to what it is supposed to physically do next, such as accelerating, braking, and steering. Suppose the sensors detected an opposing car that was going to pass alongside safely, the sensor fusion concurred, the virtual world model concurred, and the AI action planner concurred, so up to this point no evasive action is called for.

Unfortunately, if the car controls command issuance is suffering from an artificial mental disorder, it might decide to turn the steering wheel directly into the path of that oncoming car. An error of some kind has inadvertently turned a result from the AI action planner that said to stay straight and instead changed it to adjust the steering wheel for a sharp left maneuver into the opposing lane.

This is errant or worse.
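An end-of-pipeline guard could compare the command about to be issued against the plan that produced it. Again, this is a hypothetical sketch with invented command names.

```python
# Sketch: a last-line guard checks that the issued steering command matches
# the action planner's decision, catching an "errant" controls subsystem.

def issue_command(planned, generated):
    """Only let the generated command through if it matches the plan."""
    if generated != planned:
        return {"command": planned, "alert": "controls mismatch detected"}
    return {"command": generated}

print(issue_command("steer_straight", "steer_straight"))  # passes through
print(issue_command("steer_straight", "sharp_left"))      # mismatch caught
```

Falling back to the planner’s output is itself a design choice; a real system would also need to decide what happens when the guard cannot tell which side is wrong.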

The next word is Flailing.

For the strategic AI elements of the self-driving car, suppose that an artificial mental disorder arose. For example, maybe the AI self-driving car is supposed to be headed to downtown Los Angeles. An error in the strategic AI elements gets things messed up and the AI is led toward Las Vegas, Nevada. Maybe the strategic AI is so error laden that it keeps changing where the destination is supposed to be. The self-driving car seems to veer from one direction to another, with no apparent rhyme or reason for doing so.

This is flailing or worse.
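One defensive posture against this kind of flailing is to let the strategic destination change only through an explicit, authorized request, while logging anything else. The sketch below is hypothetical; the class, the authorization flag, and the destinations are invented for illustration.

```python
# Sketch: guard the strategic destination so it changes only through an
# authorized request; a "flailing" subsystem that keeps rewriting it is
# ignored and its rejected requests are logged for later diagnosis.

class DestinationGuard:
    def __init__(self, destination):
        self.destination = destination
        self.rejected = []   # audit trail of refused changes

    def request_change(self, new_destination, authorized=False):
        if authorized:
            self.destination = new_destination
        else:
            self.rejected.append(new_destination)
        return self.destination

guard = DestinationGuard("downtown Los Angeles")
print(guard.request_change("Las Vegas"))                      # ignored
print(guard.request_change("Santa Monica", authorized=True))  # allowed
```

A burst of rejected requests in the audit trail would itself be a symptom worth surfacing to the self-monitoring elements of the AI.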

The last word to cover is Garbled.

If the self-aware AI aspects aren’t able to properly track how well the rest of the AI system is working, perhaps due to an artificial mental disorder, it could lead to a garbling of what the AI self-driving car is going to do. One moment the self-aware AI is informing the rest of the AI that all is well, and the next moment it is warning that one element or another is fouled up.

This is being garbled or worse.
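Even the monitor can be monitored. The sketch below treats rapid flip-flopping of self-reported status as itself a sign of a garbled monitor; the status values and flip threshold are invented for the example.

```python
# Sketch: a watchdog over the self-monitoring reports. Rapid flip-flopping
# between "healthy" and "fault" is itself treated as a garbled monitor.

def monitor_is_garbled(status_history, max_flips=3):
    """Count status changes; too many flips in the window means garbled."""
    flips = sum(1 for a, b in zip(status_history, status_history[1:]) if a != b)
    return flips >= max_flips

print(monitor_is_garbled(["healthy"] * 6))                                      # False
print(monitor_is_garbled(["healthy", "fault", "healthy", "fault", "healthy"]))  # True
```

This is the same idea as a hardware watchdog timer, applied one level up: a supervisor whose job is to notice when the supervisor below it stops making sense.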

For my article about the importance of pre-mortem analysis, see:

For safety aspects, see my article:

For my article about the crucial need for fail-safe systems, see:

For how cognitive timing of the AI system is essential, see my article:


Conclusion

Mental disorders and mental illnesses are a substantial part of the human experience.

Evolution might suggest that we should be rid of those aspects by now. Maybe though it is something still being worked out by evolution and we are merely in the middle of things, and therefore cannot say for sure whether those disorders and illnesses will continue or gradually be diminished based on a survival of the fittest path.

Will AI need to include mental disorders or mental illness if indeed those facets are inextricably tied into human intelligence, and perhaps the only means to reach true intelligence is to include those factors? If so, what does it mean about how we are developing AI systems today? Including artificial mental disorders or artificial mental illnesses seems quite counter-intuitive to the usual belief that AI systems need to be free of any such potential downfalls.

It could be that the basis for including artificial mental disorders or artificial mental illnesses is either of merit on its own, or that we can use the basis to then be more circumspect about how AI systems need to cope with internal “cognitive impairments” or internal errors that might arise in the “thinking” elements of the AI system.

Regardless of whether you think it might be preposterous to consider mental disorders or mental illnesses in the context of building AI systems, you might at least be open to the notion that it brings up the importance of making sure AI systems are as error detecting and correcting as they can be.

If we can be somewhat liberal with the terminology of mental disorder and mental illness, restating it as a form of internal mental errors, and if AI systems are supposed to be crafted on some kind of considered mental processing, we can use this to highlight the importance of individual AI developers taking error handling seriously, and of getting AI teams to do the same. It takes a village to cope with mental disorders and mental illnesses, both in society as a whole and in AI systems in and of themselves, and we all need to work on this.

I’d say there’s no mental confusion on that key point.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.