By Lance Eliot, the AI Trends Insider
When I was a youngster, some of my playmates would hurl a verbal insult at one another by saying that the person was a lizard brain.
Hey, you, yes you, the dolt standing here on the basketball court in the way of our playing a game, get you and your dimwitted lizard brain off the darned court, they would yell out.
I don’t believe that the taunt of being referred to as a lizard brain is used much anymore; it has ultimately gone by the wayside as a toss-able insult for kids.
Lizard brain was handy as a power-packed quick-response snub that could be used in a wide variety of circumstances. If you wanted to upscale the wording, you could instead say that someone had a reptilian brain. A less potent version would be to say that someone had a primitive brain, though the word “primitive” does not have much panache and lacks a gutsy zest or spirit to it. Overall, the implication of any of the variations of lizard brain was that you were presumably as dumb as a lizard, though I suppose we could argue at length about whether lizards should so readily be categorized as dumb, and whether doing so is a somewhat unfair form of bias or prejudice.
Background About Our Triune Brain
There was a kind of scientific revival of referring to a lizard or reptilian brain in the 1990s, when Paul MacLean’s book came out, known today as the now-classic “The Triune Brain in Evolution.”
His research had been going on for many years and he had been gradually formulating his theory of the human brain, namely that it consists of three major portions, thusly referred to as the triune brain. For those of you that enjoy words, triune is a fancy way of saying that you have something that consists of three things in one.
The triune brain theory postulates that the human brain physically evolved over time and consists of three separate parts. Presumably, evolution of the brain over time coincides with the rise of humanity and the bolstering of our thinking processes.
The three parts are united in that they ultimately work together in various fashions to undertake human thinking. Though there is a united aspect, each is nonetheless considered distinctive in its own right.
Furthermore, the triune perspective suggests that it is feasible and reasonable to ascribe particular kinds of thinking-related functionality to each of the three parts.
This notion of having three separable and distinct functions might be likened to certain aspects of a car. If we had a car engine in front of us, I might tell you that one part has to do with generating the thrust that propels the car, that there is also a part or segment for keeping the engine cool by the use of liquid or air, and that a third part or portion lubricates the engine. That’s a triune.
We all would agree that those three elements or portions of the engine are necessary to achieve full and appropriate use of the engine. I say this because we could imagine eliminating any of the three, and we’d end up with problems, I dare assert. Get rid of the cooling system and pretty quickly the engine will overheat and likely seize up. If you had the cooling system and the lubrication, but took out the engine thrust portion, you wouldn’t be getting anywhere soon. And so on.
In the case of the triune brain theory, the three proposed portions are named as follows:
- Reptilian portion (also known as the Lizard Brain)
- Paleomammalian portion (also known as the Limbic System)
- Neomammalian portion (also known as the so-called Thinking Brain)
I’ve already introduced to you the Reptilian portion by mentioning that it used to be an insulting reference to suggest that a human being has a reptilian or lizard brain. In the triune theory, this portion of the brain has to do with your instincts. It includes brain elements that are often described as the brainstem and the basal ganglia (including the striatum). In a manner of speaking, you could say it is the “blockhead” part of the brain that does the simplest and least thoughtful kinds of thinking efforts.
For example, you are in the woods and see an angry bear. Your first instinctive reaction is to use the much-valued fight-or-flight response. You will either immediately start to hightail it out of there and hope that you can outrun the bear, or you might instead opt to stand your ground and take on the bear in a one-on-one battle royale. There’s a moment of first reaction in which you aren’t “thinking” thoughtfully about those two options. In a split second, your feet start to run, and you go along, or you put up your dukes and wait to see how angry this bear really is.
Presumably, the fight-or-flight response is being undertaken by the Reptilian portion of your brain. It has your core instinctive capabilities. The basics of survival are burned into that Lizardry segment. It tends to operate very fast. There isn’t much processing presumably taking place. The reptile-like reflex is a handy tool in your brain since it can kick into gear immediately and spur your body into similarly quick reactions.
Let’s now consider the second portion of the triune brain.
The paleomammalian portion of your brain is said to consist of higher levels of thinking capabilities, including your emotions, your overall memory storage and access, and your behavioral fundamentals such as parenting behavior and reproductive behavior. I mentioned earlier that the triune theory postulates that our brain evolved over time, and as such, this paleomammalian portion is considered the next step up in mammalian evolution, beyond the Reptilian portion.
Okay, let’s get back to the vexing and dangerous moment of standing in front of an angry bear. Suppose your Reptilian portion had set your body running away from the bear. On the heels of that action (pun!), the paleomammalian portion might begin to emerge in your mind, adding some thinking aspects about the bear. Maybe you begin to contemplate that the bear could catch up with you, and it scares the heck out of you. Your mind now races with the emotions of the moment. Until the paleomammalian portion got into the matter, you were going solely on instincts. Now, the emotional roller coaster kicks into gear.
Let’s add the third portion of the triune into the matter, the neomammalian portion.
The neomammalian segment of your brain is the higher-level thinking element of your mind. With this portion, you are able to think in abstract ways, you can communicate using language, and you can mentally craft plans and carry them out. From an evolutionary perspective, this third portion of the brain came along after the other two. It is what apparently differentiates us from other animals, in that it gives us the brain superpower of being able to think in lofty terms, composing Shakespeare, designing rockets to get us to the moon, and making sense of E = mc².
In the case of the angry bear, the neomammalian portion might get involved and identify that running away is not going to be very effective since there is a sheer cliff in the only direction that you can run. This might then have the neomammalian portion concoct a plan of having you climb a tree instead and try to get out of the reach of the bear, and perhaps be able to kick at the bear if it tries to climb the tree too.
All three portions are now chiming in about the bear situation. It’s hard to say which of the three portions will necessarily prevail in this setting.
The Reptilian portion might be overriding any of the rest of your mind and forcing you to act on instinct. It could be that the paleomammalian rush of emotions is going to cloud the Reptilian instincts, and meanwhile the neomammalian portion intercedes and tells them both to put aside their noisy efforts and let it solve the problem at hand.
Three Minds In One Mind But Not Of One Mind Necessarily
When I mentioned that the triune theory postulates that the three portions are united, I was not stating that they are always in agreement. They might be diametrically opposed to each other. The angry bear circumstance can be used to highlight this kind of tension between each of the three portions. You’ve likely spoken with people that told you they reacted in a situation out of instinct, though they believed that the rest of their mind was arguing for an alternative approach to solving the crisis.
At any point in time, any of the three portions might prevail in terms of shaping your thinking and your efforts. There can be a lack of balance in the sense that one prevails, or two prevail, over the other portion or portions.
They could also all three be perfectly aligned.
Suppose the Reptilian portion indicated that you should stand and fight that imposing bear, the Paleomammalian portion was infusing you with a fierce sense of protecting your own child (we’ll add that your son or daughter is standing there with you), and the Neomammalian portion analyzed the numerous avenues of staying or escaping and concluded that challenging the bear was the right thing to do. All three portions happen to be in agreement.
I’m sure you’ve seen people say that they hear voices in their head. Assuming that you aren’t mentally deranged, it could be that you are somehow able to sense or realize the debate among the three portions of your triune brain.
Be aware though that there are some that say you cannot really “know” what your brain is doing, and that any belief that you sensed an internal debate of the triune is completely made-up by you. Perhaps your neomammalian portion is preparing a nifty story about what your brain is doing, a story that has nothing to do with what is actually occurring in your brain. At some point, you might have been told that your brain has the three portions and that they can argue, so your neomammalian “thinking” portion has grabbed hold of that idea and gets you to believe that you can introspectively sense your mental processes.
This then brings us to the matter of whether you buy into the triune brain theory.
Some would say that the theory was handy at the time that it was being proposed. It helped us to get our hands around the vast complexities of the human brain. It sparked discussion and research into the biologically mechanical and chemical inner workings of the brain. For a slew of good reasons, the theory was helpful.
There are now some that argue the triune brain theory is a grossly oversimplified way of modeling the brain. As such, they tend to say that we need to depart from the triune theory. If we remain wedded to the triune theory, we’ll merely continue to chop away at trying to figure out these three parts of the brain. It will constrain how we approach analyzing the brain and the thinking that arises from our brain.
Maybe there are really five major portions of the brain. If so, we are incorrectly and artificially imposing a three-portion model onto something that really consists of five portions. This means that ultimately our three-portion model will need to come apart if we are going to make true further progress. Meanwhile, those clinging to the three-portion model might be missing the bigger picture and might be holding back the discovery of the five major portions.
Let’s also consider that the five portions aren’t necessarily just arrived at by adding two portions to the three that have already been delineated. Maybe we need to completely recast the original three portions. Toss aside the three portions to free your model to be whatever it is to be and start anew to come up with the five major portions.
On the oversimplification criticism, it might also be oversimplified to postulate that there are only, say, three to five major portions. Perhaps there are a dozen. Maybe there are a hundred major portions. Why does the brain need to consist of only a small number of major portions? That’s just a means to make things simpler for us to grasp, but it doesn’t necessarily reflect how the brain actually is structured.
Besides the rather simplistic appeal of the triune theory as a kind of threepeat, it also has the added allure of claiming that the three portions are based on evolution. This provides an added punch to bolster the theory because it offers a basis for why the three exist and how they came to be. Any competing theory is going to have to somehow contend with the evolutionary strengthening associated with the triune theory.
Comparative Neuroanatomy Is Included
Allow me to explain. If you come up with a competing theory that says the brain has five major portions, it will right away be questioned as to how or why the brain has five major portions. What is the cause for this structure? What justifies it?
In the triune theory, we get the nicely wrapped-with-a-bow aspect that the three each evolved over time. They progressively got us toward greater and greater levels of thinking. Each portion has its own set of thinking-like elements. Furthermore, there are lots of other things in life that we ascribe to threes, and so the three portions of the brain are quite a nice fit to our overarching human-held beliefs about the magic of the number three.
Those multiple layers of neatly packaged justification for the triune brain theory are what makes it so compelling and enduring. It is also what makes it hard to dispel. You have to undermine the evolutionary aspects to undercut its rationalization as a theory. You have to argue that the separability of the mental functions is not true in terms of what the brain actually does.
So, which is it: does the triune brain theory provide us with a rich map to guide our efforts to dig into the brain and figure out what makes us think and what makes us tick, or does the theory potentially hamper our efforts and put a constraint around how we perhaps should be studying the brain? The model could be shackling our efforts, and we regrettably don’t realize that’s the case.
If you examine the “evolution” of the triune brain theory, much of it arises from research in comparative neuroanatomy.
In this case, I am referring to how the triune brain theory itself was formulated, and offering insight into an avenue that might be used both to explain the triune brain theory and perhaps ultimately to either reinforce it or challenge and undermine it.
Comparative neuroanatomy is an approach to studying the brain that says we might be able to figure out the human brain by comparing it to the brains of other animals.
By doing a comparison and a contrasting of human brains versus animal brains, we can perhaps discover what we have that they don’t, and this added piece might be the final piece in the puzzle that makes us thinkers and human. Note that there might also be brain portions that animals have and we don’t, pieces that we perhaps jettisoned along the way, and for which doing so aided the emergence of human intelligence. It is important to consider the full range of comparison and contrasting. Don’t throw anything out along the way.
Researchers that undertake comparative neuroanatomy typically look quite closely at physical brains. What is the size of a human brain? What is the size of a mouse brain? What is the size of a monkey brain? They also look at the structures of the brain. How many neurons does each type of brain have? How many synapses? How do they seem to be intertwined?
There is the black box approach too. Rather than trying to carve apart brains like you do turkeys at a Thanksgiving dinner, maybe focus on the behaviors that result from having brains. What kinds of thinking and solving of problems can a human brain accomplish? What about for mice? What about for monkeys?
AI And The Triune Brain
This now takes us into the realm of Artificial Intelligence (AI).
One of the most vocal debates about trying to create automation that exhibits intelligent behavior is whether you need to first know how the human brain physically works, or whether you can skip that aspect and just aim at the behaviors that emerge out of thinking humans.
The triune brain theory attempts to cover both the physical inner workings of the brain and also commingle that with the resulting thinking behaviors that arise from the brain. Some might say you aren’t going to get to unlock both. Trying to get both the inner aspects and the outer aspects figured out might be too much. You are biting off more than you can chew.
As such, some say that for AI, it could be that trying to crack the inner code of the brain structure and how it works, well, we might not ever figure that out. Or, it might take eons to figure it out. Thus, if you are predicating achieving true AI based on the nut cracking of the human brain, forget it since the brain will remain an enigma for a very long time. You are on a fool’s errand if you are putting first the need to decipher the brain, some say.
Those that say we should aim to achieve the end-results of thinking, and not care how it arises in the brain, are readily criticized too. Some would say they are failing to leverage that which we have all around us and readily at our fingertips for studying, namely the human brain. If you are not going to use the brain as your basis to arrive at intelligence, you are presumably having to find a means that otherwise does not exist, or you are accused of somewhat blindly trying to retrace the evolutionary cycle that took thousands upon thousands of years to “figure out” how to arrive at intelligence.
Darned if you do, darned if you don’t.
For those that are developing Deep Learning systems and using artificial neural networks, particularly the use of deep or large-scale neural networks, it might be suggested they are trying to go the route of the inner workings of the brain. They seem to assume that if you amass enough of the linchpins of what the brain seems to be composed of, voila there will be intelligence that emerges from the spaghetti.
There are critics though that say the use of artificial neural networks is not particularly based on the real-world aspects of what we know or have yet to discover about the rudimentary wiring of the brain. It might seem like it, from a surface or simplification basis, but otherwise it is not at all the same thing. It is a mathematical simplification, some say an oversimplification.
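To make that “mathematical simplification” point concrete, here is a minimal sketch of a single artificial neuron, the basic building block of such networks: just a weighted sum plus a bias, squashed through a sigmoid. The input and weight values are arbitrary illustrative choices, and real biological neurons involve spike timing, neurotransmitter chemistry, and plasticity that this abstraction entirely omits.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A textbook artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid squashing function. This is the mathematical
    simplification critics have in mind; it captures none of the richer
    dynamics of a biological neuron."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Arbitrary illustrative inputs and weights.
out = artificial_neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.0)
print(round(out, 3))  # -> 0.525
```

Stacking thousands of these units into layers is essentially all a deep neural network is, which is why critics call it an oversimplification of brain wiring rather than a faithful model of it.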
Furthermore, there are some that assert we are not doing enough of a “comparative neuroanatomy” within the realm of artificial neural networks. Generally, nearly all of the neural networks being done on a large-scale basis are not being done in a manner that allows a comparison and contrasting between them. Each is its own one-off. Each is often hidden from other researchers and not revealed so that others can see what it is composed of.
In the case of a human brain, a mouse brain, and a monkey brain, you can relatively readily dig into those brains and try to compare and contrast them. Sure, you might argue that many of the factors being used to compare and contrast might not have much to do with how intelligence arises in brains. We might be using metrics that aren’t correlated to intelligence and therefore those measures or metrics could be misleading.
But at least the comparisons and contrasts can be made. The same cannot be as readily stated about the large-scale or deep artificial neural networks.
For my article about explanation-AI and deep neural networks, see: https://www.aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/
For my article about deep learning and plasticity, see: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/
For my article about the Turing Test and AI, see: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/
For the irreproducibility problem in the field of AI and Machine Learning, see my article: https://www.aitrends.com/selfdrivingcars/irreproducibility-and-ai-self-driving-cars/
For why we need more transparency in AI, see my article: https://www.aitrends.com/selfdrivingcars/algorithmic-transparency-self-driving-cars-call-action/
AI Self-Driving Driverless Autonomous Cars
What does this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One interesting aspect involves whether the triune brain theory can be applied to the AI systems being developed for AI self-driving cars. We believe so.
Allow me to elaborate.
I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
Here’s the usual steps involved in the AI driving task:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
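As a purely hypothetical illustration, the steps above can be sketched as a chain of processing stages feeding one another. Every function name, data shape, and returned value here is an invented stand-in for illustration, not any auto maker’s actual system.

```python
# Hypothetical sketch of the AI driving task pipeline described above.
# All names and values are illustrative assumptions, not a real system's API.

def sensor_collection(raw_inputs):
    """Interpret raw sensor readings (camera, radar, LIDAR, and so on)."""
    return [{"sensor": name, "reading": value} for name, value in raw_inputs.items()]

def sensor_fusion(interpreted):
    """Combine per-sensor interpretations into one coherent view."""
    return {item["sensor"]: item["reading"] for item in interpreted}

def update_world_model(world_model, fused):
    """Fold the fused sensor view into the virtual world model."""
    world_model.update(fused)
    return world_model

def plan_actions(world_model):
    """Decide what the car should do next based on the world model."""
    return ["maintain_lane", "adjust_speed"] if world_model else ["stop"]

def issue_commands(actions):
    """Translate planned actions into low-level car control commands."""
    return [f"cmd:{action}" for action in actions]

# One cycle of the driving task, stage by stage.
world = {}
readings = {"camera": "clear_road", "radar": "no_obstacle"}
fused = sensor_fusion(sensor_collection(readings))
commands = issue_commands(plan_actions(update_world_model(world, fused)))
print(commands)  # -> ['cmd:maintain_lane', 'cmd:adjust_speed']
```

In practice each stage runs continuously and concurrently rather than in one neat pass, but the sketch shows how the outputs of one portion become the inputs of the next.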
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are over 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.
For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/
Returning to the topic of the triune brain theory, let’s consider how this relates to AI and the advent of AI self-driving cars.
The first aspect involves whether the AI of self-driving cars should be based on a primarily brain-based underlying structure, vis-à-vis Deep Learning and large-scale neural networks, or whether it should be based on a symbolistic approach of focusing on artificial intelligence that is exhibited in human driving behavior.
I earlier described that there is an ongoing and vocal debate about which of those two approaches is the sounder and more likely to get us toward true AI.
Currently, other than the use of Deep Learning and deep neural networks in the sensory data portion of an AI self-driving car, there is actually not a significant amount of the AI in an AI self-driving car that is shaped around the notion of a brain-based kind of structure. For now, the prevailing Version 1.0 of AI self-driving cars is going to be based on a more programmatic construct, and we’ll have to wait and see how well this pans out. It could be that the Version 2.0 of AI self-driving cars swings further toward brain-based kinds of structures, especially as the neural network style of approach further evolves to become more robust.
The second aspect to consider is the notion of comparative neuroanatomy. I had earlier mentioned that there is relatively scant comparison and contrasting going on in the development of Deep Learning and large-scale neural networks. Developments tend to be proprietary and not provided for wide open analyses and comparisons.
The same kind of proprietary and, shall we say, secretive approach is being used by the auto makers and the tech firms that are crafting the AI for self-driving cars. There is no readily available means to do any kind of comparison or contrasting of the numerous AI self-driving car efforts underway, other than to try and examine outward metrics such as the number of miles driven and the number of disengagements, though these are woeful metrics for doing any under-the-hood assessments and comparisons.
This is not to suggest that they are somehow wrong to be so secretive. The investment costs in developing the AI for self-driving cars are enormous, and each of the auto makers and tech firms is hopeful of recouping those costs via the revenues they’ll derive once their creations are functioning.
For my article about stealing AI self-driving car secrets, see: https://www.aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/
For the emergence of ridesharing as a huge market for AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/
For the efforts to reverse engineer AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/
For my article about the relatively minimal open source efforts of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/caveats-open-source-self-driving-cars/
It is assumed by many that the first to the trough of self-driving cars is going to capture the market, a treasure trove awaits, and so why should any of these firms be willing to widely share their expensive secret sauce? It doesn’t make much dollars-and-sense to do so. It could also jeopardize each firm’s respective efforts to cross the finish line first.
Others contend that if AI self-driving cars begin to get into various car accidents, there is a chance that the government will step harder into the fray. This could potentially include forcing the auto makers and tech firms to make available the inner guts of their AI systems, doing so to grapple with what might be a perceived lack of attention to safety aspects.
This aiming to open the kimono might also be undertaken via lawsuits brought against the auto makers and tech firms. If AI self-driving cars do get into various car accidents, you can bet that lawyers will be bringing a slew of lawsuits and will argue that too little was done on safety. To some degree, this will bring the inner portions of the AI systems into the courtroom and into the spotlight.
For my article about the upcoming lawsuits bonanza, see: https://www.aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/
For the emergence of class action lawsuits, see my article: https://www.aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/
For the disingenuous disengagement reporting, see my article: https://www.aitrends.com/business-applications/disingenuous-disengagements-reporting-ai-self-driving-cars/
For my article about federal regulations, see: https://www.aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
Autonomous Cars And The Triune Brain
Let’s shift our attention now toward the triune brain theory and its claim of three major portions of the human brain.
Recall, it is three portions, each separate, yet also united in their efforts, presumably arising from evolution over time and encompassing increasingly elevated levels of thinking capabilities.
As far as I know, there aren’t any similar triune type of efforts underway by the auto makers or tech firms in terms of how they have opted to organize or structure their AI systems for their self-driving cars. In that sense, there isn’t the use of a “three major portions” to the AI systems of self-driving cars.
If you were to macroscopically look at their AI systems for their self-driving cars, my framework that I earlier mentioned would be closer to the notion of dividing up separate portions that work in a united fashion, including, for example, portions for sensor data collection and interpretation, sensor fusion, virtual world model updating, AI action planning, and car controls command issuance. This involves at least five major system portions, though there are many more, and my framework depicts those further.
For my overall framework about AI self-driving cars (as mentioned earlier herein), see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
Overall, I don’t believe we’ve gotten the AI systems of self-driving cars to fall into a rut by deciding to try and stick to some three-major portions notion. That’s the good news.
The not so good news is that some of the AI systems of self-driving cars are overly complex. They are not well structured. They are not well organized. Their existing structure and organization is more akin to being byzantine than to being carefully and systematically composed. That’s worrisome.
You might be wondering how such a modern-day AI system could be anything but perfectly well structured. The answer is that most of these AI systems have been rapidly evolving, partially due to the race to see who gets to the moon first. Pressures to push forward on getting the AI up-and-going are so tremendous that it is difficult to be mindful of how you are putting things together.
There’s a famous line among software engineers in AI self-driving cars, namely that there isn’t any style when you are in a knife fight. Caring about style is way down on the list when you are dealing with pure survival issues and the knife fight is earnestly underway. That’s what is happening in the AI self-driving car arena. It might not seem like a knife fight to those on the outskirts of the industry but be aware that within the industry it is a fierce and ongoing take-no-prisoners environment.
For my article about burned out AI developers, see: https://www.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/
For the dangers of groupthink among AI teams, see my article: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/
For the suppressing of internal AI developer naysayers, see my article: https://www.aitrends.com/selfdrivingcars/internal-naysayers-and-ai-self-driving-cars/
For the idealism of some AI developers, see my article: https://www.aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/
The point is that rather than being constrained to a limited set of major systems or subsystems, with everything else hanging off of those structures, the AI systems of many self-driving cars underway tend to have a more organic and sprawling structure.
With this kind of sprawl, there is a heightened chance of hidden bugs and errors. Rooting out problems becomes much harder. Likewise, trying to include safety, or perhaps retrofitting safety, becomes problematic. I am pretty sure that once AI self-driving car accidents begin to occur, and when the heat is turned on by lawsuits and potential regulatory action, the laying bare of some of these AI systems is going to be ugly.
Let’s consider another element of the triune brain theory and see how it applies to AI self-driving cars. One crucial aspect is that the three major portions of the brain are separate and yet united. They work together, though this does not mean they necessarily get along. The case of the angry bear helped to illustrate that the three portions might have quite different reactions to the same situation.
This is definitely an aspect to be wary of in the AI systems of self-driving cars. With the perhaps overly complex nature of the AI systems and subsystems in a self-driving car, in theory they are working separately and yet are united. The united aspect tends to be shaped around a centralized controller.
Sadly, there are some AI developers and AI self-driving car efforts that have not yet vetted the numerous points of contention among the complex sprawl of AI systems and subsystems in their self-driving car.
This means that you might have an image processing portion that examines a camera image or video stream in real-time and determines that the road ahead is clear, while meanwhile the radar processing portion determines that there might be a truck or similar large object crossing the road ahead of the in-motion self-driving car. Some believe this might have been a factor, for example, in the real-world case of the Tesla in Florida that ended up in a deadly crash.
For my article about Tesla aspects, see: https://www.aitrends.com/selfdrivingcars/forensic-analysis-of-tesla-crash-based-on-preliminary-ntsb-june-2018-report/
These kinds of internal AI systems and subsystems contentions are akin to reacting to the angry bear in my story earlier. Which of the competing “viewpoints” about what is ahead should prevail when the AI action planner has to decide whether to continue the car unabated forward or maybe do an emergency stop?
If you begin to count the AI systems and subsystems in a self-driving car, and multiply that out to consider the number of potential internal contentions, it becomes clear that unless the AI developers are quite meticulous about building contingencies, at some point a loophole is going to be reached. The loophole might arise only once in a blue moon, but when you are dealing with a multi-ton car going 65 miles per hour, blue moons are going to be costly in terms of potential human injuries or deaths.
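One common way to handle such sensor contentions is a conservative arbitration rule: when the sensors disagree about whether the road is clear, the system errs on the side of assuming an obstacle. This is a minimal sketch of that idea; the function name, reading labels, and the two-sensor setup are all illustrative, not any automaker's actual design.

```python
def arbitrate_road_status(camera_reading, radar_reading):
    """Hypothetical conservative arbitration rule: if any sensor reports
    anything other than a clear road, treat the road as possibly blocked,
    rather than letting one sensor's "all clear" prevail."""
    readings = (camera_reading, radar_reading)
    if all(r == "clear" for r in readings):
        return "clear"
    return "obstacle_possible"

# Camera sees a clear road while radar detects a crossing object,
# as in the crossing-truck scenario described above.
status = arbitrate_road_status("clear", "object_crossing")
print(status)  # obstacle_possible
```

A conservative rule like this reduces the chance of missing a real obstacle, at the cost of more false alarms; the tradeoff between the two is exactly the kind of contingency that needs to be vetted deliberately rather than left implicit.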
For my article about egocentric AI developers, see: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/
For safety aspects of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/
For the ghosts that will appear in AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ghosts-in-ai-self-driving-cars/
For my article about debugging of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/debugging-of-ai-self-driving-cars/
For the hiring and managing of AI rockstars, see my article: https://www.aitrends.com/selfdrivingcars/hiring-and-managing-ai-rockstars-the-case-of-ai-self-driving-cars/
Being Fast Is Important
Another interesting aspect of the triune brain theory consists of the notion that the Reptilian portion is likely to react more quickly than the other portions. It’s the gut instinctive reaction mechanism.
This can be welcomed when you are faced with a rapidly emerging situation for which there might not be time to think things through. Merely reacting upon impulse might be the difference between making it out of a dire situation versus not.
Upon seeing an angry bear, in the split seconds spent allowing the neomammalian portion (the Thinking Brain) to ponder what to do, the bear might have time to grab you, and your options are then narrowed, such as no longer having the opportunity to run away.
Meanwhile, the Reptilian or Lizard Brain could perhaps have saved you by acting instinctively. Of course, the Reptilian portion could also cause your death, since the instinct might be to fight the bear, while the Thinking Brain might have realized that fighting the bear was a no-chance solution. There are tradeoffs in terms of which of the portions might prevail.
But the essence of this point is that the Reptilian portion is suggested to be the fastest of the three portions of the brain.
We can leverage that notion into the design of AI self-driving cars.
One of the biggest issues confronting an AI self-driving car is the time factor. The AI system must continually be watching the clock.
A car that’s in-motion at 65 miles per hour has a limited amount of time to decide what action to take. The AI cannot meander or ponder excessively a myriad of options. Indeed, similar to the angry bear, if the AI is in the midst of determining that it could escape getting hit from behind by another car, doing so by slipping into a small gap between two cars to the right of the self-driving car, it could be that by the time the AI decides to move into the gap, the gap has dissipated because the cars in the other lane have moved forward.
With the self-driving car in motion, and when complicated by other nearby cars also in motion, the timing of figuring out what to do must be relatively fast. Options as to maneuvers of the self-driving car will only be possible in short windows of time. The longer the AI takes to identify what to do, the more the available and viable avenues of safety are reduced.
I refer to this timing matter as the “cognition timing” of the AI self-driving car. This is a real-time system and therefore must battle the clock at every moment. When the Uber self-driving car incident occurred in the Phoenix area, I right away predicted that it might be partially due to an internal timing aspect, and it turns out that I was right. Time is king in the AI systems and subsystems of a self-driving car.
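The cognition timing idea can be sketched as a decision loop that works against an explicit time budget: the planner scores maneuver options but commits to the best one found so far when the budget runs out. This is an illustrative sketch only; a production real-time system would rely on a scheduler with hard timing guarantees rather than wall-clock polling, and the maneuver names and scores here are made up.

```python
import time

def decide_with_deadline(candidate_maneuvers, budget_seconds):
    """Pick the best-scoring maneuver found before the time budget expires.

    candidate_maneuvers: iterable of (maneuver_name, score) pairs, e.g. the
    output of an option-evaluation subsystem (hypothetical interface).
    """
    deadline = time.monotonic() + budget_seconds
    best_maneuver, best_score = None, float("-inf")
    for maneuver, score in candidate_maneuvers:
        if score > best_score:
            best_maneuver, best_score = maneuver, score
        if time.monotonic() >= deadline:
            break  # out of time: commit to the best option found so far
    # If no option was evaluated at all, fall back to a basic safe action.
    return best_maneuver if best_maneuver is not None else "emergency_brake"

choice = decide_with_deadline(
    [("merge_right_gap", 0.9), ("hard_brake", 0.6)], budget_seconds=0.05)
print(choice)  # merge_right_gap
```

The key design point is that the deadline is checked inside the loop, so a slow evaluation cannot silently consume the short window in which a maneuver, such as slipping into that gap on the right, is still viable.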
For my article about cognition timing, see: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
For my initial predictions about the Uber incident, see my article: https://www.aitrends.com/selfdrivingcars/initial-forensic-analysis/
For my article about subsequent indication about the Uber incident, see: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/
For the need of fail-safe AI for self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/fail-safe-ai-and-self-driving-cars/
Wrapping It All Up
Pulling together the triune brain theory model with the need for fast processing by the AI of a self-driving car, we advocate an approach of having a kind of Reptilian portion of the AI system for a self-driving car.
Here’s what we mean by this Reptilian metaphor.
There should be a core aspect of the overall AI system that acts like an instinctive portion. It is relatively stripped down in comparison to the full-blown and likely overly complex entirety of the AI system and subsystems of the self-driving car. This tightly woven and smaller core is the last man standing if the clock has run out and something needs to be done.
The overarching AI system might tie itself into a knot and not extract itself in time to realize that something must be done about the control of the self-driving car. In a circumstance whereby the self-driving car has gotten into a dire situation, defaulting to inaction because the overall AI system has gotten bogged down would seem an undesirable approach.
In lieu of the overarching AI being able to proceed, the core or instinctive portion would step into the matter. Due to being stripped down, it is built and has been tested to be fast, very fast. As needed, it would issue car control commands of a fundamental nature to try and save the day.
I’d like to emphasize that this is a last-resort option. The core is simplistic. It does not have the means to make the more robust kinds of decisions that the fuller AI system and its array of subsystems does. The instinctive choices it makes can be the wrong choices. We’re focusing herein on the difference between making no choice, assuming the fuller AI has not been able to reach a conclusion about what to do, and making some choice, albeit one that is off-the-cuff.
For some AI developers, this idea that there would be a stripped-down Reptilian-like core that could make any decisions and issue car control commands is horrifying and entirely out-of-the-question. No way, they would say. You cannot drop down to instinct for the driving of a car. Abysmal!
I would certainly and wholeheartedly agree that it is quite unappetizing.
If you can instead guarantee that the fuller AI system will never get into a bogged-down state in which it is unable to make a needed decision in time (meaning that no car control commands will be issued and whatever the self-driving car is doing will continue by default), and the overarching AI system is so solid that this absolutely will never happen, then the Reptilian-like core is most certainly not needed. Scrap the Reptilian, in that case.
I have serious doubts that anyone can reasonably issue such a guarantee.
Therefore, the Reptilian gets back onto the table as a last-resort option.
Of course, this is neither easy to build nor easy to invoke.
What portion of the AI system and subsystems will decide that the Reptilian core should be invoked? It could be a Catch-22. The overall AI system is so hopelessly entangled in what it is doing that it fails to realize the clock has run out and therefore fails to hand the reins over to the Reptilian core. In that case, the Reptilian was there but not invoked, and it is a sad day that the very contingency put in place had no chance to kick into gear.
If you say that the Reptilian core can invoke itself, which presumably is how the triune brain theory postulates that things happen, we are then faced with a different kind of problem. Let’s suppose the neomammalian portion of the AI system is doing its thinking thing, and the Reptilian core will activate when, say, the clock reaches a preset countdown threshold.
Okay, so the overarching AI system is trying to consider a myriad of options and examining the sensory data and the rest. The time threshold is reached. The Reptilian-core leaps to life. It does a rapid analysis and decides that the brakes should be stomped upon, doing so by immediately issuing a full-stop command to the braking system of the car.
It could be that the Reptilian just saved the human occupants in the self-driving car. The self-driving car comes to a screeching halt. It was about to ram into a stopped car that is full of humans that are on their way to a baseball game. Those humans are also saved by the instinctive Reptilian.
Not wanting to mislead you into believing the Reptilian will always be right, let’s reconsider the scenario and assume that the Reptilian does decide to stomp on the brakes. Unfortunately, doing so causes the car behind the self-driving car to ram into the self-driving car. This kills the occupants of the self-driving car. It also kills the occupants in the ramming car. Oops. Bad choice by the Reptilian.
The real twist that I was trying to take you toward was the notion that it could be that the Reptilian gets invoked, due to the time threshold countdown, and while the Reptilian is deciding what to do, the neomammalian portion of the AI system and subsystem finishes figuring out what to do. The thinking portion says to push full throttle and accelerate out of the crisis. The Reptilian says to hit full brake and come to an immediate halt.
Yikes, these are diametrically opposed viewpoints!
We’ve already discussed that the same can happen in the triune brain theory model. Each of the three major portions of the brain is separate and can reach its own conclusions about what to do. They might not agree with each other. In your own brain, which of the three prevails? It is likely contextually determined rather than necessarily principled.
In any case, there would need to be a thoughtfully composed hand-off mechanism about when the Reptilian-core of the AI self-driving car is to be invoked, and what to do if during the live action of the Reptilian that the overarching AI system is ready to take back control. This is generally true of any relatively complex real-time system and an issue at the forefront of properly done real-time designs.
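One simple form of such a hand-off mechanism is a watchdog-style countdown: the deliberative planner gets a fixed time budget, and if it has produced nothing when the countdown expires, the fallback core's command is issued. The sketch below is illustrative only, and it encodes just one arbitration policy among several possible, namely that a deliberative answer reached in time prevails over the instinctive one; the command names and planner interface are hypothetical.

```python
import time

FALLBACK_COMMAND = "full_brake"  # the stripped-down Reptilian-like core's choice

def drive_decision(deliberative_planner, budget_seconds):
    """Watchdog-style hand-off sketch (not a production real-time design).

    deliberative_planner: iterator yielding progressively refined decisions
    (a hypothetical interface). If nothing has been yielded when the
    countdown expires, the fallback command is issued instead.
    """
    deadline = time.monotonic() + budget_seconds
    decision = None
    for refined in deliberative_planner:
        decision = refined  # keep the latest refinement from the planner
        if time.monotonic() >= deadline:
            break  # countdown expired: stop deliberating
    # Policy: a deliberative answer reached in time wins; otherwise instinct.
    return decision if decision is not None else FALLBACK_COMMAND

# The planner never reaches any conclusion in time: instinct prevails.
print(drive_decision(iter([]), budget_seconds=0.01))  # full_brake
```

The conflicting-commands scenario above (full throttle versus full brake) shows why the arbitration policy itself must be designed deliberately: the countdown decides *when* the fallback acts, but a separate, explicit rule is still needed for *which* answer wins when both portions have one.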
The triune brain theory is fascinating and provides much food for thought about how we humans seem to think. It has been a useful pair of glasses through which to see the world of the mind and attempt to investigate it. Its simplicity has wide appeal and makes the theory accessible both to the public and to those steeped in the science of the brain.
There are those who have gradually come to believe that the triune model is oversimplified. This could undermine research by falsely portraying a structure that does not truly exist. Worse still, it might blind us from seeing the true structure, or constrain us from a willingness to explore and find the true structure.
As a metaphor for the design of AI systems, we can use the Reptilian portion as an indicator that there are going to be times at which a real-time AI system might need an instinctive core that is fast and streamlined. Going on instinct or guts is not risk free. In fact, it is likely much higher risk than using the other two portions of the triune brain, but if those portions aren’t able to get the job done, it might be that instinct will save the day.
What is also applicable about the triune brain theory is the basis of using comparative neuroanatomy. It sure seems like it might be advantageous to try and do the same kind of comparisons and contrasts among large-scale Deep Learning neural networks. It could provide impetus for making greater progress on that front.
The next time that you are confronted with a personal crisis of some kind, perhaps you come upon an angry bear in the woods, try to see if you can sense your brain rattling around with thoughts about the situation, and whether it seems like those mental thoughts divide into the three major portions of the triune brain theory. As a caveat, please don’t stand there too long trying to do this introspection, since I’d prefer that you escape the angry bear first. Use your Reptilian portion, even if it means that someone might later call you a Lizard Brain. It would be worth it.
Copyright 2019 Dr. Lance Eliot
This content is originally posted on AI Trends.