Plasticity in Deep Learning: Dynamic Adaptations for AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

One of the most discussed advancing frontiers is plasticity.

At the forefront of the fields of cognition, biology, social ecology, physics, chemistry, computer science, neuroscience and studies of the brain (where it goes by neuroplasticity), and many other disciplines, plasticity refers to the capacity of an organism, or its equivalent, to change and adapt to its environment or habitat.

There have been recently reported cases of phenotypic plasticity in certain kinds of toads, roundworms, lizards, and other creatures that have caused some evolutionary biologists to take a second look at Darwin’s theory of evolution. We all know from our science and history classes that Darwin shook up the world when he proposed his theory that survival of the fittest implies organisms don’t just suddenly change their core traits to fit the environment.

Instead, there are presumed random genetic mutations that change a trait; if the changed trait is a better fit to the environment, the mutated creature will tend to survive and procreate in that environment. As the mutated creature continues to survive and procreate over some number of generations, more so than the unmutated organisms that are a lesser fit to the environment, the mutated variety becomes prevalent and the others gradually diminish or die off.

Prior to Darwin, there were naturalists, such as Jean-Baptiste Lamarck, who postulated that it might be possible for evolutionary change to happen within a single lifetime rather than working itself out over multiple generations. It was Darwin and others of his ilk who asserted that the “single lifetime” approach was essentially infeasible and unlikely, and that the notion of a multi-generational playout was more logical and likely.

Let’s consider the use case of a giraffe and its neck.

Suppose we have a bunch of giraffes and they all have long necks. The long necks allow them to eat leaves from acacia trees, and they need to consume around 75 pounds of such food per day to remain hunger-free. The acacia trees have thorns that tend to prevent other animals from eating the leaves, especially at the lower portions of the tree, and the long neck of the giraffe gives it an environmental advantage since it can reach higher up in the tree.

Using Darwin’s theory of the world, let’s pretend that we have a giraffe that gets born with a much shorter neck. Assume it is a random mutation of the neck gene of giraffes. What will happen to the shorter-necked giraffe? It might or might not survive its lifetime, perhaps starving because it cannot reach the plentiful and available leaves higher up in the acacia tree.

Let’s imagine that this shorter-necked giraffe manages to mate during its lifetime and the offspring carry the shorter-neck gene, producing once again shorter-necked giraffes. Presumably, the long-necked giraffes are still doing fine, living and procreating, while this new version of a giraffe, the short-necked version, struggles to survive. It could be that the shorter neck is such a lousy fit to the environment that eventually all those with the mutated gene die off and any of their offspring die off too. No more shorter-necked giraffes, until or unless the random mutation happens to occur again.

So far, so good, in terms of conforming to what Darwin’s theories expound.

Somehow, let’s pretend that the acacia tree suddenly stops producing leaves high-up and instead only does so nearer the lower portions of the tree.

The environment has changed!

Now, the longer-necked giraffes find themselves in a bit of a pickle. They need to dip further down and try to eat those luscious leaves. But imagine that it is very hard for them to do so. Furthermore, in the act of bending down like this, they no longer keep their eyes on predators. This is a double whammy for the long-necked giraffes. They are having difficulty getting sufficient food for survival, plus predators are now able to sneak up on them more easily and cull the herds of giraffes.

Meanwhile, let’s go ahead and revisit our randomly mutated gene that produces short-necked giraffes. A short-necked giraffe is born based on the randomly mutated neck gene. It is well suited to eat the leaves lower down on the acacia tree. It is also better positioned to spot predators, at least compared to the bent-over long-necked giraffes. The short-necked giraffe is more likely to live out its lifetime and procreate, and the offspring will enjoy the same kind of advantage in this changed environment.

Eventually, presumably inexorably, the long-necked giraffes are going to thin out and die off, while the short-necked giraffes will be a better fit to the environmental change that occurred and thrive.

As an aside, let’s all agree that this is a rather simplistic view of evolution, since we would more likely have a multitude of environmental changes taking place simultaneously, all of which can both aid and possibly undermine the status of giraffes (both long-necked and short-necked) in various ways. We might also expect that other kinds of mutations are randomly occurring that can hinder or help survival (maybe long legs versus short legs, maybe sharp eyes versus less-focused eyes, and so on).

In any case, here’s a question for you to ponder: Can a long-necked giraffe, within its own lifetime, suddenly “mutate” into a short-necked giraffe in order to better fit this changed environment of the acacia trees?

I’d wager that most if not all of us would assert that the long-necked giraffe cannot suddenly and spontaneously mutate during its own lifetime. It is stuck with the genes that it has. Tough luck. It might produce offspring having a random mutation toward a shorter neck, though this would presumably be purely by random chance and not by something that the adult did to cause it to occur (unless perhaps it mated with the shorter-neck giraffes under some belief this would be a good path to offspring survival or maybe by simply being attracted to the now hunger-free shorter necked blossoming giraffes). The adult long-neck though is doomed to live a life of a long-neck and might as well party to the very bitter end.

What has caused a bit of a stir in standard Darwinian theory is that there seem to be some animals that defy the “you cannot change in your lifetime” provision. In one particular species of toad, the spadefoot toad, the itty-bitty tadpoles apparently tend toward eating algae; they are calm, mild-mannered, and small-jawed. It is reported that if the body of water the tadpoles are in contains, say, fairy shrimp, some of the tadpoles “transform” into aggressively devouring carnivores, displaying bulging jaws along with a fierce demeanor.

So, when the environment is the normal and expected calm pool of water and there is nothing carnivorous to eat, the tadpoles are relatively docile algae eaters. If instead the water contains crustaceans such as fairy shrimp, a change from their normal environment, some of those same tadpoles become intense meat eaters that will take on all comers, which gives them an added advantage in that environment.

It would almost be as though a long-necked giraffe could suddenly transform into a short-necked giraffe, during its lifetime, in order to adjust to the changed environment about the acacia trees. Doing so would presumably make it a better fit to the changed environment. This would in turn give it better odds of survival. If this same aspect was innate in the transformational giraffe, it could pass it along to its offspring which then would also be better suited to the changed environment.

Plasticity-First Form of Evolution Comes Into Play

One explanation for the transforming tadpoles and other such creatures is the suggestion that there might be a plasticity element involved. This plasticity theory keeps Darwin’s theory intact. Some are referring to the “discovery” (or, more accurately, the scientific realization and emergence) of plasticity as a sign that maybe there is a plasticity-first form of evolution.

Let’s consider how plasticity comes into play.

Suppose that some of the long-necked giraffes have a hidden trait that they’ve not yet had cause to consider using. The hidden trait is that they can bend their necks down relatively easily and do so while still keeping their eyes up and able to spot predators. The more traditional long-necked giraffes don’t have this innate trait.

All of the long-necked giraffes lived together in harmony and did not realize that some of them had this bending-neck capability baked into their genetics, usable during their lifetime if they wished. Let’s assume there was no outward sign that some of the giraffes had this hidden trait. The special-trait giraffes blended in naturally with the rest of their long-necked friends and colleagues.

When the environment changes, involving the acacia trees’ leaves, all of a sudden the long-necked giraffes that have this hidden trait are able to immediately and readily adjust to the environmental change. From an observer’s perspective, we might think that some of these giraffes have magically “transformed” nearly overnight, doing so in the midst of their own lifetime. Instead, what’s really happened is that some giraffes happened to have this otherwise hidden trait, and now there was value in employing it for their survival, giving us humans the chance to witness it.

This is one possible explanation for the tadpoles too. Perhaps they have a built-in dominant trait of being polite vegans, but they also have a hidden trait of being fierce carnivores when needed. Upon experiencing an environment in which the hidden trait has value, some of those tadpoles display the hidden trait. For a human observing the tadpoles, it seems strange and unpredictable that some would “transform” within their given lifetime, when in fact they’ve simply been triggered to use a hidden talent that was there all along.

It could be that there are even more such hidden traits in that subset of the long-necked giraffes and the tadpoles. We might just not know those hidden traits are there because we’ve not seen them deployed.

In fact, it could be that the subset of giraffes or tadpoles have not just specific hidden traits of varying kinds, but maybe they have an overarching plasticity trait. The plasticity trait governs their ability to deploy other hidden traits and aids and abets the emergence of those hidden traits.

In that case, the environment can change in a myriad of ways, and yet those giraffes that carry the plasticity trait are going to have better odds of coping with the changed environment, even during a specific lifetime in which the environmental change emerges. This plasticity trait might end-up making them especially fit to survive and also therefore have a solid chance of producing offspring carrying the trait.

We can recast the topic of plasticity into another realm, namely the nature of the human brain. The human brain appears to be capable of changing and adapting, doing so in neurobiological ways and also in more abstract cognitive ways. The synapses that connect the neurons in the brain are continually being formed and adapted, which we assume is the brain’s way of reorganizing itself, learning, and changing.

For those of you versed in Machine Learning (ML) and Deep Learning (DL), you likely know that right now most of the computational models used for crafting Artificial Neural Networks (ANN or sometimes shortened to just NN) are typically rigid and locked-in once they’ve been initially trained.

You toss a million pictures of cats at a deep learning system. Once you are satisfied that it pattern-matches relatively well in discerning what a cat looks like, having adjusted (automatically or semi-automatically) the number of artificial neurons, the layers, and their connections, you will tend to deploy that deep learning system “as is” and let it do its cat-identification magic.

The finalized or deployed version takes as input a new image that might or might not contain a cat, ascertains to some probability whether there is a cat in the picture, and indicates where the cat seems to be.

In today’s deep learning implementations, it is rare that you would have the deployed artificial neural network change and adapt while it is deployed. You more likely might do a retraining if you believe that the deep learning needs further depth or refinement. This would be done in a controlled setting usually, and not in a live environment.

If we are all ultimately aiming to have “true” deep learning by properly modeling and mimicking how the human brain really works, it would seem that we ought to be building into our Machine Learning and our artificial neural networks the plasticity capability that real brains seem to have. In the real world, the brain is continually changing and adapting, and so should our deep learning models.
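
To make the contrast concrete, here is a minimal sketch, in plain Python with NumPy, of the difference between a model that is frozen at deployment and one that keeps making small on-the-fly adjustments. The class, its parameters, and the update rule are illustrative assumptions, not any particular production system.

```python
# A minimal sketch contrasting a frozen deployed model with a "plastic" one that
# keeps adapting in the field. Everything here is illustrative, not a real system.
import numpy as np

class TinyDetector:
    """A logistic-regression stand-in for a trained neural network."""

    def __init__(self, n_features, learning_rate=0.01, plastic=False):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = learning_rate
        self.plastic = plastic  # if True, the model keeps learning after deployment

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def observe(self, x, label):
        """Called during deployment with a new example and its eventual true label."""
        p = self.predict_proba(x)
        if self.plastic:
            # Small online gradient step: the kind of "plasticity" argued for above.
            grad = p - label
            self.w -= self.lr * grad * x
            self.b -= self.lr * grad
        return p

frozen = TinyDetector(n_features=4, plastic=False)    # deployed "as is"
adaptive = TinyDetector(n_features=4, plastic=True)   # adjusts as the environment drifts
```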

For more about deep learning, see my article: https://aitrends.com/ai-insider/imitation-deep-learning-technique-self-driving-cars/

For the notion of possibly starting over with AI, see my article: https://aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

For the topic of the singularity, see my article: https://aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For the Turing test and how we’ll know if we’ve achieved intelligent systems, see my article: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that we are building into our AI systems is a form of DL neuronal plasticity. We believe it is essential as an element for advancing AI and likewise ML and deep learning capabilities of computing.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
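
As a rough illustration of how these steps chain together, here is a minimal sketch of the driving loop in Python; the function names, data shapes, and the trivial placeholder logic are assumptions made for clarity and do not depict any actual self-driving stack.

```python
# A minimal, illustrative sketch of the five-step driving cycle listed above.
# All names and the placeholder logic are assumptions, not a real stack.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    objects: list = field(default_factory=list)  # tracked objects around the car

def collect_and_interpret(sensors):
    """Step 1: read raw sensor data and interpret it into detections."""
    return [detection for s in sensors for detection in s.read()]

def fuse(detections):
    """Step 2: reconcile overlapping detections from the different sensors."""
    return detections  # placeholder; real fusion would deduplicate and weigh sources

def update_world_model(model, fused):
    """Step 3: refresh the virtual world model with the fused picture."""
    model.objects = fused
    return model

def plan_action(model):
    """Step 4: decide the next maneuver given the current world model."""
    return {"throttle": 0.0, "brake": 0.0, "steering": 0.0}  # placeholder action

def issue_controls(action, car):
    """Step 5: send the chosen commands to the car controls."""
    car.apply(action)

def driving_cycle(sensors, world_model, car):
    """One pass through the full loop, repeated many times per second."""
    fused = fuse(collect_and_interpret(sensors))
    world_model = update_world_model(world_model, fused)
    issue_controls(plan_action(world_model), car)
    return world_model
```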

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of plasticity, consider for a moment that by-and-large the auto makers and tech firms are currently making use of Machine Learning and DL for AI self-driving cars in a rather narrow portion of the “stack” or spectrum of driving tasks that need to be performed by the AI system.

Less Effort Going into Use of ML and DL for Sensor Fusion

In terms of the driving-task stack, by-and-large today’s use of ML in self-driving cars is primarily focused at the sensors level of the AI self-driving car automation. There is much less effort underway in terms of using ML and DL for the sensor fusion portion, and even less so for the AI action planning and for virtual world model updating and analysis.

This initial preoccupation with the sensory data makes sense. The multitude of sensors and their data capture provides an exquisitely rich source of voluminous data, and it is relatively easy to come by. Furthermore, vast swaths of data are customarily needed to best make use of today’s ML and DL capabilities; data is their lifeblood, so to speak. For example, feed a ton of images of street signs into a convolutional neural network and you are presumably going to end up with a handy and relatively accurate visual detector of street signs when an AI self-driving car is on the road.
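
As a quick illustration of that kind of sensor-level use, here is a minimal sketch of a small convolutional classifier for street-sign images, assuming PyTorch is available; the input size, layer sizes, and the class count are placeholder assumptions rather than a recommended architecture.

```python
# A minimal sketch of a convolutional street-sign classifier (illustrative only).
import torch
import torch.nn as nn

class StreetSignNet(nn.Module):
    def __init__(self, num_classes=43):  # class count is a placeholder assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):              # x: (batch, 3, 32, 32) RGB sign crops
        x = self.features(x)           # -> (batch, 32, 8, 8)
        return self.classifier(x.flatten(1))

model = StreetSignNet()
logits = model(torch.randn(4, 3, 32, 32))   # four dummy images
probs = logits.softmax(dim=1)               # per-class probabilities
```

Once trained on a large labeled set of sign images, a network along these lines would typically be frozen and deployed, which is exactly the rigidity discussed earlier.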

Human drivers are particularly adept at visually scanning the surroundings of a car and being able to detect and decipher what they see. Those trees over there aren’t important, but that parked car that appears to be pulling out into the street is important. Those pedestrians standing at the curb and waiting to cross the street, they are important, but that dog on a leash that is tied-up to the bike rack near the front door of that store is not important. By importance, I mean to suggest that the driver is able to discern what those various objects are, and whether or not they pertain to the driving task at-hand.

Numerous efforts are taking place at improving the ability to use ML and DL to examine visual images that are captured via the camera and video recording devices on AI self-driving cars. Likewise, via the use of ML and DL, patterns can be found in the radar collected data, the LIDAR collected data, the ultrasonic collected data, and other such data sources. An AI self-driving car needs to figure out what is surrounding the car and then make use of that informed “awareness” to decide what actions the self-driving car should undertake.

A self-driving car that cannot detect its surroundings adequately is going to fail. Didn’t notice that pedestrian crossing the street in front of the self-driving car, bam, down goes the pedestrian. Didn’t detect that car up ahead that is veering into the lane of the self-driving car, crash, the two cars hit each other. Fundamentally, the AI system needs to have sufficient sensory capabilities to figure out what objects are nearby and where those objects are, along with where they might be going.

It takes a lot more, though, than just seeing or detecting something to be able to drive a car.

Even if you see the pedestrian crossing the street, you need to put two-plus-two together and realize that there is a chance that the path of the car is going to intersect with the pedestrian, and the car will end-up harming the person. Upon that realization, you then need to try and decide what to do. Should you slow down? Should you swerve away from the pedestrian? Radically hit the brakes? Maybe speed-up?

The AI action planning portion of the driving task is where the driving behavior becomes paramount.

The sensors have provided their data and the sensor interpretations indicate what objects are out there. The sensor fusion has tried to meld together the sensor data and interpretations into a consistent overall indication of the surroundings. The virtual world model indicates the surroundings, the objects, and the speed and direction and other aspects of those objects. It is now up to the AI action planner element to do what human drivers seem to be able to do, assess the situation and decide what next action is best for the driving of the car.

Action Planner Functions Today Are Rudimentary

For modeling of human driving behavior, most of the auto makers and tech firms have to-date been using a rather rudimentary and programmatic approach to having the AI action planner perform its function. They have crudely been programming the more simplistic aspects of human driving decisions into the AI system. If there is a pedestrian in the road up ahead, and if the self-driving car is going to intersect, first calculate if the self-driving car can stop in time. If stopping in time is not feasible then consider a swerving action. And so on.
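
To give a feel for how rudimentary that kind of programmatic planning is, here is a minimal sketch of such a hard-coded rule in Python; the rule, thresholds, and deceleration figure are illustrative assumptions, not any auto maker’s actual logic.

```python
# A minimal sketch of the crude, hard-coded style of action planning described above.
def plan_for_pedestrian(distance_m, speed_mps, max_decel_mps2=6.0, lateral_clear=True):
    """Pick a maneuver when a pedestrian is predicted to intersect our path."""
    # Distance needed to stop from the current speed at the assumed deceleration.
    stopping_distance_m = (speed_mps ** 2) / (2.0 * max_decel_mps2)
    if stopping_distance_m < distance_m:
        return "brake_to_stop"      # we can stop in time
    if lateral_clear:
        return "swerve"             # stopping is not feasible, try to steer around
    return "emergency_brake"        # last resort

print(plan_for_pedestrian(distance_m=30.0, speed_mps=15.0))  # -> "brake_to_stop"
```

Rules like this are easy to write and audit, but as the list below notes, they do not scale to the endless variety of ill-defined situations a Level 5 car will face.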

The AI action planner element:

  • Currently tends to be rigid and programmatically depicted, rather than being fluid and based on Machine Learning or Deep Learning aspects derived from human driver behaviors;
  • Generally tends to be based on simplistic hard-coded rules crafted by the AI developers about how driving is supposed to happen, rather than on real-world data about how drivers actually drive;
  • Will be a key and severe limitation or constraint on achieving true Level 5 self-driving cars, since it will inhibit or undermine the AI’s ability to step up to the myriad of ill-defined driving situations that will be encountered on public roadways.

Our AI development effort involves using a repository of driving behavior templates, traits as it were, which are based on human driving experiences and pattern-matched via the use of Machine Learning and Deep Learning.

In essence, apply the same kind of ML/DL techniques used for detecting objects in the sensory data, but use them for the formulation of driving behaviors, based on voluminous driving-behavior data rather than sensory image data, and then apply those driving-behavior traits to roadway situations as they arise while driving the car.

In addition, this use of ML and DL is not just as a pre-training and pre-deployment kind of operation. Instead, the ML and DL continues while the AI is driving the self-driving car. Learning on the fly is considered an equally valid avenue of learning. Admittedly, in the case of driving a car, some rather significant “guardrails” need to be embodied into the AI system to prevent it from learning “the wrong thing” and taking an untoward driving action accordingly.
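
Here is a minimal sketch of how such a template repository and its guardrails might be structured, assuming a simple keyed lookup and a hard bound on the following gap; the names, situations, and thresholds are all hypothetical illustrations, not the Institute’s actual design.

```python
# A minimal, hypothetical sketch of driving-behavior templates plus guardrails that
# constrain what on-the-fly learning is allowed to change.
from dataclasses import dataclass

@dataclass
class BehaviorTemplate:
    name: str
    target_gap_s: float   # desired time gap to the car ahead, in seconds

REPOSITORY = {
    "freeway_merge": BehaviorTemplate("match_prevailing_speed", target_gap_s=2.0),
    "being_tailgated": BehaviorTemplate("increase_forward_buffer", target_gap_s=3.0),
}

def select_template(situation: str) -> BehaviorTemplate:
    """Pick the behavior template that best matches the current situation."""
    return REPOSITORY.get(situation, BehaviorTemplate("default_cruise", target_gap_s=2.0))

def guarded_update(template: BehaviorTemplate, proposed_gap_s: float) -> bool:
    """Refine a template on the fly, but only within hard safety guardrails."""
    MIN_GAP_S, MAX_GAP_S = 1.5, 5.0   # never "learn" an unsafe following gap
    if MIN_GAP_S <= proposed_gap_s <= MAX_GAP_S:
        template.target_gap_s = proposed_gap_s
        return True
    return False                      # reject the lesson rather than learn the wrong thing

tmpl = select_template("freeway_merge")
guarded_update(tmpl, proposed_gap_s=0.4)   # rejected: too aggressive
guarded_update(tmpl, proposed_gap_s=2.5)   # accepted: refined within bounds
```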

Humans of course continue to learn about driving when they are driving a car.

Each time you get behind the wheel, there is an opportunity to learn something new about driving. That being said, I realize that most of us as seasoned-drivers have driven sufficiently that it becomes less and less likely that we’ll learn something new about driving when we get on the road. The already robust base of experience at driving becomes extensive enough that most of the daily driving situations that arise have all been seen before, and our minds already learnt how to cope with the situation.

There is a plasticity in your driving behavior, which makes sense when you contemplate the matter.

When you start to drive as a novice in your teenage years, you have a great deal of plasticity since you are rapidly trying to absorb a swirl of driving tactics and strategies, along with devising tactics and strategies that aren’t otherwise already brought to your attention. You are like a nearly empty mental vessel about driving when you first learn to drive, though you certainly already have a great deal of supporting richness of knowledge such as how streets work, how pedestrians work, how cars go, etc. I mention this because I don’t want to imply that you are empty-headed when you learn to drive – there’s plenty of important stuff that’s already in your noggin.

There is “supervised” learning, in which someone explains to you a driving tactic or strategy, such as a driving instructor or perhaps a caring parent helping you learn to drive. And there is “unsupervised” learning, which involves your own efforts to glean what is happening as you drive, not only coping with the moment but also turning the moment into a permanent member of your driving behavior (as a newly formed or revised trait or template) that will become part of your overall mental repository of driving templates or traits.

For my article about Machine Learning core aspects, see: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

For ensemble Machine Learning, see my article: https://aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For federated Machine Learning, see my article: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

For the importance of explanation-based Machine Learning, see my article: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

Let’s consider two use cases. The first will involve a novice teenage driver. The second use case will involve a seasoned driver.

I was helping my teenage children learn to drive, which is both an honor and somewhat scary. You realize rather quickly that there is little you can do from the front passenger seat if your offspring happens to make a wrong move while driving the car.

When I first learned to drive, my high school had specially equipped cars that had dual controls, one for the teenager at the driver’s wheel and another set of controls for the driving instructor sitting in the front passenger seat. Everyone going to the high school was able to take a beginner’s driving course. This made things somewhat easier for parents at the time.

In terms of the driving instructor, I’m not suggesting that the dual controls made life any easier for that teacher, since I can only imagine what his or her life must have been like to work with teenagers all day long in a car that can get into life-or-death predicaments, regardless of the instructor also having access to the driving controls. Forever bless those instructors!

Anyway, after having practiced on local streets with my children driving, it seemed time to try using a freeway. Up until that moment, the fastest we had the car going was maybe 45-50 miles per hour. Now, once we got onto the freeway, it would be more like 60-70 miles per hour. That is a lot faster than 45-50 mph, even though you might argue it is only “a few mph faster” (perceptually, I assert, it feels frighteningly faster). There’s a lot less time to take needed actions. A lot higher chance of things going awry. Fatherly love made me take the chance.

When they reached the on-ramp, they each drove up the ramp and tried to enter into the freeway traffic at the top speed they had already gotten used to, namely the 45-50 miles per hour. I had chosen a time of day when there wasn’t much traffic on the freeway so that we’d be able to drive along steadily and not simply be mired in the usual Southern California bumper-to-bumper snarl. As such, the prevailing traffic was easily doing 65 to perhaps 75 miles per hour (yes, those higher speeds exceed the legal speed limit, but the speed limit is considered more of a suggestion than an imperative here).

I realized immediately that we were going to enter into traffic at a much lower speed than the prevailing traffic. I’m sure you’ve done this before or seen it done by others. The driving problem this creates is that you might end-up merging in front of cars that will have to pump their brakes to keep from ramming into you, or you might cause other cars to have to do a dance trying to get away from the slower going car, all of which could cause a cascade of crashes.

I urged that they push down hard on the accelerator pedal and give us a flash of speed to try and match the prevailing traffic speed. I’m sure that some teenagers would love to do this, willingly and gladly putting the pedal to the floor. My children were more conservative and cautious, thankfully so, and I had to really emphasize the need for speed. Fortunately, we made it okay and nothing untoward happened.

The story might end there, except for the valuable insight it provides about driving behavior and the learning of driving tactics and strategies.

Young Drivers Adapt to Speed-Matching on LA Freeway Ramps

Shortly after that one incident, we ended-up in other situations whereby the need to match the speed of prevailing traffic arose. For example, as they tried to make it to the desired exit ramp, they were in a faster lane and had to slightly decrease their speed to match the cars in the slower lane that led to the exit ramp. I could see them concentrating on what to do and then adjusting their speed accordingly. When we got off the freeway, the off-ramp was a fast turn directly into a busy highway, and they once again had a look of concentration and matched their speed to the prevailing traffic.

They each had adapted to the “new” environmental conditions, with speed-matching as a potential “solution” (the word “new” here refers to their first time driving on a freeway at the prevailing high speeds).

Based on that one instance of coming onto the freeway, they had each crafted on their own a mental template or trait that imbued them with the driving tactic of considering a “match the speed” maneuver when circumstances warranted it. Notice that I had not said to them “whenever the situation arises, such as getting onto the freeway or getting off the freeway, adjust your speed to the prevailing traffic.” They devised this notion on their own, merely from my urging them to speed up on that first occasion.

You could say that they learned in a somewhat supervised fashion, since I did give them a tip or hint, and it was presumably my nudge that started them toward the tactic.

It is also interesting that they could have drawn a narrower lesson: suppose their takeaway had been simply that if you need to go faster, then go faster. In later trying to get to the exit ramp, they actually had to go slower to match the slower-moving traffic. A hard-coded “go faster” rule would not have lent itself to the broader notion of matching the prevailing speed.
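
A tiny illustration of that difference in generalization, using made-up numbers: a narrow rule only ever speeds up, while the broader template matches the prevailing speed in either direction.

```python
# Illustrative contrast between a narrow "go faster" rule and the broader
# "match the prevailing speed" template (all numbers are made up).
def narrow_rule(current_mph, prevailing_mph):
    return max(current_mph, prevailing_mph)      # only ever speeds up

def match_prevailing_speed(current_mph, prevailing_mph):
    return prevailing_mph                        # speeds up or slows down as needed

print(narrow_rule(50, 70), match_prevailing_speed(50, 70))   # on-ramp: 70 vs. 70
print(narrow_rule(70, 55), match_prevailing_speed(70, 55))   # exit lane: 70 (wrong) vs. 55
```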

These human drivers learned an important driving behavior, which I’m sure became part of their overall driving lexicon.

Did they have to drive a thousand times on thousands of on-ramps to derive the lesson learned? No. I mention this because the prevailing approach to Machine Learning and Deep Learning requires humongous volumes of data. Presumably, the only way a conventional ML or DL system could have devised the match-the-speed template or trait would be to have had thousands or maybe hundreds of thousands of samples of traffic-flow data to pattern-match against.

We don’t think that’s needed for doing driving behavior adaptability for an AI system. It helps to have such data, but it isn’t a prerequisite and nor is it the only way to learn.

One thing the kids did have was plasticity. They came onto that on-ramp with a limited set of prior driving experiences. They had to be prepared to change, in the sense of perhaps learning something new or adjusting things that they had earlier learned. They were being confronted with a new environment, a new driving environment from their perspective. It would require honing new driving skills to survive. And, they needed to do so in real-time, in the real-world, in a situation involving real cars and real life-or-death matters at-hand. Adapt or die, I suppose one might say.

The next use case involves a seasoned driver. Me. I’m going to describe it rather briefly here since I’ve already extensively covered the use case in my other writings.

See my article about prevalence-based driving behavior: https://aitrends.com/selfdrivingcars/prevalence-induced-behavior-and-ai-self-driving-cars/

For my article about defensive driving behaviors, see: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

For the role of greed in driving behaviors, see my article: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

For rationality and irrationality in driving behavior, see my article: https://aitrends.com/selfdrivingcars/motivational-ai-bounded-irrationality-self-driving-cars/

As a seasoned driver, there is not much that I could likely learn anew about driving, though there are always those moments whereby a driving tactic or strategy can be further refined or extended.

You never know when you might get a chance to learn something new for your driving repertoire. Some seasoned drivers that I know have never driven in snow, and thus upon their first encounter with trying to drive a car on snow, they might rediscover the joys of learning something new (to them) about the driving task.

In any case, on my daily commute to work, I drive in the hustle and bustle of Southern California traffic.

Here, especially it seems, everyone wants to get to where they are going in the fastest possible way. Some drivers believe that by riding the bumper of the car ahead of them, things will magically go faster. I’ve debunked this notion by examining traffic data and simulations, showing that this driving tactic not only fails at times to work as intended, it can backfire and make traffic go slower, at times causing the driver to take even longer to get where they are going. They ironically worsen traffic and make it go slower, in spite of their (false) belief that they are speeding things up.

Nonetheless, the average pushy driver thinks (rightly or wrongly) that they will get traffic to go faster if they “push” the car ahead of them by coming right up to the back of the car and motivate the driver therein to go faster (or, presumably, get that driver out of the way so that the “faster” driver behind them can get further ahead).

I am accustomed to this driving behavior.

So much so that I anticipate it. I know that a high percentage of drivers here in Los Angeles are going to ride on my tail. No matter what speed I might be going, even if going over the speed limit, these other speed demons are going to go to the bumper. Unfortunately, this kind of driving behavior can have adverse consequences. For example, the driver being tailed now has to be watchful of trying to use their brakes, since the car behind them has little buffer distance to also slow down or stop.

I realize that some drivers figure that if the driver behind them is stupid and doesn’t allocate enough buffer distance, it is the fault of that driver and nothing else is to be done. For me, and for any truly defensive oriented driver, it is crucial to not simply let other “dumber” drivers dictate our options, but it is best to consider how to drive in a manner that takes into account those other drivers and their driving foibles.

For driver foibles, see my article: https://aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For the tit-for-tat of human drivers, see my article: https://aitrends.com/selfdrivingcars/tit-for-tat-and-ai-self-driving-cars/

For driving styles, see my article: https://aitrends.com/selfdrivingcars/driving-styles-and-ai-self-driving-cars/

For my article about road rage in human drivers, see: https://aitrends.com/selfdrivingcars/road-rage-and-ai-self-driving-cars/

After years of my adapting to this driving environment of pushy drivers that constantly are riding on the bumpers of other cars, it had become ingrained in my driving style. My adaptations included numerous driving tactics. For example, you can avoid a pushy driver by potentially spotting them in your rearview mirror long before they get behind your car, in which case, you can then get into a position that will likely preclude them from getting directly behind you, if you plan out the movement of nearby cars and the maneuvering of your car in a chess-like way. And so on.

What makes this driving behavior template or trait of interest herein is that when I recently took a vacation and went to a location that did not have these same kinds of pushy drivers (or had them, but to a much lesser degree), my driving continued as though I were still in the same environment. For each car that I saw coming along, my assumption was that this was most likely a pushy driver, regardless of how they were actually driving, and I was silently and subliminally invoking my pushy-driver control tactics.

This aspect that I fell into is a mental trap known as prevalence-induced behavior.

Conclusion – Aim for Artificial Neuroplasticity

I’ll tie together the giraffes and the tadpoles with the aspects of driving and driving behaviors. They all interrelate by the matter of considering what kinds of traits we have, some of which are innate, some of which are learned, along with the plasticity of being able to change and adapt to our environment. If Darwin were still here, I’m sure he’d be interested in this topic too.

To further advance AI, I’d wager that we’ll need to make progress on Machine Learning and Deep Learning that will incorporate plasticity. We need to be able to construct artificial neural networks that can change and adapt and adjust as the environment changes, in real-time, in a real-world context, and essentially on-their-own as we’ve hopefully imbued them with the capabilities to do so.

In that sense, we should all be aiming for artificial neuroplasticity; since real neuroplasticity occurs in the brain, we will likely need to do something similar in the computer if we are going to reach brain-like AI capabilities.

For driving purposes, the AI action planning is where the crux of driving and driving behaviors resides. Being able to see and sense the driving environment provides the so-called table stakes for playing the self-driving AI game, but to really succeed in AI self-driving cars will require the AI to be able to drive with driving behaviors, ones that are honed and pre-tuned, and others that will arise as the driving situation emerges and the driving environment changes (as perceived by the AI).

If those tadpoles have the ability to change how they act and look, doing so after sensing the environmental conditions that warrant a change, presumably by bringing forth latent traits that can be triggered and that showcase the plasticity of these toads, then I’m voting that we can do the same kind of thing with driving-behavior templates and traits, which the AI self-driving car would use and refine based on the driving environment and the plasticity we’ve built into the AI. Score one for the humans, and let’s show those malleable tadpoles what we can really achieve.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.