By Lance Eliot, the AI Trends Insider
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
“Their hands did it.”
That’s what my children told me when they were quite young and had managed to put their hands onto wet paint.
We had been taking a leisurely stroll in our quiet neighborhood, and a homeowner had opted to paint the wooden picket fence that bordered his property. A sign clearly saying wet paint had been posted on the fence. As we got near the property, I asked the kids what the sign said. They both knew just enough about how to read that they were able to decipher the sign and tell me what it indicated.
Case closed, or so I thought.
I had wandered just slightly ahead of the kids and assumed that they were walking behind me, straight as an arrow, and it never occurred to me that they might decide to put their hands onto that freshly painted picket fence. When they caught up with me, I happened to see them both hiding their arms and hands.
What’s up, I asked, not having yet put two and two together, so to speak.
They knew that I would be a bit perturbed about their getting paint onto their hands and so they first tried the hideaway technique.
Since I had now asked to essentially see their hands, they were somewhat stuck in terms of what to do. One of them showed me the paint on their hands and claimed that the posted sign did not say “Don’t Touch” and only said wet paint was there.
How were they supposed to know not to touch it?
Though I appreciated the clever semantics lesson, it didn’t cut the mustard with me.
When I gave them my classic evil eye, the other one tried an entirely different approach.
Our hands did it, I was curtly informed.
How’s that, I inquired?
Well, both then opted to chime in simultaneously and professed that it was an act performed by their hands.
They had no control over their hands.
Somehow, magically, mysteriously, their hands had decided on their own to touch that paint, without consulting with the rest of their bodies and minds, and that’s how it happened.
I suppose that I could have played along and said that therefore their hands would suffer the consequences, but I figured this whole transgression was rather mild and probably best to let it go as a lesson about not touching wet paint. They hadn’t ever touched wet paint like this before, as far as I knew. Sure, they had used paints in school and at our home they painted quite a bit. I don’t recall them ever getting paint onto their hands at any other location.
When we got home, I asked them if they had regained sufficient control of their hands that they could go wash off the paint from their hands.
They nodded their heads in sheepish agreement that somehow they once again were in direct control of their hands.
Off they went, rubbing the paint off their skin and washing their hands.
It’s a funny story now and one that I remember vividly, while they today as young adults don’t seem to remember it at all.
The Alien Limb Syndrome
What makes the story particularly notable is that they unknowingly landed on an actual ailment that exists.
There is an actual documented phenomenon of people being unable to control their limbs.
It is typically referred to as the alien limb syndrome.
My kids didn’t actually have it. I’ll say this, if they could have quoted me the name of the ailment and said it was alien limb syndrome, I would have likely not only considered the wet paint a non-issue but would have gotten them ice cream as a reward for knowing a rather obscure malady at their exceedingly young age.
For those of you that are movie buffs, you might remember that in the movie Dr. Strangelove the main fictional character is unable to control his arm and hand, and he flails them uncontrollably around at times, making the character seem grotesque and befitting with the role. We ought not to let the movie’s comedic portrayal detract from considering this a serious ailment and a medical condition deserving careful and due consideration.
Some people even refer to the alien limb syndrome as the Dr. Strangelove syndrome.
There are also some that refer to this ailment as the alien hand syndrome, though the condition can impact arms, legs, feet, and essentially all of the limbs.
It is not exclusive to the hands, though the hand does seem to be the most commonly affected limb (for those of us left handed, it also seems to be primarily the left hand!).
The uncontrollable limb movements can at times be quite subtle.
There are online videos you can watch that show someone afflicted with alien limb syndrome suddenly buttoning their sweater. They do so without apparently wanting to do so. The person claims they did not have a thought in their head about buttoning their sweater. Their arms and hands just decided to do so.
Moments later, after having buttoned up, their arms and hands proceed to unbutton the sweater. No rhyme, nor reason, appears to have prompted it.
Is the person trying to fool with us?
Maybe they really did have thoughts about buttoning and unbuttoning their sweater.
Perhaps they want us to believe they didn’t intend it to happen.
If we put aside someone that is purposely trying to scam us, I’d say that it does seem realistic that the person truly believed they did not actively invoke their brain to do the buttoning and unbuttoning act.
Brain And Body Entanglement
Of course, we can’t know for sure what is happening in the person’s brain.
Maybe a part of their brain told their arms and hands to do the act, while another part of the brain was unaware that the other part was acting to do so. It could be that the person is only aware of the part of the brain that was unaware and so they tell us that their brains did not command their arms and hands.
Some assert that alien limb syndrome is a disentanglement of the mind and the body.
The limbs genuinely are acting on their own. The mind is not involved at all.
There tends to be little credence supporting this notion.
Some say that the disentanglement is within the mind, causing parts of the mind to become disentangled, such as the part that controls the motor functions of the limbs and the part that does action planning for the body.
Another intriguing element of the alien limb syndrome is that sometimes one limb will seemingly try to purposely counteract the other limb.
This commonly occurs when one disobedient limb tries to do something and another disobedient limb then tries to intervene.
Let’s say the left arm and left hand are wayward. The left arm and left hand start to button up the sweater. The right arm and right hand might suddenly come up to the left arm and left hand and attempt to stop the buttoning process, even though they too are seemingly uncontrolled by the person. Or, once the buttoning has been completed, the right arm and right hand might immediately unbutton the sweater, rather than trying to directly fight with the other mind-of-its-own limb.
Imagine for a moment that one or more of your limbs exhibited this alien limb syndrome.
I’d wager that it would certainly freak you out.
We are all accustomed to the idea that we control our limbs. There are times that we suddenly become aware of our limbs as somewhat distinct appendages, such as if you fall asleep on your arm and hand, and it begins to tingle, doing so on its own. You’ve perhaps flapped your sleeping hand and arm to get it to awaken and felt like your limb was a limp noodle that you had no real control over.
The people that get alien limb syndrome are likely to gradually get somewhat accustomed to the matter, though it is not an easy thing to deal with.
The person will at times speak to their alien limb and try to talk it into submission. They might even give a name to the limb, as though it has its own personality. Having dealt with it for a while, they can often tell you generally when their limb is going to act up, knowing the kinds of acts that the limb tries to do on its own.
Dangers Associated With The Alien Limb
There are obviously dangers involved in having the alien limb syndrome.
Suppose your limb acts up when you least want it to do so.
Maybe you are holding a pair of scissors and all of a sudden the disobedient limb opts to strike you or someone else. The person with the alien limb syndrome would say they had no control over their limbs and it wasn’t their doing per se.
If you are interested in the alien limb syndrome, there are lots of fascinating studies trying to pin down what causes it and what can be done about it. In a recent study done at Vanderbilt University, researchers seemed to trace the ailment to connections in the brain involving the precuneus. The precuneus is often considered the part of the brain that provides our sense of free will and what is coined our agency.
One key aspect of the study was that there didn’t seem to be one specific area of the brain that could be considered the culprit for the syndrome.
Some have been hoping that the matter is isolated to a particular spot of the brain and thus it would presumably be easier to detect and resolve. This recent study suggests it is more broadly based and involves a network of regions of the brain. Generally, whenever neurologists and others that study the brain hope to pinpoint a single spot responsible for some function, the matter usually turns out to be more complex and distributed throughout the brain. No single silver bullet, so to speak.
Comparing Alien Limb To Computer Systems
As a seasoned AI developer and software engineer, I’ve had situations involving computer systems that in some analogous way appeared to have been overtaken by an alien limb syndrome.
I remember one time that I was involved in creating a rather complex piece of software that had lots of components.
Some of the software routines had been found in open source libraries. One of the routines was purportedly built to perform multidimensional scaling for doing pattern matching. A member of the software team tried playing with it and said it would do what we needed to have done. It then got included into our overall build of the system.
Things worked fine for a while.
One day, we received a complaint that our system was trying to access files that were outside the scope of the system. Which element of the hundreds of components was the culprit? At first, nobody knew and all the members of the software team claimed it could not be any of their components.
For various reasons, we had a hard time finding the culprit.
We could not readily replicate the problem and we didn’t have much in the way of clues from the complaint that had been registered. Some of the team felt like we were barking up the wrong tree and that it must be something occurring outside of our system, rather than something inside of our system.
As you might guess, the darned thing happened again and we got a new complaint about a file being accessed that should not have been.
Once might be a fluke, twice seems to suggest a rogue element that won’t just disappear on its own.
After turning over all possible rocks and stones, we eventually narrowed the matter to the open source routine that we had used.
Sure enough, hidden deep inside it, we found a few lines of code that did not belong there. It didn’t seem to be malicious and maybe was leftover from some other functionality that the original developers had in mind to include but had later tried to excise it, doing so incompletely.
In a manner of speaking, we had an alien limb syndrome.
One of the “limbs” of the core system had gone alien on us. The core system wasn’t doing it. The limb was acting on its own. We did “surgery” on the limb and put it back into proper operation. Perhaps someday the same kind of action can be taken for humans that experience alien limb syndrome. Let’s hope so.
AI Autonomous Cars And Alien Limb Syndrome
What does this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that AI developers generally should be doing is building their systems to catch and prevent an alien limb syndrome from overtaking the rest of their AI system.
This is especially crucial in a real-time system and really especially so in a real-time system that controls a self-driving car — there can be serious life-or-death consequences for an “alien limb” acting up in an AI self-driving car.
Allow me to elaborate.
I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
Here’s the usual steps involved in the AI driving task:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
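The steps above can be sketched as a simplified processing loop. This is purely illustrative: the function names, the placeholder sensor readings, and the trivial decision rule are my own inventions for exposition, not the actual architecture of any production self-driving stack.

```python
def collect_sensor_data():
    # Placeholder: gather raw camera/radar/LIDAR confidence readings.
    return {"camera": [0.9], "radar": [0.8], "lidar": [0.85]}

def fuse_sensors(raw):
    # Placeholder: reconcile overlapping readings into one estimate each.
    return {name: sum(vals) / len(vals) for name, vals in raw.items()}

def update_world_model(model, fused):
    # Fold the fused estimates into the ongoing virtual world model.
    model.update(fused)
    return model

def plan_actions(model):
    # Toy decision rule: slow down if any fused reading looks uncertain.
    return "maintain_speed" if min(model.values()) > 0.5 else "slow_down"

def issue_car_commands(action):
    # In a real car this would drive actuators; here, a string suffices.
    return f"command:{action}"

def driving_cycle(world_model):
    raw = collect_sensor_data()
    fused = fuse_sensors(raw)
    model = update_world_model(world_model, fused)
    action = plan_actions(model)
    return issue_car_commands(action)

print(driving_cycle({}))  # command:maintain_speed
```

In a real-time system each of these stages runs continuously and on a strict time budget, which is exactly what makes a rogue component in any one stage so disruptive.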
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.
For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/
Returning to the topic of alien limb syndrome in complex computer-based systems, savvy AI developers need to be on their toes and build their AI systems to cope with alien limbs that might act up.
Sensors Go Awry As Alien Limbs
Let’s start with the potential of the sensors to become one or more alien limbs.
Suppose one of the cameras on an AI self-driving car starts to go rogue.
Rather than providing images at a particular pace as established by the AI system, the camera instead begins to generate tons of images. Maybe it opts to also provide them on a seeming whim, doing so intermittently or perhaps even incessantly. This could lead to either a flood of images that the AI system was not anticipating or a dearth of images and cause a kind of visual starvation.
If the rest of the AI system is not prepared in advance to handle this kind of alien limb activity, it could cause quite a problem.
The AI effort to make sense of the images as to whether there is a car ahead or a pedestrian in the way might be marred by getting the images on such an unexpected basis. If the AI is fooled into believing what the camera is providing, it could lead to an internal cascading set of errors and confusion.
For example, during sensor fusion, if we assume that say the radar and LIDAR are still functioning properly, there will now be a potential contention between what the camera indicates and what those other sensors indicate. Which of the sensors is to be believed by the AI? If the AI falsely assumes that the camera is correct, it could attempt to override the radar and LIDAR, or might give the radar and LIDAR less weight in trying to ascertain the surroundings.
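One guarded approach during sensor fusion is to combine sensors via trust weights, so that a suspect camera can be down-weighted rather than either trusted blindly or discarded outright. A minimal sketch, where the sensor names, the distance readings, and the weight values are all illustrative assumptions:

```python
def weighted_fusion(readings, weights):
    """Combine per-sensor distance estimates (meters) using
    normalized trust weights. Illustrative only."""
    total = sum(weights.values())
    return sum(readings[s] * (w / total) for s, w in weights.items())

# Suppose a rogue camera reports a pedestrian much farther away
# than the radar and LIDAR do.
readings = {"camera": 30.0, "radar": 10.0, "lidar": 10.5}

# All three sensors trusted equally: the camera skews the estimate.
equal = {"camera": 1.0, "radar": 1.0, "lidar": 1.0}
naive = weighted_fusion(readings, equal)      # ~16.8 m

# Camera flagged as suspect: shrink its trust weight.
suspect = {"camera": 0.1, "radar": 1.0, "lidar": 1.0}
guarded = weighted_fusion(readings, suspect)  # ~11.2 m, nearer radar/LIDAR
```

The design choice here is that the weights give the fusion stage a dial, not a switch: a misjudged sensor degrades the estimate gradually instead of being silently believed or silently dropped.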
Virtual Model And Alien Limb
Suppose that the miscues then were passed along to the updates of the virtual world model.
Maybe the virtual world model places a marker that there is a pedestrian on the sidewalk when the actual fact is the pedestrian is standing in the street.
When the AI action planning kicks in, it will inspect the virtual world model and falsely get an indication that there isn’t a pedestrian in the way. The AI action planning might opt to issue car commands that tell the self-driving car to continue forward at the ongoing speed, even though the self-driving car is now moving closer and closer to hitting the pedestrian that’s in the street.
I realize that some AI developers might complain about the aforementioned scenario and would assert that even the simplest of sensors on an AI self-driving car are bound to have some kind of built-in error checking. Thus, those AI developers would say that certainly the sensor itself would be reporting errors and this would give the sensor data collector and the sensor fusion a heads-up that the camera is defective.
Though it is indeed the case that the sensors are likely to have error detection, I’d like to also point out that whether the error detection can detect rogue behavior is another matter entirely.
Tricky Aspects Of Alien Limb Of A Self-Driving Car
In other words, the error detection for most sensors would be that the camera is not working at all, maybe due to having encountered an outright hardware failure or maybe it got smacked by a piece of debris that flew up from the street and cracked or broke the camera.
In the alien limb syndrome notion, let’s assume the camera is otherwise working just fine and yet is now operating on its own, not necessarily at the command of the rest of the AI system.
In that manner, the error detection by the sensor itself might not even realize that the sensor has gone rogue.
The usual error detection flags that the sensor has blurry images or no images, while I’m suggesting that in the alien limb manner the images are overall fine. This is the same as when a human with alien limb syndrome suddenly has their arm and hand act up: the arm and hand are working as an arm and a hand, and there is nothing wrong with those appendages per se (i.e., the arm still extends, the hand still grasps); they function just as an arm and a hand are expected to.
It is imperative that the AI be prepared for realizing that a sensor, any of the sensors, might suddenly go rogue.
This is a tough aspect to figure out because let’s assume that the sensor is still fully functioning in terms of whatever the sensor is capable of doing. If it is a camera, we are still getting good camera images. If it is a radar, we are still getting solid radar returns. The problem is that this sensor is doing its sensory acts whenever it opts to do them, rather than upon the command of the AI system.
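One way to notice output that arrives "whenever it opts to" is to track which frames the AI system actually requested and flag anything unsolicited. The sketch below is a rough illustration of that idea; the class name, the string return values, and the simple ID-matching scheme are all assumptions on my part, since real stacks would lean on hardware timestamps and message IDs.

```python
from collections import deque

class InvocationWatchdog:
    """Flag sensor output that was never requested -- a rough proxy
    for 'alien limb' behavior in a sensor. Purely illustrative."""

    def __init__(self):
        self.pending = deque()  # IDs of frames the AI asked for, in order

    def request_frame(self, frame_id):
        # The AI system records every frame it commands the sensor to take.
        self.pending.append(frame_id)

    def on_frame_arrival(self, frame_id):
        # A frame is only "expected" if it matches the oldest pending request.
        if self.pending and self.pending[0] == frame_id:
            self.pending.popleft()
            return "expected"
        return "unsolicited"  # arrived without a matching request

wd = InvocationWatchdog()
wd.request_frame(1)
print(wd.on_frame_arrival(1))  # expected
print(wd.on_frame_arrival(2))  # unsolicited
```

The point is that the check lives outside the sensor: the sensor’s own self-diagnostics can report "healthy" while the watchdog still catches it acting on its own initiative.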
It might involve other functionality of the sensor too.
For example, suppose the camera can be automatically adjusted to focus on nearby objects or far away objects. Let’s suppose that the AI has recently set the camera on the far away focus. During the rogue action, the camera might on its own switch into the nearby focus and provide those images. The AI was expecting far away images and meanwhile it suddenly gets nearby images. Would the AI be able to detect this? That’s the million-dollar question, as they say.
The other complication is that the rogue act might be fleeting rather than consistent.
In the case of humans, the aspect of suddenly buttoning a sweater can occur out-of-the-blue. It might happen and then not happen again for a long time. Or, it might happen and then happen again, and again, and again.
The case of consistently rogue behavior is probably going to be easier for the AI system to recognize as amiss. Something that happens intermittently is likely to be more challenging to discern. It is akin to my earlier story about the open source routine that we had used in our system: it croaked once and then went silent after that, otherwise working as it was supposed to. Those kinds of oddball acts are often more difficult to ferret out.
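For the intermittent case, one tactic is to accumulate evidence over a sliding time window, so that rare misbehavior still adds up rather than being forgotten after each fluke. A minimal sketch, where the window length and threshold are illustrative values I’ve chosen, not tuned parameters from any real system:

```python
from collections import deque

class IntermittentAnomalyTracker:
    """Count anomalous events within a sliding time window so that
    rare, intermittent misbehavior still accumulates into evidence.
    Window and threshold values are illustrative."""

    def __init__(self, window_seconds=3600.0, threshold=2):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()  # timestamps of anomalous events

    def record(self, timestamp):
        self.events.append(timestamp)
        # Drop events that have aged out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold  # True => treat as suspect

tracker = IntermittentAnomalyTracker()
print(tracker.record(0.0))     # False: once might be a fluke
print(tracker.record(1800.0))  # True: twice suggests a rogue element
```

This mirrors the earlier observation: once might be a fluke, while a repeat within the window suggests a rogue element that won’t just disappear on its own.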
Detecting An Alien Limb Scenario In Real-Time
Detecting the alien limb is crucial.
Having a means to deal with the detected rogue acts is equally crucial.
If a flood of images is pouring in, the AI would need to ascertain which images to keep and which to potentially discard. You might be thinking: shouldn’t it analyze all of the images received? Well, keep in mind that as a real-time system it takes time to analyze each image, and the system could fall behind if it merely opts to analyze every image being flooded into it.
Imagine that the flood caused a backlog of the image analyzer.
Meanwhile, the self-driving car is still moving ahead.
The delay in working through the backlog might mean that a few crucial seconds are lost, seconds that might have made the difference in the AI action planner realizing that the self-driving car is going to ram into a pedestrian or another car.
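A common real-time tactic for exactly this situation is a bounded buffer that silently discards the oldest frames when a flood arrives, so the analyzer always works on recent imagery rather than an ever-growing backlog. A minimal sketch, with an illustrative capacity I’ve picked arbitrarily:

```python
from collections import deque

# Bounded buffer: when full, appending a new frame evicts the oldest.
# The capacity of 5 is an illustrative value, not a recommendation.
frame_buffer = deque(maxlen=5)

# A rogue camera floods 100 frames in a burst.
for frame_id in range(100):
    frame_buffer.append(frame_id)

# Only the newest frames remain; stale ones were dropped, trading
# completeness for freshness in a real-time setting.
print(list(frame_buffer))  # [95, 96, 97, 98, 99]
```

The design trade-off is deliberate: for a moving car, a fresh-but-incomplete picture of the road is usually worth more than a complete-but-stale one.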
In short, there is a chance that any of the sensors might become an alien limb. This could happen to just one of the sensors or it could occur to more than one. It could be that one sensor acts up and then seems to settle down, or it could be that several act up at once (many limbs). I mention this multitude of variations because the AI system cannot just assume that if there is an alien limb it will be neatly confined to just one sensor. That would be too easy. The real world won’t necessarily make things easy for the AI.
Beyond the sensors, an alien limb can strike other aspects of the AI self-driving car system.
Perhaps the sensor fusion goes rogue.
Maybe the virtual world model goes rogue and starts populating the model with all sorts of markers that aren’t based on what the sensors and sensor fusion have reported.
The AI action planner itself might go rogue.
Generally, the deeper within the AI system that the alien limb strikes, the harder it will be to ascertain and deal with.
For the cognitive timing aspects, see my article: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
For dealing with sensors, see my article: https://aitrends.com/selfdrivingcars/cyclops-approach-ai-self-driving-cars-myopic/
For how arguing machines can help with alien limbs, see my article: https://aitrends.com/features/ai-arguing-machines-and-ai-self-driving-cars/
For ghosts or errors in AI self-driving cars, see my article: https://aitrends.com/ai-insider/ghosts-in-ai-self-driving-cars/
Preparing For Alien Limb Instances
I’ve mentioned that the AI system needs to be prepared in advance for alien limb syndromes that might arise.
Some AI developers might balk at this notion that they need to develop the AI system to cope with rogue behavior and offer instead that the AI system should by itself be able to deal with alien limbs.
As an analogy to a human: do we need to tell a human, before they get alien limb syndrome, that they should be prepared to deal with it, or instead might we expect that a human who suddenly has an alien limb will be able to cope with it whenever it occurs?
Should we relieve the AI developers of having to deal with the possibility of alien limb syndrome and instead assume or hope that the AI system can somehow figure out the aspect on its own?
I’d vote that since we’re dealing with an AI self-driving car, and since there is a solid chance that the AI self-driving car could cause undue damage or injury, it makes a lot more sense to prepare the AI system beforehand, rather than hope or assume that the AI will somehow miraculously figure out what to do.
This is generally true of humans too: a person who suddenly discovers they have alien limb syndrome is likely to be startled and not deal with it very well, whereas once they see a medical specialist they are more likely to develop behaviors that help them contend with it.
There is a slim chance that some kind of Machine Learning (ML) or Deep Learning (DL) capability of the AI system for the self-driving car might be able to identify that something is amiss, and maybe gradually figure out that it is rogue behavior. This might though take many iterations and it is usually the case with today’s ML and DL that tons of examples are needed to find patterns.
I don’t think we want the alien limb acts to mount up; instead, we want to catch them as soon as they arise. Thus, waiting on the off-chance that the ML or DL might catch on is not a good strategy for safety purposes.
For more about machine learning, see my article: https://aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/
For the reverse engineering of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/
For my article about the dangers of burnt out AI developers, see: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/
For the egocentric AI developer aspects, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/
Let’s now consider and assume that a rightfully developed AI self-driving car has been crafted to be able to detect and contend with alien limb syndrome (I hope so!).
Further Twists On The Alien Limb
There are some added twists to consider.
One is a false positive effect.
The AI might falsely accuse a capability of suffering from an alien limb, and yet perhaps the capability is actually functioning properly and appropriately. The danger is that the AI might then opt to disregard the limb or otherwise treat it as suspect, whether it is a sensor or some other component, doing so falsely.
Let’s pretend that a key camera has been accused of being an alien limb.
The AI perhaps does something like opting to now ignore whatever the camera provides as data. If the camera is an alien limb, this might be a prudent blockage to prevent flooding and delays on processing of the camera images. But, if the “solution” is merely to ignore the camera, we’ve also now lost the use of a valuable sensor. Furthermore, if the sensor is not actually experiencing an alien limb syndrome, and yet it has been labeled as such by the AI, we are now neglecting the camera needlessly.
The odds are that any alien limb treatment is bound to degrade the limb and not use it to its normal full potential.
Thus, the AI might now incorrectly be overriding or wrestling with a component that actually is able to work fine.
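One way to soften the false-positive risk is to treat suspicion as a decaying score rather than a permanent verdict, so a wrongly accused component recovers its trust over time instead of being discarded for good. A minimal sketch; the class name, decay factor, and threshold are illustrative assumptions:

```python
class SuspicionScore:
    """Maintain a decaying suspicion score per component instead of
    a binary ignore/trust decision. A false positive then degrades
    the component only temporarily. Decay and threshold values
    are illustrative."""

    def __init__(self, decay=0.5, threshold=1.0):
        self.decay = decay
        self.threshold = threshold
        self.score = 0.0

    def observe(self, anomalous):
        # Each cycle: old evidence fades, fresh evidence adds on.
        self.score = self.score * self.decay + (1.0 if anomalous else 0.0)
        return self.score >= self.threshold  # True => treat as suspect

cam = SuspicionScore()
print(cam.observe(True))   # True: an anomaly crosses the threshold
print(cam.observe(False))  # False: suspicion decays, camera trusted again
```

Pairing a scheme like this with down-weighting rather than outright blocking means a false accusation costs some degraded use of the component for a while, not the permanent loss of a valuable sensor.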
The other side of this coin is a false negative indication.
The AI might somehow inspect or assess a capability and determine that it is not suffering from alien limb syndrome, and yet the capability actually is. In other words, we cannot assume that the AI system is going to necessarily always correctly discern when an alien limb syndrome is occurring.
I say this and it sometimes surprises AI developers, since they tend to get themselves into the mindset that if they’ve included some kind of alien limb detection, it is going to work flawlessly and all of the time correctly ascertain an alien limb existence.
I don’t think this is real-world thinking.
Even with detection purposely built into your AI system, there is a chance that an alien limb might scoot through and the detection will miss catching it.
For my article about fail safe AI, see: https://aitrends.com/selfdrivingcars/fail-safe-ai-and-self-driving-cars/
For the importance of plasticity in AI, see my article: https://aitrends.com/ai-insider/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/
For safety and AI self-driving cars, see my article: https://aitrends.com/ai-insider/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/
For boundaries issues, see my article: https://aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/
There are some AI developers that are oblivious to the alien limb syndrome when it comes to their AI systems.
It is their assumption that the components of their AI system are going to work correctly and that within them is some kind of self-error checking. Therefore, the rest of the AI system does not need to be concerned about the component because the component itself will let the rest of the AI system know when it is not working properly.
As mentioned, the alien limb syndrome is not particularly about the component itself having errors. An internal self-check would usually come out okay, indicating that the element is still working properly, similarly to how a hand and arm might be working just fine to button or unbutton a sweater. It is more about the invoking of the component, having it do its thing when desired and as desired, rather than the component opting to run or activate at its own choosing.
For an AI self-driving car, any of the “limbs” of the AI system can wreak havoc if it opts to activate whenever it opts to do so.
When I use the word “limb” it tends to bring forth the idea that the sensors of the self-driving car might go rogue, but please realize that as I’ve mentioned herein, the “limb” is a metaphor referring to any of the components of the AI system, including the sensors, sensor fusion, virtual world model, AI action planner, and car controls commands issuance.
Let’s not have an AI self-driving car that suddenly opts to swerve the car unexpectedly or slam on the brakes, doing so in the manner that a human might uncontrollably button or unbutton a sweater, all of which arises due to an alien limb syndrome.
Properly developed AI systems for self-driving cars need to be prepared for detecting and acting upon an alien limb and do so quickly and prior to allowing an alien limb to cause an untoward action.
That’s good “medical advice” for those automakers and tech firms that are developing AI self-driving cars and need a nudge to make sure they are being watchful for a Dr. Strangelove that might arise in their vaunted self-driving cars.
Copyright 2019 Dr. Lance Eliot
This content is originally posted on AI Trends.