By Lance Eliot, the AI Trends Insider
Perhaps one of the oldest questions asked by humans is whether or not there is free-will. It’s up there with questions such as why we all exist, how did we come to exist, and other such lofty and seemingly intractable queries. If you are anticipating that I’m going to tell you definitively herein whether there is free-will or not, I guess you’ll have to keep reading, and the choice you make will determine the answer to your question.
I’ll pause while you ponder my point.
Okay, let’s get back underway.
Well, since you are now presumably reading these words, I gather that you choose to keep reading. Did you make that choice of your own free-will?
We generally associate free-will with the notion that you are able to act on your own, making your own decisions, and that there isn’t any particular constraint on which way you might go. Things get muddy quite quickly when we begin to dig deeper into the matter.
As I dig into this, please be aware that some people get upset about how to explain the existence of or the lack of free-will, typically because they’ve already come to a conclusion about it, and therefore any discussion on the matter gets things pretty heated up. I’m not intending to get the world riled up herein.
As you’ll see in a few moments, my aim is to lay out some of the key landscape elements on the topic, and then bring it into a vantage point that will allow for considering other points I’d like to make about AI and AI systems.
Indulge me as I try to get us there.
Free-Will Or Non-Free-Will: That’s The Question
If I were to suggest that the world is being controlled by an unseen and undetected third party, and that we are all participants in a play being staged by this third party, it becomes hard either to prove that you have free-will, if you claim you do, or to prove that you don’t.

Were you to contend that you do have free-will, you would be saying so under a false pretense or belief, since I’ve claimed that you are merely part of a play, unaware that the play is taking place and unable to discern the director guiding the action. On the other hand, there’s no evidentiary means to prove that you are not exercising free-will, since the director is unseen and the play itself is unknown to us; instead it is merely life as it seemingly unfolds.
You can make this into a combo deal by suggesting that the play is only an outline, and you still have some amount of free-will, as exercised within the confines of the play. The problem though with this viewpoint is that someone else might contend that there is either free-will or there is not free-will, and if you are ultimately still under the auspices of the play, you don’t have true free-will. You have a kind of muted or constrained free-will.
Some believe in free-will in a binary sense: either you have it entirely, without any reservation and with no limits, which is their version of the binary digit one, or else anything less counts as a binary digit of zero. They would therefore reject the claim that you have free-will unless it is utterly unfettered. No gray areas, no fuzzy logic allowed.
Set aside for the moment the third-party notion and consider another perspective.
Maybe everything that seems to be happening is already predetermined, as though it was a script and we are simply carrying out the script. We don’t see the script and don’t realize it is a script. We don’t know how the script came to be, which of course goes along with the idea that we cannot see it and don’t realize it exists.
Somehow, we are making decisions and taking actions that have been already decided. It could be that the script is extensive to the nth degree, covering every word you say, every action you take. Or, it could be a script that has preferred lines and preferred actions, yet you still can do some amount of improvisation.
Once again, proving that you are not abiding by the written script is not feasible, because there is no proof of the script, nor of your acting upon it. In essence, it doesn’t seem likely that under this script milieu we can prove that you do or do not have free-will.
Another take on the free-will underpinnings relates to cause and effect.
Perhaps everything that we do is like a link in a very long chain, each link connecting to the next. Any decision you make at this moment is actually bound by the decision made moments earlier, which is bound to the one before that, and so on, tracing all that you do back to some origin point. After that origin point, all else was like a set of dominos, each one cascading down because the one before it had fallen moments earlier.
The past is the past. The future is not necessarily already written. It could be that this moment, right now, consists of one of those dominos, about to fall, and once it falls, the next thing that happens is as a direct and unyielding result of the falling domino that just fell. In this perspective, the future could be unplanned and open ended, though it is entirely dependent on the decisions and actions that came before.
Some would describe this viewpoint, in which your steps are either laid out in advance or inextricably connected, as depicting what is commonly called fate, typically considered something predetermined that you are, in a sense, merely carrying out, an inch at a time. The word destiny is used in a somewhat similar sense, though it usually suggests a target point rather than the steps in-between, such as it being your destiny to become rich and famous, though how you get there is perhaps not predetermined, yet you will indeed get there.
In the philosophy field, a concept known as determinism (not to be confused with the computer science meaning) is used to suggest that we are bound by this cause-and-effect aspect. You can find some wiggle room to suggest that you might still have free-will under determinism, and so there’s a variant known as hard determinism that closes off that loophole and claims that dovetailing with the cause-and-effect there is no such thing as free-will.
Depending upon which philosopher you happen to meet while getting a cup of java at your local coffee bar, they might be a compatibilistic believer, meaning that both determinism and free-will can co-exist, or they might be an incompatibilistic believer, asserting that if there is determinism then there is no such thing as free-will.
Some are worried that if you deny that free-will exists, it implies that perhaps whatever we do is canned anyway, and so it apparently makes no difference to try and think things through, you could presumably act seemingly arbitrarily. In that case, your arbitrariness is not actually arbitrary, and it is only you thinking that it is, when in fact it has nothing to do with randomness and one way or another it was already predetermined for you. Thus, chuck aside all efforts to try and decide what to do, since the decision was already rendered.
This kind of thinking tends to drive people toward a type of fatalism. At times, they can use this logic to opt to transgress against others, shrugging their shoulders and saying that it was not them per se, it was instead whatever non-free-will mechanism that they assert brought it to fruition.
Of course, under a non-free-will viewpoint, maybe those that kept trying to think things through were meant to do so, as due to the third-party or due to the script or due to the cause-and-effect, while people that shift into seemingly being purely arbitrary are actually under the spell of one of those predetermined approaches.
One additional twist is the camp that believes in free-won’t.
Let’s consider the free-won’t aspects.
Maybe you do have some amount of free-will, as per my earlier suggestion that there could be a kind of loosey-goosey version, but the manner in which it is exercised involves a veto-like capability.
Here’s how that might work. Your non-free-will aims to get you to wave your arm in the air, which accordingly you would undertake to do, since we’re saying for the moment you don’t have free-will to choose otherwise.
The free-won’t viewpoint is that you do have a kind of choice, a veto choice. You could choose to not do the thing that the non-free-will stated, and therefore you might choose to not wave your arm. In this free-won’t camp, note that you weren’t the originator of the arm waving. You were the recipient of the arm waving command, yet you were able to exercise your own discretion and veto the command that was somehow otherwise given to you.
An important construct usually underlying this viewpoint is that you could not choose to do anything else, since that’s up to the non-free-will origination aspects, and all you can do is choose to either do or not do the thing that the non-free-will commanded. Thus, your veto could be to not wave your arm, but you cannot then decide to kick your feet instead. Nope. The kicking of your feet has to originate via the non-free-will, whereupon your free-won’t get-out-of-jail card allows you to decide not to kick your feet, if that’s what you choose to do.
Those that are the binary types will quickly say you obviously don’t have free-will in the use case of having free-won’t, in that you don’t have true free-will, and you have this measly free-won’t, a far cry from an unburdened free free-will. Others would say that you do have free-will, albeit maybe somewhat limited in scope and range.
I think that lays enough groundwork for moving further into the discussion overall. Do keep in mind that the aforementioned indication is just the tip of the iceberg on the topic of free-will. I’ve left out reams of other angles on the topic. Consult your philosopher’s stone for further information about free-will.
Can Free-Will Be Detected Via Neuroscience
So far, it’s been suggested that for humans, we really cannot say for sure whether we have free-will or not. You can make a claim that we do have free-will, but you then have to presumably prove that there isn’t this non-free-will that is over-the-top of free-will. Some say that the burden of proof needs to be on the non-free-will believers, meaning they need to showcase proof of the non-free-will, otherwise the default is that there is free-will.
Another means to try and break this logjam might be to find one “provable” instance of the existence of free-will, which at least then you could argue that free-will exists, though maybe not all the time and nor everywhere and nor with everyone.
Likewise, some say that if you could find one “provable” instance that there is the existence of non-free-will, you could argue that there is at least one case of non-free-will that presumably overpowers free-will, which might not be the case all the time or for everywhere and nor for everyone, yet it does nonetheless exist (if so proven).
This fight over free-will has drawn scrutiny from just about every domain or discipline that bears on the topic. The field of philosophy is the most obvious such domain. There is also the field of psychology, trying to unlock the mysteries of the mind, as does the field of cognitive science. We can also pile in the neurosciences, which likewise aim to gauge how the brain works, and ultimately how the brain arrives at the act of thinking.
One key study in neuroscience that sparked quite a lot of follow-on effort was undertaken by Benjamin Libet, Curtis Gleason, Elwood Wright, and Dennis Pearl in 1983 (see https://academic.oup.com/brain/article-abstract/106/3/623/271932).
In their study, they attempted to detect cerebral activity and, per their experiment, claimed that there was brain effort that preceded the human subjects’ conscious awareness of performing a physical motor act. As stated by the researchers:
“The recordable cerebral activity (readiness-potential, RP) that precedes a freely voluntary, fully endogenous motor act was directly compared with the reportable time (W) for appearance of the subjective experience of ‘wanting’ or intending to act. The onset of cerebral activity clearly preceded by at least several hundred milliseconds the reported time of conscious intention to act.”
Essentially, if you were told to lift your arm, presumably the conscious areas of the brain would activate and send signals to your arm to make it move, which all seems rather straightforward. This particular research study suggested that there was more to this than meets the eye. Apparently, there is something else that happens first, hidden elsewhere within your brain, and then you begin to perform the conscious activation steps.
You might be intrigued by the conclusion reached by the researchers:
“It is concluded that cerebral initiation of a spontaneous, freely voluntary act can begin unconsciously, that is, before there is any (at least recallable) subjective awareness that a ‘decision’ to act has already been initiated cerebrally. This introduces certain constraints on the potentiality for conscious initiation and control of voluntary acts.”
Bottom-line, this study was used by many to suggest that we don’t have free-will. It is claimed that this study shows a scientific basis for the non-free-will basis. Furthermore, the time delay between the alleged subconscious effort and the conscious effort initiation became known as Libet’s W, the amount of time gap between the presumed non-free-will and the exercising of some limited kind of free-will (Libet had stated that there might be a free-won’t related to the free-will portion, involving a conscious veto capability).
Not everyone sees this study in the same light. For some, it is a humongous leap of logic to go from the presumed detection of brain activity prior to other brain activity that one assumes is “conscious” activity, and then decide that the forerunner activity had anything at all to do with either non-free-will or free-will.
Many would contend that there is such a lack of understanding about the operations of the brain that making any kind of conclusion about what is happening would be treading on thin ice. There is also the qualm that these were acts involving motor skills, which are presumably going to take much longer, by orders of magnitude, to enact, due to the physical movements involved, while the brain itself can perform zillions of mental operations in that same length of time.
Does the alleged “unconscious” brain activity suggest that something is afoot here? Namely, perhaps it supports the theories about an omnipresent third party that is controlling the brain, or the script theory is correct and the brain is retrieving a pre-planted script from within the recesses of your noggin, or maybe the cause-and-effect theory is validated, since this shows that the “conscious” act was controlled by the “unconscious” causal effect. And so on.
There have been numerous other related neuroscience studies, typically trying to further expound on this W and either confirm or disconfirm via related kinds of experiments. You can likely find as many opponents as proponents about whether these neuroscience studies show anything substantive about free-will.
For my article about the irreproducibility problem, see: https://www.aitrends.com/selfdrivingcars/irreproducibility-and-ai-self-driving-cars/
For my article about the importance of transparency in research, see: https://www.aitrends.com/selfdrivingcars/algorithmic-transparency-self-driving-cars-call-action/
For aspects of plasticity and the brain, see my article: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/
On the topic of self-awareness, take a look at my article: https://www.aitrends.com/selfdrivingcars/self-awareness-self-driving-cars-know-thyself/
Another qualm some have is that these are usually done as retrodiction-oriented studies, meaning that they involve examining the data after-the-fact and trying to interpret and reach conclusions thereof. Some assert that you would need to try and figure out what the brain is doing while it is actually happening, in the midst of acting, rather than recording a bunch of data and then afterward sifting through it.
For those of you who are intrigued by this kind of neuroscience pursuit, you might keep your eye on the work taking place at the Institute for Interdisciplinary Brain and Behavioral Sciences at Chapman University, which has Dr. Uri Maoz as the project leader for a multi-million-dollar non-federal research grant that was announced in March 2019 on the topic of conscious control of our decisions and actions as humans, along with Dr. Amir Raz, professor of brain sciences and director. Participants in the effort include Charité Berlin (Germany), Dartmouth, Duke, Florida State University, Harvard, Indiana University Bloomington, NIH, Monash University (Australia), NYU, Sigtuna (Sweden), Tel Aviv University (Israel), University College London (UK), University of Edinburgh (UK), and researchers at UCLA and Yale.
Stepwise Actions and Processes
Some would argue that the brain does not necessarily operate in a stepwise fashion and that it is rife with parallelism. Therefore, laying claim that A happens before B is somewhat chancy, when in fact the odds are that A and B are actually happening at the same time or in some kind of time-overlapping manner. It is perhaps more nonlinear than linear, and only our desire to simplify how things work leads us to flatten the brain’s operations into a step-at-a-time sequential description.
Be that as it may, let’s for the moment go along with the notion of an overarching linear progression, and see where that takes us.
Consider that we have a human who is supposed to move their arm. The end result of the effort involves the arm movement, and presumably, to get the arm to move, there is some kind of conscious brain activity to make it happen.
We have this:
Conscious effort -> Movement of arm
According to some of the related neuroscience research, those two steps are actually preceded by an additional step, and so I need to include the otherwise hidden or unrealized step into the model we are expanding upon herein.
Unconscious effort -> Conscious effort -> Movement of arm
Let’s add labels to these, as based on what some believe we can so label:
Unconscious effort (non-free-will) -> Conscious effort (free-will that’s free-won’t) -> Movement of arm
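As a toy illustration only (not a claim about how the brain actually works, and with all names hypothetical), the free-won’t model above can be sketched as a tiny pipeline in which an unconscious stage originates a command and a conscious stage can only veto it, never substitute an action of its own:

```python
# Toy model of the "free-won't" pipeline: the unconscious stage
# originates the command, and the conscious stage can only pass it
# through or veto it, never originate a different action.

def unconscious_origination():
    # The non-free-will source proposes an action.
    return "wave_arm"

def conscious_stage(command, veto=False):
    # Free-won't: either let the command through or veto it.
    # Note there is no way to return a substitute action here.
    return None if veto else command

def enact(command):
    # Movement only occurs if the command survived the veto stage.
    return f"performed: {command}" if command else "no movement"

# No veto exercised: the command flows through and the arm waves.
result = enact(conscious_stage(unconscious_origination()))
# Veto exercised: the arm does not wave, and nothing else happens either.
vetoed = enact(conscious_stage(unconscious_origination(), veto=True))
```

The key design constraint, matching the free-won’t camp’s view, is that `conscious_stage` has no code path for inventing a new command such as kicking your feet.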
Here’s a bit of a question for you: does the conscious effort realize that there is an unconscious effort (namely the unconscious effort that precedes the conscious effort), or is the conscious effort blissfully unaware of the unconscious effort (which presumably launched the conscious effort)?
You might say that the question relates to the earlier discussion about the knowingness, or lack thereof, about the non-free-will initiations. I’ve stated that some believe there is an undetectable third party or a laid-out script or a cause-and-effect, none of which are seemingly knowable to us humans, and therefore we can neither prove nor disprove that these non-free-will controllers are acting upon us.
Maybe the conscious effort is blind to the unconscious effort, and perhaps is acting as though it is under free-will, yet it is actually not.
Or, one counter viewpoint is that maybe the conscious and unconscious work together, knowingly, and are really one overall brain mechanism and it is a fallacy on our part to try and interpret them as separate and disjointed.
Is the conscious effort a process, of its own, running on its own, or so it assumes, or might the unconscious effort and the conscious effort be running in concert with each other?
For that matter, I suppose we could even ponder whether the unconscious effort is knowingly sparking the conscious effort, or whether instead the unconscious effort is its own independent process and has no idea that it causes something else to happen after it acts.
I don’t want to go too far down this rabbit hole for now; I bring up these seemingly abstract matters in order to make this discussion, paradoxically, more concrete.
How can we make this more concrete?
Notice that I’ve referred to the unconscious effort and the conscious effort as each being a process. If we shift this discussion now into a computer-based model of things, we might say that we have two processes, running on a computer, and for which they might involve one process preceding the other, or not, and they might interact with each other, or not.
These are processes happening in real-time.
It could be that either of the two processes knows about the other. Or, it could be that the two processes do not know about each other.
For anyone that designs and develops complex real-time computer-based systems, you have likely dealt with these kinds of circumstances. You have one or more processes, operating in real-time, and some of which will have an impact on the other processes, at times being in front of some other process, at other times taking place after some other process, and all of which might or might not be directly coordinated.
Consider a modern-day car that has a multitude of sensors and is trying to figure out the roadway and how to undertake the driving task.
You could have a process that involves collecting data and interpreting the data from cameras that are on the car. You might have a process that does data collection and interpretation of radar sensors. The process that deals with the cameras and the process that deals with the radar could be separate and distinct, neither one communicates with the other, neither one happens before or necessarily after the other. They operate in parallel.
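The separate-and-distinct arrangement just described can be sketched in a few lines. This is a minimal illustration with placeholder data and hypothetical names, not a depiction of any real self-driving car software: two threads, one per sensor, each collecting and interpreting its own data with no communication between them.

```python
import queue
import threading

# Sketch of two independent sensor processes (camera and radar) that
# run in parallel and never communicate with each other. The sensor
# readings and the "interpretation" step are placeholders.

def sensor_process(name, readings, results):
    # Each process collects and interprets its own data, unaware of
    # any other process doing the same thing at the same time.
    for reading in readings:
        results.put((name, f"interpreted {reading}"))

camera_results: queue.Queue = queue.Queue()
radar_results: queue.Queue = queue.Queue()

camera = threading.Thread(
    target=sensor_process, args=("camera", ["frame-1", "frame-2"], camera_results))
radar = threading.Thread(
    target=sensor_process, args=("radar", ["echo-1"], radar_results))

# Both run concurrently; neither strictly precedes the other.
camera.start()
radar.start()
camera.join()
radar.join()
```

Each process writes only to its own results queue; whether the camera or radar thread finishes first is up to the scheduler, which mirrors the point that neither one necessarily happens before or after the other.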
For my article about processes and the classic Sleeping Barber problem, see: https://www.aitrends.com/selfdrivingcars/sleeping-barber-problem-and-ai-self-driving-cars/
For aspects about cognition timing, see my article: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
For how processes deal with faultiness, see my article: https://www.aitrends.com/selfdrivingcars/fail-safe-ai-and-self-driving-cars/
For the aspects of processes that are devised to argue with each other, see my article: https://www.aitrends.com/features/ai-arguing-machines-and-ai-self-driving-cars/
AI Free-Will Question and Self-Driving Cars Too
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. The AI system is quite complex and involves thousands of simultaneously running processes, which is important for purposes of undertaking needed activities in real-time, but also offers potential concerns about safety and inadvertent process-related mishaps.
Allow me to elaborate.
I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
Here are the usual steps involved in the AI driving task:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
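The steps above can be sketched as a minimal skeleton of one pass through the driving-task cycle. Every function name and data value here is hypothetical and purely illustrative; a real system would run these stages continuously and largely in parallel rather than once in sequence.

```python
# Hypothetical one-pass sketch of the AI driving-task steps listed above.

def collect_and_interpret_sensors():
    # Step 1: sensor data collection and interpretation (placeholder data).
    return {"camera": ["pedestrian"], "radar": ["vehicle_ahead"]}

def sensor_fusion(sensor_data):
    # Step 2: reconcile the separate sensor interpretations into one view.
    return {"objects": sorted(obj for objs in sensor_data.values() for obj in objs)}

def update_virtual_world_model(world, fused):
    # Step 3: fold the fused view into the ongoing virtual world model.
    world["objects"] = fused["objects"]
    return world

def plan_actions(world):
    # Step 4: decide what the self-driving car should do next.
    return "slow_down" if "pedestrian" in world["objects"] else "maintain_speed"

def issue_car_controls(action):
    # Step 5: turn the planned action into car control commands.
    return {"brake": 0.3} if action == "slow_down" else {"throttle": 0.2}

world_model = {"objects": []}
fused = sensor_fusion(collect_and_interpret_sensors())
world_model = update_virtual_world_model(world_model, fused)
command = issue_car_controls(plan_actions(world_model))
```

The point of the sketch is the hand-off between stages: each step consumes the prior step’s output, which is the linear framing, even though, as noted earlier, real systems run many such processes simultaneously.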
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.
For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/
Let’s return to the discussion about free-will.
AI Systems With Or Without Free-Will
Can an AI system have free-will?
This is a somewhat hotly debated topic these days. There are some that are worried that we are in the midst of creating AI systems that could become presumably sentient, and as a result, maybe they would have free-will.
You might say, great, welcome to the free-will community, assuming you believe that humans have free-will, and might believe it’s a boon to the free-will population to have AI machine-based free-willers around.
On the other hand, some are suggesting that an AI that has free-will might not toe the line in terms of what we humans want the AI to be or do. It could be that the free-will AI decides it doesn’t like us and, using its own free-will, opts to wipe us from the earth or enslave us. This would certainly seem like a rather disappointing turn of events, namely that we somehow spawned free-will into machines and they turned on us, rather than being grateful or at least respectful of us.
There are all sorts of twists and turns in that debate. If we as humans don’t have free-will, presumably the creation of AI would also not have free-will, since it is being crafted by the non-free-will that forced us or led us to make such AI. Or, you could say that the non-free-will decided that it was time to allow for true free-will and figured that doing so might be wasted on humans, and as a result allowed the humans to make something that does have free-will. On and on this goes around.
I’d like to tackle at least one aspect that seems to me to be relatively clear-cut.
For today’s AI, tossing into it the best that anybody in AI can do right now in terms of Machine Learning and Deep Learning, along with deep Artificial Neural Networks, it would seem like this is really still a Turing Machine in action. I realize this is a kind of proof-by-reduction, in which I am saying that one thing reduces to the equivalent of another, but I think it is fair game.
Would anyone of any reasonable nature be willing to assert and genuinely believe that a Turing Machine can somehow embody or exhibit free-will?
I dare say it just seems over-the-top to think it has or could have free-will. Now, I realize that also takes us into the murky waters of what is free-will. Without getting carried away here and having to go on and on, I would shorten this to say that a Turing Machine has no such spark that we tend to believe is part of human related free-will.
I’m sure that I’ll get emails right away criticizing me for supposedly saying or implying that we cannot ever have AI that might have free-will (if there is such a thing), which is not at all what I’ve said or implied, I believe. For the kind of computer-based systems that we use today, I believe I’m on safe ground about this, but I quite openly say that there are future ways of computing that might well go beyond what we can do today, and whether or not those might have a modicum of free-will, well, who’s to say.
For my article about the singularity, see: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/
For the super-intelligence future dangers we might face, see my article: https://www.aitrends.com/selfdrivingcars/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/
For conspiracies about AI systems and the takeover, see my article: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/
For my article about whether AI is a Frankenstein, see: https://www.aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/
For the Turing Test and AI, see my article: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/
AI Self-Driving Cars and Lessons Based on Free-Will Debate
Let’s assume that we are able to achieve Level 5 self-driving cars. If so, does that mean that AI has become sentient? The answer is not necessarily.
Some might say that the only path to a true Level 5 self-driving car involves having the AI be able to showcase common-sense reasoning. Likewise, the AI would need to have Artificial General Intelligence (AGI). If you start cobbling together those aspects and they are all indeed a necessary condition for the advent of Level 5, one supposes that the nearness to some kind of sentience is perhaps increasing.
It seems like a fairly sound bet that we can reach Level 5 without going quite that far in terms of AI advances. Albeit the AI driving won’t perhaps be the same as human driving, it will be sufficient to perform the Level 5 driving task.
I’d like to leverage the earlier discussion herein about processes and relate that aspect to AI self-driving cars. This will give us a chance to cover some practical, day-to-day ground after the loftier discussion of free-will so far, which was hopefully interesting and now leads us to consider some everyday perfunctory matters too.
Let’s start with a use case that was brought up during a recent Tesla event known as their Autonomy Investor Day, involving a car and a bicycle and how automated capabilities might detect such aspects (the Tesla event took place on April 22, 2019 at Tesla HQ and was live-streamed on YouTube).
For my article about common-sense reasoning and AI, see: https://www.aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/
For aspects about AGI, see my article: https://www.aitrends.com/selfdrivingcars/genius-shortage-hampers-solving-ai-unsolved-problems-the-case-of-ai-self-driving-cars/
For idealism and AI pursuits, see my article: https://www.aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/
For the dangers of noble cause corruption and AI aspects, see: https://www.aitrends.com/selfdrivingcars/noble-cause-corruption-and-ai-the-case-of-ai-self-driving-cars/
Use Case of The Bike On Or Off The Car
Suppose you have an AI self-driving car that is scanning the traffic ahead. Turns out that there is a car in front of the self-driving car, and this car has a bike that’s sitting on a bike rack, which is attached to the rear of the car. I’m sure you’ve seen this many times. If you want to take your bicycle someplace, you put a bike rack onto the back of your car, and you then mount the bike onto the bike rack.
The variability of these bike racks and mountings can be somewhat surprising.
There are some bike racks that can hold several bikes at once. Some bike racks can only handle one bike, or maybe squeeze in two, and yet the owner has somehow mounted, say, four bikes onto it. I've seen some mounted bikes that were not properly placed into the rack and looked as though they might fall out at any moment.
A friend told me that one time she saw a bike come completely off the bike rack while a car was in motion, which seems both frightening and fascinating to have seen. Frightening because a bike that becomes a free-wheeling (ha, almost said free-will!) object on the roadway, beyond the control of a human bike rider, is a scary proposition for nearby traffic and nearby pedestrians.
Imagine if you were riding your own bike in the bike lane, minding your own business, riding safely, and another bike suddenly flew off the rear of a car and smashed into you. I dare say no one would believe your story.
Suppose you were driving a car and came upon the madcap bike; it creates difficult choices. A small dropped item like a hubcap you might be willing to simply run over, rather than making a radical and potentially dangerous driving maneuver, but a bike is a sturdier and larger object, one that by striking you could do a lot of damage to both the car and the bike. In a split second, you'd need to decide which was the better choice: avoid the zany bike and in so doing perhaps endanger yourself and other traffic, or ram into the bike and possibly endanger yourself and other traffic anyway. Neither option is pleasant.
I did once see a mounted bike that caught my attention. The bike was mounted incorrectly and protruded far beyond the rightmost side of the car. This created a dangerous kind of dagger, poking over into the lane to the right of the car. I wondered whether the driver realized what they had done, or whether they were oblivious and had not grasped the predicament they had created for all other nearby car traffic.
I watched as several cars approached in the right lane, adjacent to the car with the improperly mounted bike, which was in the left lane. Those cars often seemed to fail to discern the protruding element until the last moment. Car after car would swerve suddenly to their right, attempting to avoid the spoked wheel of the bike. The swerving was not overly dangerous when there was no other traffic to the further right, but when there was other such traffic, the swerving avoiders would cause other cars in those further right lanes to also weave and semi-panic.
In any case, let’s consider that there is a process in the AI system that involves trying to detect cars that are nearby to the AI self-driving car. This is typically done as a result of Machine Learning and Deep Learning, involving a deep Artificial Neural Network getting trained on the images of cars, and then using that trained capability for real-time analyses of the traffic surrounding the self-driving car.
You might have a second process that involves detecting bicycles. Once again, it is likely the process was developed via Machine Learning and Deep Learning and consists of a deep Artificial Neural Network that was trained on images of bikes.
For the moment, assume then that we have two processes, one to find cars in the camera images and video streaming while the self-driving car is underway, and a second process to find bicycles.
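As a minimal sketch of this two-process arrangement (the function names and the toy frame format are my own illustrative assumptions, not any production pipeline; a real system would run trained deep neural networks over pixel data):

```python
# Toy sketch: two independent detection processes analyzing one "frame".
# Here a frame is simulated as pre-labeled (label, bounding_box) pairs;
# boxes are (x, y, width, height) in pixels.

def detect_cars(frame):
    # Stand-in for a deep neural network trained on car images.
    return [box for label, box in frame if label == "car"]

def detect_bikes(frame):
    # Stand-in for a separate network trained on bicycle images.
    return [box for label, box in frame if label == "bike"]

def analyze_frame(frame):
    # Each process reports its own findings, unaware of the other process.
    return {"cars": detect_cars(frame), "bikes": detect_bikes(frame)}

# A car with a bike mounted on its rear rack: two overlapping boxes.
frame = [("car", (100, 200, 180, 90)), ("bike", (95, 210, 40, 60))]
detections = analyze_frame(frame)
# Both the car and the bike are reported as independent objects, even
# though the bike is physically attached to the car.
```

The point of the sketch is that neither process has any notion that its object might be mounted on the other's object, which is exactly the predicament described next.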
For my article about street scene analyses by AI, see: https://www.aitrends.com/selfdrivingcars/street-scene-free-space-detection-self-driving-cars-road-ahead/
For aspects about federated Machine Learning, see my article: https://www.aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/
For my article about Deep Learning aspects, see: https://www.aitrends.com/selfdrivingcars/ai-machine-child-deep-learning-the-case-of-ai-self-driving-cars/
For the handling of roadway debris by AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/roadway-debris-cognition-self-driving-cars/
During the Tesla event, an image was shown of a car with a bike mounted on a rear bike rack. It was demonstrated that the neural network automation was detecting both the car and the bike, each as independent objects.
Now, this could be disconcerting in one manner, namely if the AI is under the belief that there is a car ahead of the self-driving car, and there is also a bike ahead of the self-driving car, each of which is doing their own thing. You might be startled to think that these would be conceptually two different matters. As a human, you know that the bike is really mounted on the car and not under its own sense of motion or actions. The bike is going along for the ride, as it were.
I guess you could say that the bike has no free-will at this moment and is under the non-free-will exerted control of the car.
If the AI, though, is only considering the car as a separate matter, and the bike as a separate matter, it could get itself tied into a bit of a knot. The bike is facing in some particular direction, depending upon how it was mounted, so let's pretend it is mounted with the handlebars on the right side of the car. The AI might be programmed to assume that a bicycle will tend to move in the direction its handlebars are facing, as would normally be the case.
Imagine the curious nature then of what the AI is perceiving. A car is ahead of the self-driving car. It is moving forward at some speed and distance from the self-driving car. Meanwhile, there's a bike at the very same distance, moving at the very same speed, but doing so in an oddball manner: it is moving forward yet it is facing to the side.
Where is the bike next going to be?
The standard assumption would be that the bike will be moving to the right, and thus it would be a reasonable prediction to anticipate that the bike will soon end up to the right. If the car with the mounted bike continues straight ahead, the bike obviously won't end up going to the right. Of course, if the car with the mounted bike were to move into the right lane, it would likely lend credence to the mistaken notion that the bike is moving under its own power and has now ridden into the right lane.
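To see how a heading-based predictor goes astray here, consider a toy constant-velocity sketch (my own illustrative model, not any actual motion-prediction code), using the convention that x points forward along the lane and y points to the right:

```python
import math

def predict_position(x, y, heading_deg, speed, dt):
    # Naive model: assume the object travels in the direction it faces.
    # heading_deg of 0 means facing forward; 90 means facing to the right.
    rad = math.radians(heading_deg)
    return (x + speed * math.cos(rad) * dt,
            y + speed * math.sin(rad) * dt)

# A mounted bike observed facing sideways (90 degrees) while traveling
# at 10 m/s (the car's speed). The naive model predicts it ends up
# roughly 10 meters to the right one second later, whereas in reality
# it stays directly behind the car, having moved 10 meters forward.
predicted = predict_position(0.0, 0.0, 90.0, 10.0, 1.0)
```

The mismatch between the predicted sideways drift and the bike's actual forward motion is precisely the knot described above.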
One viewpoint of this matter from an AI systems perspective is that the car ahead should be considered as a large blob that just so happens to have this other thing on it, and that the system needn't care what that thing is. All that is needed is to realize that the car occupies a region of size NxM, which encompasses the added scope of the bike.
So, we have two processes, one finding cars, one finding bikes, and the bike-finding process is potentially misleading the rest of the AI system by clamoring that there is a bike ahead of the self-driving car. The AI developers realized that this is both true and false at the same time, in that there is indeed a bike there, but it is not a free-wheeling bike.
One reaction by the AI developers involves “fixing” the AI system to ignore a bike when it is seemingly mounted on the back of a car. There is presumably no need to detect such a bike. It doesn’t matter that it so happens to be a bike. If the car had a piano mounted on the back of the car, it wouldn’t matter that it was a piano, and instead merely noteworthy that the car is larger in size than might usually be the case (when you include the scope of the piano).
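A simplified way to sketch that "large blob" fix (a heuristic of my own for illustration, with assumed (x, y, width, height) boxes; the actual implementation details were not disclosed) is to absorb any detected bike that overlaps a detected car into an enlarged car box:

```python
def overlaps(a, b):
    # Boxes are (x, y, width, height); True if the two boxes intersect.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def merge(a, b):
    # Smallest single box enclosing both boxes.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x, y = min(ax, bx), min(ay, by)
    return (x, y, max(ax + aw, bx + bw) - x, max(ay + ah, by + bh) - y)

def absorb_mounted_bikes(cars, bikes):
    # Any bike overlapping a car is folded into that car's extent; only
    # free-standing bikes are still reported as bikes.
    free_bikes = []
    for bike in bikes:
        for i, car in enumerate(cars):
            if overlaps(bike, car):
                cars[i] = merge(car, bike)
                break
        else:
            free_bikes.append(bike)
    return cars, free_bikes

cars, free_bikes = absorb_mounted_bikes([(100, 200, 180, 90)],
                                        [(95, 210, 40, 60)])
# The mounted bike disappears as an independent object; the car's
# bounding box simply grows to encompass it.
```

Notice that once the bike is absorbed, all knowledge that it was a bike is gone, which is the worrisome trade-off discussed next.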
I certainly grasp this approach, yet it also seems somewhat worrisome.
A human knows that a bike is a bike. A bike has wheels and it can roll around. A human knows that a bike mounted on the back of a car can come loose. A bike that comes loose can possibly fall onto the roadway like a wooden pallet, making a thud and not going anywhere, or it could potentially move more freely due to the wheels. Of course, without a bike rider, presumably the bike is not going to be able to ride along per se, yet with the motion already underway as a result of being on the car, there's a chance that the bike could "roll" for some distance.
You might be objecting and saying that the odds of a bike coming off a bike rack are slim, and it would also seem unlikely that, once the bike did fall off, it would keep moving along on the roadway. As such, with such slim odds, it seems like a rather remote edge case, and you could just reduce the whole topic to not caring about the bike, instead relying upon some other part of the AI that might deal with debris that falls onto the street.
The counter argument is that it is still worthwhile to realize that the bike is a bike, being able to therefore gauge what might happen if the bike does fall off the car. It might be best to be proactive and anticipate that such a mishap might occur, rather than waiting until it does happen and having to react, not having gotten prepared for the possibility of the mishap.
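One way to sketch that proactive alternative (again my own illustrative heuristic, with assumed box formats, not anyone's production code): rather than discarding the bike detection, keep it and tag it as an attached hazard that a defensive-driving module could monitor:

```python
def overlaps(a, b):
    # Boxes are (x, y, width, height); True if the two boxes intersect.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def tag_mounted_bikes(cars, bikes):
    # Keep the bike's identity: bikes overlapping a car become tagged
    # hazards, so the system can anticipate a fall-off mishap rather
    # than merely react to one after the fact.
    attached, free = [], []
    for bike in bikes:
        host = next((car for car in cars if overlaps(bike, car)), None)
        if host is not None:
            attached.append({"bike": bike, "attached_to": host,
                             "risk": "may detach and become roadway debris"})
        else:
            free.append(bike)
    return attached, free

attached, free = tag_mounted_bikes([(100, 200, 180, 90)],
                                   [(95, 210, 40, 60)])
# One attached hazard is tracked; no free-standing bikes are reported.
```

The design choice here is to preserve information rather than simplify it away, trading a bit of bookkeeping for the ability to anticipate the mishap.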
This all ties, too, to the topic of how much AI systems should be employing defensive driving tactics, which most are not yet doing. By-and-large, the focus of most automakers and tech firms has been the reactive side of driving: react once something happens, rather than trying to anticipate what might happen. Novice drivers tend to be the same way.
I’ve emphasized many times in my writings and speeches that the lack of defensive driving tactics for the AI systems will make them brittle and vulnerable. I don’t view that defensive driving as an edge or corner case to be handled at some later time, which regrettably some others do.
For more about edge or corner cases, see my article: https://www.aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/
For my article about the aspects of defensive driving tactics, see: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/
For why AI self-driving cars need a bit of greed, see my article: https://www.aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/
For my article about the brittleness aspects, see: https://www.aitrends.com/ai-insider/machine-learning-ultra-brittleness-and-object-orientation-poses-the-case-of-ai-self-driving-cars/
For the role of micro-movements, see my article: https://www.aitrends.com/ai-insider/micro-movements-in-driving-behaviors-crucial-for-ai-self-driving-cars/
When discussing the topic of free-will, it can become quite abstract and tilt towards the theoretical and the philosophical side of things. Such discussions are worthwhile to have, and I hope that my offering of a taste of it will be of interest to you, perhaps spurring you to look further into the topic.
I’ve tried to also bring some of the topic to a more day-to-day realm. You can think of the free-will and non-free-will discussion as being about control or lack-of-control over processes (in a more pedantic, mundane way, perhaps).
When developing real-time AI systems, such as AI self-driving autonomous cars, you need to be clearly aware of how those processes are running and what kind of control they have, or lack thereof.
If you are the type of reader who, upon my opening remark that I might or might not reveal whether humans have free-will, skipped the entire piece and jumped straight to this conclusion in hopes of seeing what I proclaimed, well, you'll have to do the hard work and actually read the whole piece.
You can then decide whether or not I did state whether free-will exists or not, doing so by your own choice of opting to actually read the piece. That’s of your own free-will. Or is it?
Copyright 2019 Dr. Lance Eliot
This content is originally posted on AI Trends.