On Whether AI Can Form ‘Intent’ Including In The Case Of Autonomous Cars

AI Trends Insider Lance Eliot explores whether the AI of a self-driving car or any other AI has any more intent than an inanimate toaster. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

These remarks all have something in common:

  • The devil made me do it
  • I didn’t mean to be mean to you
  • Something just came over me
  • I wanted to do it
  • You got what was coming to you
  • My motives were pure

What’s that all about?

You could say that those are all various ways in which someone might express their intent or intentions.

In some instances, the person is seemingly expressing their intent directly, while in other cases they appear to be avoiding being pinned down on their intentions and are trying to toss the intent onto the shoulders of someone or something else.

When we express our intent, there is no particular reason to necessarily believe that it is true per se.

A person can tell you their intentions and yet be lying through their teeth.

Or, a person can offer their intentions and genuinely believe that they are forthcoming in their indication, and yet it might be entirely fabricated and concocted as a kind of rationalization after-the-fact.

Consider too that a person might be offering acrid cynical remarks, for which their intention is buried or hidden within their words, and you accordingly need to somehow decipher or tease out the real meaning of their quips.

There is also the straightforward possibility that the person is utterly clueless about their intention, and thus is unable to precisely state what their intent is.

And so on.

This naturally leads us to contemplate what intent or intention purports to consist of.

The common definition of intent or intention is that it involves the act of determining something that you want and plan to do, and usually emphasizes that the effort of “intent” encompasses mentally determining upon some action or result.

By referring to the mind or mental processing, the word “intent” opens quite a Pandora’s box.

Simply stated, there is no ironclad way to know what someone’s mind contains or did contain.

We do not have any means to directly and fully interrogate the brain and have it showcase to us the origins of thoughts and how they came to exist. Our brains and our minds are locked away in our skulls, and the only path to figuring out what is going on consists of poking around from the outside or marginally so from the inside.

Now, yes, you can use an MRI and other techniques to gauge the electromagnetic or biochemical activity of the brain, but be clear that this is a far cry from being able to connect the dots directly and definitively indicate that this thought or that thought was derived from these neurons and those neurons.

We have not yet reverse engineered the brain sufficiently to make those kinds of uncontestable proclamations.

Overall, one could even argue that the whole concept of intent and intentions is somewhat nebulous and perhaps a construct of what we want to believe about our actions. Some would say that we want to believe that we do things for a reason, and therefore we posit that there is this thing called “intent,” which offers a rational explanation for what otherwise might be nothing of the kind.

For those who relish debating the topic of free will, perhaps none of us has any capability of intent; we are all pre-programmed to carry out acts, none of which relates to any personal intent, and we are simply acting as puppets on a string.

To see my explanation about the nature of free will and AI, visit this link here: https://www.aitrends.com/ai-insider/is-there-free-will-in-humans-or-ai-useful-debate-and-for-ai-self-driving-cars-too/

On the aspects of human-in-the-loop versus out-of-the-loop, see my indication here: https://www.aitrends.com/ai-insider/human-in-the-loop-vs-out-of-the-loop-in-ai-systems-the-case-of-ai-self-driving-cars/

For aspects about trying to achieve Einstein levels of AI, see my coverage: https://www.aitrends.com/ai-insider/considering-the-practical-impacts-of-achieving-einstein-level-ai/

To learn about robots that might drive autonomously, see my explanation here: https://www.aitrends.com/ai-insider/what-if-we-made-a-robot-that-could-drive-autonomously/

More On The Nature Of Intentions

I don’t want to go too far off the rails here but did want to mention the philosophical viewpoint that intent might not exist in any ordinary manner and we cannot assume as such that it does.

Since we are on a roll here about thinking widely, there is a handy catchphrase about intent from George Bernard Shaw that offers additional food for thought: “We know there is intention and purpose in the universe, because there is intention and purpose in us.”

Notice that this is quite reassuring: since we generally believe that there is intention within us, this somehow implies that there is intention in the universe, and therefore we can remain sanguine and be comforted that everything has a meaning and intention (though some might counterargue that the universe, and we, are completely random and purposeless).

While we are teetering on the edge of this precipice, let’s keep going.

Maybe intent and intention is a cover-up for the acts of humanity.

If you do something adverse, the intent might be a means to placate others about your dastardly deed and act as a distractor from the act committed.

On the other hand, maybe your act was well-intended, yet it led to something adverse, inadvertently and not by design, therefore your intention ought to be given due weight and consideration.

Time to quote another fascinating insight about intent, this one by the revered George Washington: “A man’s intentions should be allowed in some respects to plead for his actions.”

Note that Washington’s quote refers to a man’s intentions, but we can reasonably allow the meaning to encompass both men and women, restated as: a person’s intentions should be allowed in some respects to plead for their actions.

Overall, mankind certainly seems to have accepted the stark and generally unchallenged belief that there are intentions and that those intentions are crucial to the acts we undertake.

That being the case, what else has intentions?

Does your beloved pet dog or cat have intentions?

Do all animals have intentions of one kind or another?

There is an acrimonious debate about the idea that animals can form intentions.

Some say that it is the case that they do, while others contend that they quite obviously cannot do so. The usual basis for arguing that animals cannot have intentions is that they mentally are too limited and that only humans have the mental capacity to form intent or intentions. Be careful making that brash claim to any dog or cat lover.

Can a toaster have an intention?

I ask because the other day, my toaster burnt my toast.

Did the toaster do so intentionally, or was it an unintentional act?

You might be irked at such a question and immediately recoil that the toaster obviously lacks any semblance of intent. It is merely a mindless machine that makes toast.

There isn’t any there, there.

Without the ingredient or essential component of mental processing, you would seem to be hard-pressed to ascribe intent to something so ordinary and mechanical.

This brings us to a most intriguing twist and the intended focus of this discussion, namely, where does AI fit into this murky matter of intent and intention?

AI systems are increasingly becoming a vital part of our lives.

There are AI systems that make life-impacting diagnoses from X-ray images and seem to discern whether disease is present. There are AI systems that decide whether you can get the car loan that you wanted to obtain. Etc.

Is AI more akin to humans and therefore able to form intent, or is AI more similar to a toaster and unable to have any substance of intent?

Lest you think this is an entirely abstract point and not worthy of real-world attention, consider the legal ramifications of whether AI can form intent and whether this is noteworthy or not.

In our approach to jurisprudence, we give a tremendous amount of importance to intent, discussed in legal circles under notions such as mens rea and scienter, and criminal law makes use of intent to ascertain the nature of the crime that can be charged and the penalty that might ride with the crime undertaken.

A toaster that goes awry will hopefully be a mildly adverse consequence (I can choose to eat the burnt toast or toss it into the trash), while if an AI system that can drive a car goes awry, the result can be catastrophic.

Using AI for the driving of cars is a life-or-death instance of AI that is emerging for use in our daily lives.

When you see a car going down the street and there isn’t a human driver at the wheel, you are tacitly accepting the belief that the AI can drive the car and will not suddenly veer into a crowd of pedestrians or plow into a car ahead of it.

You might counter-argue that the same can be said of human drivers, whereby when a human driver is at the wheel, you likewise are accepting the belief that the human will not suddenly ram into pedestrians or other cars.

If the human did so, we’d all be quickly looking for intent.

Can we do the same for AI driving systems in terms of the actions that they undertake, and does it make sense to even try to ascertain such AI-based intent?

Today’s question then is this: As an example of AI and intent, do we expect AI-based true self-driving cars to embody intention and if so, what does it consist of and how would we know that it exists?

For aspects of AI and Machine Learning brittleness, see my indication here: https://www.aitrends.com/ai-insider/goto-fail-and-ai-brittleness-the-case-of-ai-self-driving-cars/

On theories of the triune of the human mind and how it relates to AI, refer to this link: https://www.aitrends.com/ai-insider/your-lizard-brain-and-the-ai-triune-use-case-of-autonomous-cars/

To consider whether AI can possess emotions, see my analysis here: https://www.aitrends.com/ai-insider/ai-empathetic-computing-the-case-of-ai-self-driving-cars/

The AI paperclip problem provides added insights, see my explanation here: https://aitrends.com/ai-insider/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

The Levels Of Self-Driving Cars

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Let’s return to the discussion about intent.

Is the AI that can perform self-driving the same as a toaster?

Intuitively, we might right away proffer that the AI is not at all like a toaster and that making such a callous suggestion undercuts what the AI is accomplishing in being able to drive a car.

Before we dig further into this aspect, I’d like to set the record straight about the AI that can drive a car.

Some assume that the AI needed to drive a car must be sentient, able to “think” and perform mental processing on an equivalent basis to humans. So far, that’s not the case, and it seems that we’ll be able to have AI-based self-driving cars without crossing over into the vaunted singularity (the singularity is considered the moment of having AI that transforms from being everyday computational to becoming sentient, having the same unspecified and ill-understood spark that mankind seems to have).

For the moment, remove sentience from this discussion as to the capabilities of AI, and assume that the AI being depicted is computer-based and has not yet achieved human-like equivalency of intelligence. If AI does someday arrive at the singularity, presumably we would need to have an altogether new dialogue about intent, since at that point the AI would be apparently the “same” as human intelligence in one manner or another and the role of intent in its actions would rightfully come onto the table, for sure.

Consider then these forms of intent:

  1. Inscrutable Intent
  2. Explicated Intent
  3. AI Developer Intent
  4. Inserted Intent
  5. Induced Intent
  6. Emergent Intent

Elaborating Each Of the Forms Of Intent

Let’s start with the notion of inscrutable intent.

It could be that the AI system has an intent, and yet we have no means to figure out what the intent is.

For example, Machine Learning (ML) and Deep Learning (DL) oftentimes use large-scale artificial neural networks (ANNs), which are computer-based simulations somewhat along the lines of what we believe brains do, though the ML/DL of today is extremely simplistic in comparison and not at all akin to the complexities of the human brain.

In any case, the ML/DL is essentially a mathematical model that is computationally being performed, out of which there is not necessarily any logical basis to explain the inner workings. There are just calculations and arithmetic taking place. As such, it is generally considered “inscrutable” since there is no ready means to translate this into something meaningful in words and sentences that would constitute an articulated indication of intent.
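
As a purely illustrative sketch (plain Python rather than any production ML stack, and a single “neuron” rather than a large-scale ANN), consider what a trained model actually consists of once training finishes: nothing but raw numbers, with no articulated intent to be found anywhere.

```python
import math

# Training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# A single logistic "neuron": two weights and a bias, initialized to zero.
w = [0.0, 0.0]
b = 0.0

def predict(x):
    """Sigmoid of the weighted sum -- the whole of the model's 'reasoning'."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the cross-entropy loss.
for _ in range(5000):
    for x, target in data:
        err = predict(x) - target
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

preds = [round(predict(x)) for x, _ in data]
print(preds)   # behavior matches AND
print(w, b)    # but the "why" is just these three raw numbers
```

The behavior is correct, yet inspecting the learned weights and bias offers no sentence-like statement of intent; scale those three numbers up to millions and you have the inscrutability problem in a nutshell.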

Next, consider explicated intent.

Some believe that we might be able to do a type of translation of what is happening inside the AI system, and as such, there is a rising call for XAI, known as explainable AI. This is AI that in one fashion or another has been designed and developed to explain what it is doing, and thus one might say that it could showcase explicated intent.
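
One simple flavor of such an explanation is perturbation-based sensitivity: probe a black-box decision function to see which inputs most sway its output. The sketch below is a toy with entirely hypothetical inputs and a made-up scoring formula; real XAI techniques (e.g., LIME or SHAP) are far more sophisticated.

```python
def black_box(speed, distance_ahead, pedestrian_near):
    # A stand-in "braking urgency" score; the formula is hypothetical.
    return (0.6 * pedestrian_near
            + 0.3 * (1.0 / max(distance_ahead, 1.0))
            + 0.1 * (speed / 100.0))

def importance(fn, baseline, deltas):
    """Crude sensitivity: how much the output moves when each input moves."""
    base = fn(**baseline)
    scores = {}
    for name, delta in deltas.items():
        perturbed = dict(baseline)
        perturbed[name] += delta
        scores[name] = abs(fn(**perturbed) - base)
    return scores

scores = importance(
    black_box,
    baseline={"speed": 50.0, "distance_ahead": 20.0, "pedestrian_near": 0.0},
    deltas={"speed": 10.0, "distance_ahead": 10.0, "pedestrian_near": 1.0},
)
print(max(scores, key=scores.get))  # the factor that most drives the decision
```

An explanation of this kind (“the decision was driven mainly by the nearby pedestrian”) is one candidate for what explicated intent might look like in practice.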

Many argue that you can just drop the whole worry about AI intention and look instead at the AI developer that crafted the AI.

Since AI is a human-created effort, the human or humans that put it together are the intenders, and therefore the intention of the AI is found within the intentions of those humans.

The difficulty with this humans-only view of the intention source is that the human developer might have crafted AI that goes beyond what the AI developer had in mind.

What do we do when the AI morphs in some manner and no longer abides by what the original human developers intended?

You could argue that no matter what the AI does or becomes, the human developers are still responsible and thus they cannot escape the intention hunt simply by raising their arms and protesting that the AI went beyond their intended aims.

This takes us to the next form of intent, inserted intent.

Essentially, when AI developers craft an AI system, there is an embodiment of “intent” into the computational encoding.

When writing code in, say, Python or Java or LISP, you could reasonably make the case that the code itself is a reflection of the intent that the human had in their mind. Likewise, even with ML/DL, you could argue that the nature of how the ANNs are set up and trained is a reflection of the intention of the human developers and therefore the structure leaves a kind of trace or residue which reflects intent.
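
As a hypothetical fragment (the constants, names, and thresholds below are invented for illustration), notice how developer intent leaves a visible trace in code: the named limits, guard clauses, and comments all encode what the humans who wrote them intended, even without any separate statement of intent.

```python
MAX_SPEED_MPH = 25.0      # intent: never exceed a school-zone speed limit
MIN_FOLLOW_GAP_M = 10.0   # intent: keep a safe following distance

def choose_speed(current_gap_m, requested_mph):
    """Return a speed that honors the safety intentions encoded above."""
    if current_gap_m < MIN_FOLLOW_GAP_M:
        return 0.0  # intent: brake rather than tailgate
    return min(requested_mph, MAX_SPEED_MPH)

print(choose_speed(current_gap_m=5.0, requested_mph=40.0))  # gap too small -> 0.0
```

Anyone reading this code later can recover a good deal of the original intent from its structure alone, which is the sense in which intent gets “inserted.”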

Induced intent consists of the AI itself using the foundational intent that was implanted by the human developers and deriving new intent on top of that cornerstone. I do not want to suggest this is some anthropomorphic amalgamation. More simply stated, the code or underlying structure changes, and as such the presumption of underlying intent changes too.

Finally, there is emergent intent, the next level beyond induced intent, as it were. In the case of emergent intent, the “intent” of the AI becomes relatively far removed from any initial intent that was either inserted or induced and seemingly becomes more semi-independent in appearance.

For AI in self-driving cars, some critics point out that we do not yet have any standardized means to identify what the AI intent consists of, other than resorting to asking the AI developers what they did or trying to scrutinize byzantine code.

I’ve predicted that once AI self-driving cars become more prevalent, we will begin to see more and more lawsuits that seek redress when an AI self-driving car gets into a car crash or other car incident.

You can readily bet that the notion of intent is going to be raised.

Right now, the matter remains open-ended.

For details about prevalence induced behavior, see my explanation here: https://aitrends.com/ai-insider/prevalence-induced-behavior-and-ai-self-driving-cars/

On the difficulties of trying to achieve one-shot Machine Learning, here’s my indication: https://www.aitrends.com/ai-insider/seeking-one-shot-machine-learning-the-case-of-ai-self-driving-cars/

To understand the nature of undue self-imposed AI constraints, see my analysis: https://www.aitrends.com/ai-insider/self-imposed-undue-ai-constraints-and-ai-autonomous-cars/

Conclusion

If you strictly adhere to the assumption that intent is a mental activity or form of mental processing, you can presumably stand on that high ground and assert that the AI of today is not akin at all to human mental prowess.

On the other hand, if you are willing to stretch the notion of mental processing to encompass the AI systems of today, it does seem to open the door to questions about intent.

Intent is a multi-faceted topic, ranging across many disciplines including AI, neuroscience, cognitive science, psychology, sociology, law, and other domains. One aspect that seems clear cut is that the question of intent and how it relates to AI will continue to be intentionally a matter of great and vital concern.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]