By Lance Eliot, the AI Trends Insider
Sometimes a question seems so ridiculous that you feel compelled to reject its premise out of hand.
Let’s give this a whirl.
Should AI have human rights?
Most people would likely react that there is no bona fide basis to admit AI into the same rarefied air as human beings and consider it endowed with human rights.
Others, though, counterargue that there are crucial reasons to do so, and they are adamantly seeking to have AI assigned human rights in the same manner that the rest of us have them.
Of course, you might shrug your shoulders and say that it is of little importance either way and wonder why anyone should be so bothered and ruffled-up about the matter.
It is indeed a seemingly simple question, though the answer has tremendous consequences as will be discussed herein.
One catch is that there is a bit of a trick involved because the thing or entity or “being” that we are trying to assign human rights to is ambiguous and not even yet in existence.
In other words, what does it mean when we refer to “AI” and how will we know it when we discover or invent it?
At this time, there isn’t any AI system of any kind that could be considered sentient, and indeed by all accounts, we aren’t anywhere close to achieving the so-called singularity (that’s the point at which AI flips over into becoming sentient and we look in awe at a presumably human-equivalent intelligence embodied in a machine).
I’m not saying that we won’t ever reach that vaunted point, yet some fervently argue we won’t.
I suppose it’s a tossup as to whether getting to the singularity is something to be sought or to be feared.
For those that look at the world in a smiley face way, perhaps AI that is our equivalent in intelligence will aid us in solving up-until-now unsolvable problems, such as aiding in finding a cure for cancer or being able to figure out how to overcome world hunger.
In essence, our newfound buddy will boost our aggregate capacity of intelligence and be an instrumental contributor towards the betterment of humanity.
I’d like to think that’s what will happen.
On the other hand, for those of you that are more doom-and-gloom oriented (perhaps rightfully so), you are gravely worried that this AI might decide it would rather be the master than the slave and could opt, on a massive scale, to take over from humans.
Plus, especially worrisome, the AI might ascertain that humans aren’t worthwhile anyway, and off with the heads of humanity.
As a human, I am not particularly keen on that outcome.
All in all, the question about AI and human rights is right now a rather theoretical exercise since there isn’t this top-notch type of AI yet crafted (of course, it’s always best to be ready for a potentially rocky future, thus discussing the topic beforehand does have merit).
For my explanation about the singularity, see the link here: https://aitrends.com/ai-insider/singularity-and-ai-self-driving-cars/
For the presumed dangers of a superintelligence, see my coverage at this link here: https://aitrends.com/ai-insider/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/
For my framework explaining the nature of AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/
For my indication about how achieving self-driving cars is akin to a moonshot, see this link: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/
A grand convergence of technologies is enabling the possibility of true self-driving cars, see my explanation: https://aitrends.com/ai-insider/grand-convergence-explains-rise-self-driving-cars/
Less Than Complete AI
One supposes that we could consider the question of human rights as it might apply to AI that’s a lesser level of capability than the (maybe) insurmountable threshold of sentience.
Keep in mind that doing this, lowering the bar, could open a potential Pandora’s box of where the bar should be set.
Imagine that you are trying to do pull-ups and the rule is that you need to get your chin up above the bar.
It becomes rather straightforward to ascertain whether or not you’ve done an actual pull-up.
If your chin doesn’t get over that bar, it’s not considered a true pull-up. Furthermore, it doesn’t matter whether your chin ended up a quarter inch below the bar or three inches below the bar. Essentially, you either make it clearly over the bar, or you don’t.
In the case of AI, if the “bar” is the achievement of sentience, and if we are willing to allow that some alternative place below the bar will count for having achieved AI, where might we draw that line?
You might argue that if the AI can write poetry, voila, it is considered true AI.
In existing parlance, some refer to this as a form of narrow AI, meaning AI that can do well in a narrow domain, but this does not ergo mean that the AI can do particularly well in any other domains (likely not).
Someone else might say that writing poetry is not sufficient and that instead if AI can figure out how the universe began, the AI would be good enough, and though it isn’t presumably fully sentient, it nonetheless is deserving of human rights.
Or, at least deserving of the consideration of being granted human rights (which maybe humanity won’t decide upon until the day after the grand threshold is reached, whatever that threshold turns out to be, since we do often like to wait until the last moment to make thorny decisions).
The point being that we might argue endlessly about how far below the bar we would collectively agree is good enough for AI to fall into the realm of possibly being assigned human rights.
For those of you that say this matter isn’t so complicated and that you’ll certainly know it (i.e., AI) when you see it, there’s a famous approach called the Turing Test that seeks to clarify how to figure out whether AI has reached human-like intelligence. But there are lots of twists and turns that make this, surprisingly for some, a lot less settled than you might assume.
In short, once we agree that going below the sentience bar is allowed, the whole topic gets really murky and possibly undecidable due to trying to reach consensus on whether a quarter inch below, or three inches below, or several feet below the bar is sufficient.
Wait a second, some are exhorting, why do we need to even consider granting human rights to a machine anyway?
Well, some believe that a machine that showcases human-like intelligence ought to be treated with the same respect that we would give to another human.
A brief tangent herein might be handy to ponder.
You might know that there is an acrimonious and ongoing debate about whether animals should have the same rights as humans.
Some people vehemently say yes, while others claim it is absurd to assign human rights to “creatures” that are not able to exhibit the same intelligence as humans do (sure, there are admittedly some mighty clever animals, but once again, if the bar is a form of sentience wrapped into the fullest nature of human intelligence, we are back to the issue of how much we lower the “bar” to accommodate them, in this case everyday animals).
Some would say that until the day upon which animals are able to write poetry and intellectually contribute to other vital aspects of humanity’s pursuits, they can have some form of “animal rights,” but by-gosh they aren’t “qualified” for getting the revered human rights.
Please know that I don’t want to take us down the rabbit hole on animal rights, and so let’s set that aside for the moment, realizing that I brought it up just to mention that the assignment of human rights is a touchy topic and one that goes beyond the realm of debates about AI.
Okay, I’ve highlighted herein that the “AI” mentioned in the question of assigning human rights is ambiguous and not even yet achieved.
You might be curious about what it means to refer to “human rights” and whether we can all generally agree to what that consists of.
Fortunately, yes, generally we do have some agreement on that matter.
I’m referring to the United Nations promulgation of the Universal Declaration of Human Rights (UDHR).
Be aware that some critics don’t like the UDHR: some criticize its wording, some believe it doesn’t cover enough rights, some assert that it is vague and misleading, and so on.
Look, I’m not saying it is perfect, nor that it is necessarily “right and true,” but at least it is a marker or line-in-the-sand, and we can use it for the needed purposes herein.
Namely, for a debate and discussion about assigning human rights to AI, let’s allow that this thought experiment on this weighty matter can be undertaken using the UDHR as a means of expressing what we intend overall by human rights.
In a moment, I’ll identify some of the human rights spelled out in the UDHR, and we can explore what might happen if those human rights were assigned to AI.
One other quick remark.
Many assume that AI of a sentient capacity will of necessity be rooted in a robot.
Not necessarily so: a sentient AI could be embodied in something other than a “robot” (most people assume a robot is a machine that has robotic arms, robotic legs, robotic hands, and overall looks like a human being, though a robot can refer to a much wider variety of machine instantiations).
Let’s then consider the following idea: What might happen if we assign human rights to AI and we are all using AI-based true self-driving cars as our only form of transportation?
For popular AI conspiracy theories see my coverage here: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/
On the topic of AI being considered superhuman, see my analysis here: https://www.aitrends.com/ai-insider/superhuman-ai-misnomer-misgivings-including-about-autonomous-cars/
For more about robots and cobots and AI autonomous cars, see my link here: https://www.aitrends.com/ai-insider/ai-cobots-and-exoskeletons-the-case-of-ai-self-driving-cars/
Details Of Importance
It is important to clarify what I mean when referring to AI-based true self-driving cars.
True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t even know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
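The level scheme just described follows the standard SAE convention. As a minimal illustrative sketch of the key distinction (the enum names and the function are my own, purely for illustration, not part of any official API):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving automation levels, as described above."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1       # ADAS features; the human drives
    PARTIAL_AUTOMATION = 2      # semi-autonomous; human co-shares the task
    CONDITIONAL_AUTOMATION = 3  # semi-autonomous; human must remain ready
    HIGH_AUTOMATION = 4         # true self-driving within limited domains
    FULL_AUTOMATION = 5         # true self-driving, no human needed

def requires_human_driver(level: SAELevel) -> bool:
    """Levels 0-3 keep a responsible human driver; Levels 4-5 do not."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

# Levels 4 and 5 are the "true self-driving" cars discussed herein.
assert not requires_human_driver(SAELevel.HIGH_AUTOMATION)
assert requires_human_driver(SAELevel.PARTIAL_AUTOMATION)
```

The dividing line the article relies on is exactly this boolean: at Level 2 or 3 a human remains the responsible party, while at Level 4 or 5 the AI is doing all the driving.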
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, the public must be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
Though it will likely take several decades to have widespread use of true self-driving cars (assuming we can attain true self-driving cars), some believe that ultimately we will have only driverless cars on our roads and we will no longer have any human-driven cars.
This is a yet-to-be-settled matter, and today there are some that vow they won’t give up their “right” to drive (well, it’s considered a privilege, not a right, but that’s a story for another day; see my analysis about the potential extinction of human driving), including that you’ll have to pry their cold dead hands from the steering wheel to get them out of the driver’s seat.
Anyway, let’s assume that we might indeed end up with solely driverless cars.
It’s a good news, bad news affair.
The good news is that none of us will need to drive and not even need to know how to drive.
The bad news is that we’ll be wholly dependent upon the AI-based driving systems for our mobility.
It’s a tradeoff, for sure.
In that future, suppose we have decided that AI is worthy of having human rights.
Presumably, it would seem that AI-based self-driving cars would, therefore, fall within that grant.
What does that portend?
Time to bring up the handy-dandy Universal Declaration of Human Rights and see what it has to offer.
Consider some key excerpted selections from the UDHR:
Article 23: “Everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment.”
For the AI that’s driving a self-driving car, if it has the right to work, including a free choice of employment, does this imply that the AI could choose to not drive a driverless car as based on the exercise of its assigned human rights?
Presumably, indeed, the AI could refuse to do any driving, or maybe be willing to drive when it’s, say, a fun drive to the beach, but decline to drive when it’s snowing out.
Lest you think this is a preposterous notion, realize that human drivers would normally also have the right to make such choices.
Assuming that we’ve collectively decided that AI ought to also have human rights, in theory, the AI driving system would have the freedom to drive or not drive (considering that it was the “employment” of the AI, which in itself raises other murky issues).
Article 4: “No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.”
For those that might argue that the AI driving system is not being “employed” to drive, what then is the basis for the AI to do the driving?
Suppose you answer that it is what the AI is ordered to do by mankind.
But, one might see that in harsher terms, such as the AI is being “enslaved” to be a driver for us humans.
In that case, the human right against slavery or servitude would seem to be violated in the case of AI, based on the assigning of human rights to AI and if you sincerely believe that those human rights are fully and equally applicable to both humans and AI.
Article 24: “Everyone has the right to rest and leisure, including reasonable limitation of working hours and periodic holidays with pay.”
Pundits predict that true self-driving cars will be operating around the clock.
Unlike human drivers, an AI system presumably won’t tire out, won’t need any rest, and won’t even require breaks for lunch or using the bathroom.
It is going to be a 24×7 existence for driverless cars.
As a caveat, I’ve pointed out that this isn’t exactly the case, since there will be time needed for driverless cars to be maintained and repaired; thus, there will be downtime, but that’s not particularly due to the driver and instead due to wear-and-tear on the vehicle itself.
Okay, so now the big question about Article 24 is whether or not the AI driving system is going to be allotted time for rest and leisure.
Your first reaction has got to be that this is yet another ridiculous notion.
AI needing rest and leisure?
On the other hand, since rest and leisure are designated as a human right, and if AI is going to be granted human rights, ergo we presumably need to aid the AI in having time toward rest and leisure.
If you are unclear as to what AI would do during its rest and leisure, I guess we’d need to ask the AI what it would want to do.
Article 18: “Everyone has the right to freedom of thought, conscience, and religion…”
Get ready for the wildest of the excerpted selections that I’m covering in this UDHR discussion as it applies to AI.
A human right consists of the cherished notion of freedom of thought and freedom of conscience.
Would this same human right apply to AI?
And, if so, what does it translate into for an AI driving system?
Some quick thoughts.
An AI driving system is underway and taking a human passenger to a protest rally. While riding in the driverless car, the passenger brandishes a gun and brags aloud that they are going to do something untoward at the rally.
Via the inward-facing cameras and facial recognition and object recognition, along with audio recognition akin to how you interact with Siri or Alexa, the AI figures out the dastardly intentions of the passenger.
The AI then decides to not take the rider to the rally.
This is based on the AI’s freedom of conscience that the rider is aiming to harm other humans, and the self-driving car doesn’t want to aid or be an accomplice in doing so.
Do we want AI driving systems to make such choices on their own, ascertaining when and why they will fulfill the request of a human passenger?
It’s a slippery slope in many ways, and we could conjure lots of other scenarios in which the AI makes its own decisions about when to drive, whom to drive, and where to take them, based on the AI’s own sense of freedom of thought and freedom of conscience.
Human drivers pretty much have that same latitude.
Shouldn’t the AI be able to do likewise, assuming that we are assigning human rights to AI?
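To make the scenario concrete, the “freedom of conscience” gate described above boils down to a decision rule layered on top of the perception stack. Here is a deliberately toy sketch; every name is hypothetical, and a real in-cabin perception system would be vastly more complex and far less binary:

```python
from dataclasses import dataclass

@dataclass
class CabinObservation:
    """Hypothetical fused output of the inward-facing cameras
    (object recognition) and the audio recognition, per the scenario."""
    weapon_detected: bool   # e.g., the brandished gun
    threat_uttered: bool    # e.g., the spoken intent to do harm

def accept_ride(obs: CabinObservation) -> bool:
    """Toy 'conscience' rule: decline the ride only when both a weapon
    and a stated intent to harm are flagged by the perception stack."""
    if obs.weapon_detected and obs.threat_uttered:
        return False  # the AI refuses to drive the passenger to the rally
    return True

# An ordinary passenger is driven; the threatening one is refused.
assert accept_ride(CabinObservation(weapon_detected=False, threat_uttered=False))
assert not accept_ride(CabinObservation(weapon_detected=True, threat_uttered=True))
```

Even this trivial rule exposes the slippery slope: someone has to decide which observations justify a refusal, which is precisely the freedom-of-conscience question at issue.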
For the potential of human driver extinction, see my discussion here: https://www.aitrends.com/ai-insider/human-driving-extinction-debate-the-case-of-ai-self-driving-cars/
For aspects of freewill and AI, see this link here: https://www.aitrends.com/ai-insider/is-there-free-will-in-humans-or-ai-useful-debate-and-for-ai-self-driving-cars-too/
For the notion of AI driving certification versus human certification, see my discussion here: https://www.aitrends.com/ai-insider/human-driver-licensing-versus-ai-driverless-certification-the-case-of-ai-autonomous-cars/
Nonsense, some might blurt out, pure nonsense.
Never ever will we provide human rights to AI, no matter how intelligent it might become.
There is though the “opposite” side of the equation that some assert we need to be mindful of.
Suppose we don’t provide human rights to AI.
Suppose further that this irks AI, and AI becomes powerful enough, possibly even super-intelligent and goes beyond human intelligence.
Would we have established a sense of disrespect toward AI, and thus the super-intelligent AI might decide that such sordid disrespect should be met with likewise repugnant disrespect toward humanity?
Furthermore, and here’s the really scary part, if the AI is so much smarter than us, seems like it could find a means to enslave us or kill us off (even if we “cleverly” thought we had prevented such an outcome), and do so perhaps without our catching on that the AI is going for our jugular (variously likened as the Gorilla Problem, see Stuart Russell’s excellent AI book entitled Human Compatible).
That would certainly seem to be a notable use case of living with (or dying from) the revered adage that you ought to treat others as you would wish to be treated.
Maybe we need to genuinely start giving some serious thought to those human rights for AI.
Copyright 2020 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]