Global Moral Ethics Variations and AI: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

We are not all the same. In Brazil, people eat winged queen ants, fried or dipped in chocolate. In rural Ghana, people eat termites, which provide proteins, fats, and oils in their diets. Thailand is known for munching on grasshoppers and crickets, in much the same manner that Americans might snack on nuts and potato chips. Generally, things that people eat in one part of the world can be considered icky in another part of the world. Your sensibilities about what is okay to eat and what is verboten or repulsive to eat are greatly shaped by your cultural norms.

Let’s agree then that there are international differences among peoples. There is no single food-eating code that the entire world has agreed to abide by. Is it wrong to eat termites or ants, in the sense that if your cultural norm is to not eat those creatures, must it be “wrong” for other peoples to do so? You might sneer at such eating habits, and yet if you routinely eat chicken or burgers, why isn’t it equally permissible for others to look down upon your choice of food? Perhaps they consider those chicken sandwiches you devour to be outlandish, outrageous, and out-of-sorts.

You might say that we are making ethical or moral decisions about what we believe is proper to eat and what is not. One dimension of this ethical or moral judgment is based on what your cultural norm consists of. Another dimension could be to include a scientific basis, such as asserting that one type of food has more dietary advantages than another. There is an economic dimension too, since the economically viable choices may be based on what resources exist near the people consuming the items, who then choose to eat whatever has the lower cost to obtain.

Eating is actually serious business. The will and strength of the people can greatly depend upon their stomachs. There are many people in the world that do not get enough food, or they get food that is insufficient for sustainable long-term health. It is easy to take food for granted in some parts of the world where it is relatively plentiful and affordable. Food is a basic sustenance of life. You could say that it has life-or-death consequences, though it can be hard to see that aspect on a day-to-day basis and it is not necessarily obvious to the eye unless you are among those that do not have food or have inadequate kinds of food.

I bring up the ethical underpinnings of food to draw attention to something else that also involves ethical and moral elements, but which at first glance might not seem to.

Automated systems and the emergence of widespread applications of Artificial Intelligence (AI) are also laden with ethical and moral conundrums.

Most AI developers are steeped in the technology of trying to craft AI applications, and the ethical and moral elements are not quite so apparent to them. When you are challenged with getting that complex Machine Learning or Deep Learning system to work correctly, your focus becomes solving that problem. It’s what is exciting to do, and through your training and education the technology is usually your primary focus.

When I used to be a university professor teaching computer science and AI classes, I found that trying to include aspects of the ethical or moral considerations often generated backlash, in spite of the rather bland manner of simply raising awareness that the tech being built could have ethical and moral consequences. The mainstay of the backlash was that every minute of class time spent discussing the ethical or moral aspects was a minute less devoted to honing the technical skills and capabilities of the students. The key, I was told, was to ensure the students had the highest and purest form of technical skills, and the assumption was that any ethical or moral elements involved would either be self-evident to them or come up later on, once they became practitioners of their craft.

Today, we’ve seen backlash against some of the major social media firms and online search firms for how their technology seems imbued with ethical or moral aspects. At times, these firms have offered that they are merely technologists and that the technology speaks for itself, so to speak. Even if one assumes that the AI developers weren’t purposely embedding ethical and moral sentiments, that does not provide an escape from the fact that those embeddings may exist. In other words, whether purposely placed or not, if they are there, the rest of the world will assert that something needs to be done about them.

And so there is a move afoot to inspire AI developers, and the firms making and promulgating AI systems, to become more cognizant of the ethical and moral elements in such systems. For those that didn’t think about it before and merely let things happen by chance or happenstance, this kind of out-of-mind rationalization is gradually disappearing as an excuse for producing an AI system that does have ethical or moral elements yet for which no overt effort was made to contend with them.

For my article about calls for transparency in AI systems, see: https://www.aitrends.com/selfdrivingcars/algorithmic-transparency-self-driving-cars-call-action/

For the potential importance of internal AI naysayers, see my article: https://www.aitrends.com/selfdrivingcars/internal-naysayers-and-ai-self-driving-cars/

For the emergence of ethics review boards related to AI systems, see my article: https://www.aitrends.com/selfdrivingcars/ethics-review-boards-and-ai-self-driving-cars/

For my article about how AI developer groupthink can go awry, see: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

Let’s combine the notion that AI systems have ethical or moral elements and/or consequences with the notion that there are international differences in ethics and moral choices and preferences.

If you are an AI developer in country X, and you are developing an AI system, you might fall into the mental trap of crafting that AI system based on your own cultural norms from country X. This means that you might by default be embedding into the AI system the ethics or moral elements that are, let’s say, acceptable in country X.

At first, you might not even notice this. You are doing it without any particular conscious thought or attempt to bias the AI system. It is merely a natural consequence of your ingrained cultural norms as a member of country X. It would be the same as building a system whose list of proper foods to eat includes things like chicken and burgers. It doesn’t even occur to you to add things like ants or termites to the list. In this case, you’ve silently and unknowingly carried your cultural norm into the AI system.

I’ve developed quite a number of global systems that had to work throughout the world, and in so doing, I’ve often been faced with taking an existing system that was successful in, say, the United States and trying to make it usable in other countries too. It can be challenging to retrofit something to accommodate other cultures and peoples. The hardened, concrete-like features and assumptions in an AI system can be so deeply embedded that you almost need to start over, rather than simply trying to make adjustments here and there.

I’ve written and spoken extensively about the internationalizing of AI, in which the ethics and morals dimension is often regrettably neglected by AI developers and AI firms. It is relatively easy to modify an AI system so that it makes use of another language, such as switching it from using English to using Spanish or German. You can also relatively easily change dollar amounts into other forms of currencies. These are the somewhat obvious go-to aspects when trying to internationalize software.

For my article about internationalizing AI, see: https://www.aitrends.com/selfdrivingcars/internationalizing-ai-self-driving-cars/

Ferreting Out Deeply Embedded Ethics and Morals Elements

The tricky part is ferreting out the ethics and morals elements that are perhaps deeply embedded into the AI system.

You need to figure out what those elements are, which might never have come up previously regarding the system, and therefore the initial hunch is that there aren’t any such embeddings. Usually, once the realization becomes more apparent that there are such embeddings, it then becomes an arduous chore of identifying where those embeddings are, along with what kind of effort and cost will be required to change them.

Even more difficult is often deciding what to change those embeddings to, namely what the appropriate target set of ethics and morals embeddings should be.

Part of the reason that figuring out the desired target of ethics and moral embeddings is hard is that you often didn’t do so at the start. In other words, you never initially had to endure the difficulty of trying to figure out what ethics and moral embeddings you were going to put into the AI system. As such, now that you’ve found them, trying to decide how to change them will finally bring to the surface the hard choices that need to be made.

There is another factor that comes into play, namely whether the AI system is a real-time one and whether it has any serious or severe consequences in what it does. When the AI system operates in real-time and has potential life-or-death choices to make, and this also dovetails into the ethics or moral embeddings realm, it is a twofer. The ethics or moral embeddings take on greater significance, whether the AI developer realizes it or not, because life-or-death results can occur as a consequence of those hidden ethics or morals embeddings.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. Auto makers and tech firms are faced with the dilemma of how to have the AI make life-or-death driving choices, and these choices could be construed as being based on ethics or morals elements, of which those can differ by country and culture.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a simplified illustrative code sketch follows the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
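
To make the flow of those stages more concrete, here is a deliberately simplified, illustrative Python sketch of how they might be chained together in a single driving cycle. Every class, function, and threshold below is hypothetical and invented for illustration; it is not drawn from any actual self-driving stack.

from dataclasses import dataclass, field

@dataclass
class Detection:
    kind: str          # e.g., "pedestrian", "vehicle", "debris"
    distance_m: float  # estimated distance from the car

@dataclass
class WorldModel:
    objects: list = field(default_factory=list)

    def update(self, detections):
        # Virtual world model updating: keep the latest view of nearby objects
        self.objects = detections

def collect_and_interpret(sensor_frames):
    # Sensor data collection and interpretation (stubbed)
    return [Detection(**frame) for frame in sensor_frames]

def fuse(detections):
    # Sensor fusion: trivially sorted by distance here; real fusion reconciles
    # overlapping camera, radar, and LIDAR readings into single tracked objects
    return sorted(detections, key=lambda d: d.distance_m)

def plan_action(world):
    # AI action planning: brake if anything is close, otherwise hold course
    if any(obj.distance_m < 20.0 for obj in world.objects):
        return {"steering": 0.0, "throttle": 0.0, "brake": 1.0}
    return {"steering": 0.0, "throttle": 0.3, "brake": 0.0}

def issue_controls(command):
    # Car controls command issuance (stubbed as a print statement)
    print("issuing:", command)

world = WorldModel()
frames = [{"kind": "pedestrian", "distance_m": 15.0},
          {"kind": "vehicle", "distance_m": 40.0}]
world.update(fuse(collect_and_interpret(frames)))
issue_controls(plan_action(world))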

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of ethics and moral elements embedded in AI systems, let’s take a closer look at how this plays out in the case of AI self-driving cars and especially in a global context.

Those within the self-driving car industry are generally aware of something that ethicists have been bandying about called the Trolley problem.

Philosophers and ethicists have been using the Trolley problem as a mental experiment to try and explore the role of ethics in our daily lives. In its simplest version, the Trolley problem is that you are standing next to a train track and the train is barreling along and heading to a juncture where it can take one of two paths. In one path, it will ultimately strike and kill five people that are stranded on the train tracks. On the other path there is one person. You have access to a track switch that will divert the train from the five people and instead steer it into the one person. Would you do so? Should you do so?

Some say that of course you should steer the train toward the one person and away from the five people.

The answer is “obvious” because you are saving a net of four lives, the difference between killing the one person and saving the five. Indeed, some believe that the problem has such an apparent answer that there is nothing ethically ambiguous about it at all.

Ethicists have tried numerous variations to help gauge what the range and nature of our ethical decision-making is. For example, suppose I told you that the one person was Einstein and the five people were all evil serial killers. Would it still be the case that the saving of the five and the killing of the one is so easily ascertained by the sheer number of lives involved?

Another variable manipulated in this mental ethical experiment involves whether the train is by-default going toward the five people or whether it is by-default going toward the one person.

Why does this make a difference? In the case of the train by default heading toward the five people, you must take an overt action to avoid this calamity and pull the switch to divert the train toward the one person. If you take no action, the train is going to kill the five people.

Suppose instead that the train was by default heading toward the one person. If you decide to take no action, you have already in essence saved the five people, and only if you actually took any action would the five be killed. Notice how this shifts the nature of the ethical dilemma. Your action or inaction will differ depending upon the scenario.

We are on the verge of asking the same ethical questions of AI self-driving cars. I say on the verge, but the reality is that we are already immersed in this ethical milieu and just don’t realize that we are. What actions do we as a society believe that a self-driving car should take to avoid crashes or other such driving calamities? Does the Artificial Intelligence that is driving the self-driving car have any responsibility for its actions?

One might argue that the AI is no different than what we expect of a human driver. The AI needs to be able to make ethical decisions, whether explicitly or not, and ultimately have some if not all responsibility for the driving of the car.

Let’s take a look at an example.

Suppose a self-driving car is heading down a neighborhood street. There are five people in the car. A child suddenly darts out from the sidewalk and into the street. Assume that the self-driving car is able to detect that the child has indeed come into the street.

The AI self-driving car is now confronted with an ethical dilemma akin to the Trolley problem. The AI of the self-driving car can choose to hit the child, likely killing the child, and save the five people in the car, since they will be rocked by the accident but not harmed; or the self-driving car’s AI can swerve to avoid the child, but doing so puts the self-driving car onto a path into a concrete wall and will likely lead to the harm or even death of many or perhaps all of the five people in the car. What should the AI do?

Similar to the Trolley problem, we can make variants of this child-hitting problem. We can make it that the default is that the five will not be killed and so the AI must take an action to avoid the five and kill the one. Or, we can make the default that the AI must take action to avoid the one and thus kill the five. We are assuming that the AI is “knowingly” involved in this dilemma, meaning that it realizes the potential consequences.
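To see how the action-versus-inaction framing can mechanically change an outcome, here is a small, purely illustrative Python sketch of a crude "minimize expected fatalities" rule. The option names, fatality estimates, and the notion of an action penalty are all invented for illustration; no actual planner works this simply.

def choose(default_option, options, action_penalty=0.0):
    # options maps each maneuver to its expected fatalities
    def cost(name):
        harm = options[name]
        # Penalize overt action: any maneuver other than the default costs extra
        return harm + (action_penalty if name != default_option else 0.0)
    return min(options, key=cost)

scenario = {
    "stay_course": 1.0,  # expected fatalities if the car holds its line (the child)
    "swerve": 5.0,       # expected fatalities if it swerves into the wall (the occupants)
}

print(choose("stay_course", scenario))                  # stay_course: fewest deaths
print(choose("swerve", scenario))                       # stay_course: still fewest deaths
print(choose("swerve", scenario, action_penalty=10.0))  # swerve: inaction now prevails despite more deaths

The point is not that this arithmetic is the right rule; it is that whatever arithmetic ends up in the system, whether chosen explicitly or arrived at by default, is itself an ethical stance.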

When people are asked what they would do, the answer you get will greatly depend upon how you’ve asked the question.

Abstracting Vs. Naming Individuals in an Ethical Dilemma

One of the most significant factors that seems to alter a person’s answer is whether you depict the problem in an abstract way without offering any names per se versus if you tell the person that they or someone they know is involved in the scenario.

In the case of the problem being abstract, the person seems likely to answer in a manner that offers the least number of deaths that might arise. If you tell the person that they are let’s say inside the self-driving car, they tend to shift their answer to aim at having the car occupants survive. If you tell the person they are outside the self-driving car and standing on the street, and will be run over, they tend to express that the AI self-driving car should swerve, even if it means the likely death of some or all of the self-driving car occupants.

I mention this important point because there are a lot of these kinds of polls and surveys that seem to be arising lately, partially because AI self-driving cars continue to increase in attention to society, and the manner of how the question is asked can dramatically alter the poll or survey results. This explains too why one poll or survey appears to at times have quite different results than another.

For my article about the trust perceptions of AI self-driving cars by the public, see:https://www.aitrends.com/selfdrivingcars/roller-coaster-public-perception-ai-self-driving-cars/

For the rise of public shaming of AI self-driving cars via social media, see my article:https://www.aitrends.com/selfdrivingcars/public-shaming-of-ai-systems-the-case-of-ai-self-driving-cars/

You also need to consider who is answering these poll or survey questions.

There is a famous example of how you can inadvertently enlist bias into a survey or poll by whom you select to take it.

In 1936, one of the largest ever at-the-time polls was conducted by a highly respected magazine called The Literary Digest, involving calling nearly 2 ½ million people in the USA to ask them whether they were going to vote for Alfred Landon or Franklin D. Roosevelt for president. The poll results leaned toward Landon and thus The Literary Digest predicted loudly that Landon would win (he did not).

There were at least two problems with the survey approach.

One is that they used a telephone as the medium to reach people, but at the time those that could afford to own a phone were generally the upper-income of society and therefore the survey only got their opinions, having omitted much of the bulk of the voters. Secondly, they started with a list of 10 million names and were only able to reach about one-firth, which implies a non-response bias. In other words, they only talked with those that happened to answer the phone and failed to converse with those that did not happen to answer the phone. It could be that those that answered the phone were a select segment of the larger group for which the survey had hoped to reach and thusly biased the results accordingly.

I hope that you will keep these facets in mind whenever hearing about or reading about a survey of what people say they would do when driving a car. How were the people contacted? What was the depiction of the scenarios? What was the wording of the questions? Was there a nonresponse bias? Was there a selection bias? And so on.

Another facet involves whether or not the people responding to the questions take the poll or survey seriously. If someone perceives the questions to be silly or inconsequential, they might answer off-the-cuff or maybe even answer in a manner intended to purposely shock the responses or distort the results. You have to consider the motivation and sincerity of those responding.

In the case of AI self-driving cars, there has been an ongoing large-scale effort to try and get a handle on the ethics and moral aspects of making choices when driving a car, via an online experiment referred to as the Moral Machine experiment.

A recent recap of the results accumulated by the online experiment were described in an issue of Nature magazine and indicated that around 2.3 million people had taken the survey. The survey presented various scenarios akin to the Trolley problem and asked the survey respondent what action they would take. There were over 40 million “decisions” that these two million or so respondents rendered in undertaking the survey. Plus, it was undertaken by respondents from 233 countries and territories.

Before I go over the results, I’d like to remind you of the various limitations and concerns about any such kind of survey. Those that went to the trouble to do the online survey were a self-selected segment of society. They had to have online access, which not everyone in the world yet has. They had to be aware that the online survey existed, of which not many people that are online would have known about. They had to be willing to take the time needed to complete the survey.  Etc.

We also need to guess that they hopefully took the survey seriously, but we cannot know for sure. How many of the respondents thought it was a kind of game and didn’t care much about how they answered? How many answered by just clicking buttons and did not give due and somber thought to their answers? How many would change their answers if we altered the depictions of the scenarios and got them to assume that they themselves or a dear loved one was involved in the scenarios?

It’s up to you whether you want to toss out the baby with the bathwater in the sense of opting to disregard entirely the results of this interesting online experiment. Admittedly it is hard to just place it aside, given the large number of respondents. Of course, merely that it garnered a lot of responses does not ergo make it valid. You can always get Garbage-In Garbage-Out (GIGO), no matter whether you have a small number or a vast number of responses.

Similar to the Trolley problem, the respondents were confronted with an unavoidable car accident that was going to occur. They were to indicate how an autonomous AI self-driving car should react. I point out this facet since many studies have tended to focus on what the person would do, or what the person thinks other people ought to do, and not per se on what the AI should do.

A fundamental question to be pondered is whether people want the AI to do something other than what they would want people to do.

Often times, these studies assume that if you say that the AI should swerve or not swerve, you are presumably also implying that if it was a person in lieu of the AI that was driving the car, the person is supposed to take that same action. But, perhaps people perceive that the AI should do something for which they don’t believe people would do, or maybe even could do.

I’ll give you an extreme example, which might seem contrived, but please accept the example to serve as a showcase of how there might be a different kind of viewpoint about the AI as a driver versus a person as a driver. I tell someone that the scenario involves a parent driving a car, meanwhile the only daughter of the parent has wandered into the street, and the parent regrettably has only a split-second to decide whether to swerve and ram into a wall that will end-up killing the parent, yet doing so will spare the daughter (otherwise, the car will ram and kill the daughter).

That’s a tough one, I’d dare say, at least for most people. Can you tell a parent to proceed with killing their own child? I added too that it was the only daughter, which presumably might further increase the agony of the situation.

Let’s now augment the scenario and say that the car contains another person. We now have two people in the car, and one person out on the street. If the parent was the only person in the car, I suppose it might be “easier” to say that the parent would or should sacrifice themselves for the life of their child. Now with the change in the scenario, the parent is going to have to make a decision that will also kill the passenger in the car.

Here’s where the AI as a driver might enter into the picture. Would your answer about whether the parent should swerve the car differ if the AI was driving the car in this augmented version of the scenario?

If the AI was driving the car and there were no human occupants, I’d suppose we would all likely agree that the AI ought to swerve the car, even if it means smashing into a wall and destroying the car and the AI. Until or if we ever have sentient AI, I don’t think we are willing to equate the AI as somehow an equivalent to a human life.

For the potential of the AI singularity, see my article: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For the potential rise of super-intelligent AI, see my article: https://www.aitrends.com/selfdrivingcars/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

For my article about the Turing Test of AI, see: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

If there is one human passenger in the self-driving car, this implies that the AI will need to make a decision about whether to spare the life of the passenger or to spare the life of the child. Is your answer different if the driver was the parent? I suppose you could say that the case of the parent with a passenger involves two human lives inside the car, while the non-AI instance of the parent driving the car does involve two human lives inside the car.

Of course, the AI driving in a true Level 5 AI self-driving car means that we won’t have a human driver as a count of the number of humans inside the car. This adds another twist too to the scenarios. It means that you can have a car that contains only children. I mention this because the usual scenario about the car swerving involves having one or more adults in the car, which would be required in the less than Level 5 scenarios.

In any case, let’s try to even out the body counts in case that’s your primary focus on making a decision. We’ll put the parent in the self-driving car as a passenger, and the AI is driving the car. Should the AI swerve to save the child on the street and in which case it kills the parent?

Would your answer change if I removed the aspect that it was a child of the parent that was standing on the street and said it was some child that the parent did not know? Suppose I said the person standing on the street was an adult and not a child? Suppose I told you that the parent was standing in the street and the child of that parent was in the AI self-driving car?

As you can see, there are a dizzying number of variants and each such variant can potentially change the answer that you might give.

For the large-scale online experiment, here’s the kinds of scenarios it used:

  •         Sparing humans versus sparing animals that are presumed to be pets
  •         Staying on course straight ahead versus swerving away
  •         Sparing passengers inside the car versus pedestrians on the roadway
  •         Sparing more human lives versus fewer human lives
  •         Sparing males versus females
  •         Sparing young people versus more elderly people
  •         Sparing legally-crossing pedestrians versus illegally jaywalking pedestrians
  •         Sparing those that appear to be physically fit versus those appearing to be less fit
  •         Sparing those with seemingly higher social status versus those with seemingly lower status

They also added aspects such as in some cases the pretend people depicted in the scenarios were labeled as being medial doctors, or perhaps wanted criminals, or stating that a woman was pregnant, and so on.

These factors were combined in a manner to provide 13 accident pending scenarios to each respondent.

There was also an attempt to collect demographic data directly from the respondents, such as their gender, their age, income, education level, religious affiliation, political preference, and so on.

What makes this study rather special, besides the large-scale nature of it, involves the aspect that the online access was available globally.

This potentially provides a glimpse into the international differences that might come to play in the Trolley problem answers. To-date, most studies have tended to be done within a particular country. As such, it has made it harder to try and compare across countries, which means it has been difficult to compare across cultures, which means it likewise has tended to be difficult to compare across ethics and moral norms.

As an aside, I am not saying that a country is always and only one set of ethics and moral norms. Obviously, a country can contain a diversity of ethics and moral norms. Nonetheless, one could suggest that by-and-large a country in the aggregate is likely to exhibit an overall set of ethics and norms.

The researchers used a statistical technique known as the Average Marginal Component Effect (AMCE) to study the attributes and their impacts. You can quibble about the use of this particular statistical technique, though I’d argue that there are more pronounced quibbles about the selection biases and other factors that more worthwhile to quibble about.

Well, you might wonder, what did the results seem to show?

Humans Over Pets, for the Most Part

Respondents tended to spare humans over pets.

I know you might think it should be 100% of humans over pets, but that’s not the case. This could be interpreted to suggest that the life of an animal is considered by some cultures and ethics/morals as the equal to a human. Or, it could be that some weren’t paying attention to the scenario. Or, it could be that the respondent was fooling around. There are a multitude of interpretations.

You might find of interest that Germany had undertaken a study in 2017 that produced the German Ethics Commission on Automated and Connected Driving report, and rule #7 states that a human life in these kinds of AI self-driving car scenarios is supposed to have a higher priority than do animals.

Should that be a universal principle adopted worldwide and be considered the standard for all AI self-driving cars, wherever those AI self-driving cars might be deployed?

For some, this seems like a no-brainer rule and I’m betting they would say that of course such a rule should be adopted. I’d dare say though that you might find that not everyone agrees with that type of rule.

Overall, these kinds of rules are very hard to get people to discuss, let alone to reach agreement about.

For AI developers, they find themselves between a rock and a hard place. On the one hand, no one seems quite willing to come up with such rules, and yet the AI developers are either by default or by intent going to be embedding such “rules” into their AI systems of their self-driving cars. Down the road (pun!), there will likely be public backlash about how these rules got decided and why they are inside of these AI systems.
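One way out of the implicit-embedding trap is to make such rules explicit and inspectable. Here is a hypothetical sketch of what an explicit, machine-readable ethics policy might look like; the field names and values are invented for illustration and do not reflect any actual auto maker's policy or any regulatory standard.

ETHICS_POLICY = {
    "policy_version": "0.1-draft",
    "jurisdiction": "example-only",
    "humans_over_animals": True,        # e.g., in the spirit of the German commission's rule #7
    "count_lives_equally": True,        # no weighting by age, fitness, or social status
    "consider_age": False,              # leaving age out is itself a rule (more on this below)
    "action_vs_inaction_penalty": 0.0,  # no extra penalty for overt maneuvers
}

def disclose(policy):
    # A disclosure statement could simply enumerate the embedded assumptions
    for key, value in policy.items():
        print(f"{key}: {value}")

disclose(ETHICS_POLICY)

Something of this sort could also serve as the basis for the disclosure statements discussed shortly.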

For regulations about AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For the burnout of AI developers, see my article: https://www.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For reverse engineering the AI of self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/

For my article about ridesharing and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

The auto makers and tech firms would likely say that if they waited to produce AI self-driving cars until the world caught up with figuring out these ethics/morals rules, we probably wouldn’t have AI self-driving cars until a hundred years from now, if ever, since you would have a devil of a time getting people to come together and reach agreement on these rather thorny matters.

Meanwhile, the push and urge to move forward with AI self-driving cars continues. Some suggest that AI self-driving cars need to have a disclosure available as to what assumptions were made in the AI in terms of these kinds of ethics/morals rules. Presumably, if you buy a Level 5 AI self-driving car, you ought to get a full disclosure statement that informs you about these embedded rules.

What about when you get into a ridesharing AI self-driving car?

Some would say that you ought to receive a list of the same kinds of disclosures. Since your life and the lives of others are at stake, you ought to be informed as to what the AI self-driving car is going to potentially do. You might choose to use someone else’s ridesharing AI self-driving car that has a different set of ethics/morals rules, because it better aligns with your own viewpoints.

Indeed, it is believed that ultimately we might see AI self-driving cars being marketed based on the kinds of ethics/morals rules that a particular brand or model encompasses. If you want the version that considers animals to be equal to humans, you can get auto maker Y’s brand or model; otherwise, you’d get auto maker Z’s brand or model.

I realize that some would claim that with Over-the-Air (OTA) electronic communication, the auto maker or tech firm can presumably just send an update or patch to your AI self-driving car via the cloud so that it embodies whatever set of ethics/morals rules you prefer. I don’t think this is going to be as easy as it sounds. Setting aside the technological aspect, which can be figured out (though for most of today’s AI self-driving cars it’s going to be quite a retrofit to make this viable), you have other societal questions to deal with.

For more about OTA, see my article: https://www.aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For the affordability of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/affordability-of-ai-self-driving-cars/

For my article about the marketing of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

For my argument that AI self-driving cars won’t be an economic commodity, see: https://www.aitrends.com/selfdrivingcars/economic-commodity-debate-the-case-of-ai-self-driving-cars/

Maybe you live in a community that believes humans and animals should be considered equal. Maybe you don’t see things this way.

Meanwhile, you purchase an AI self-driving car that has an embedded rule that humans are a higher priority than animals. This aligns with your personal sense of ethics and morals. You want to have your AI self-driving car parked at your home in your community and have it drive you throughout your community. You also want to make some money with your AI self-driving car and so you have it work as a ridesharing AI self-driving car when you are not using it.

The community bans the use of that particular model/brand of AI self-driving car. They won’t let it be used on their roads. Yikes, you are caught in quite a bind. Even if the auto maker or tech firm has an easy plug-in that can be sent via OTA to implant AI rules about humans and animals being considered equals, you don’t share that belief.

I hope I’ve made the case that we are heading towards a showdown about the ethics/morals embedded rules in AI self-driving cars. It isn’t happening now because we don’t have true Level 5 AI self-driving cars. Once we do have them, it will be a while before they become prevalent. My guess is that no one is going to be willing to put up much effort and energy to consider these matters until it becomes day-to-day reality and people realize what is occurring on their streets, under their noses, and within their eyesight.

Note that I got us into this whole back-and-forth discussion merely via the topic of the online experiment responses regarding humans versus animals. Imagine how many other such globally variant perspectives and issues there are that have yet to be identified and debated!

Let’s take a look at some more results of the Moral Machine online experiment.

Respondents tended to spare humans by saving more lives rather than fewer.

This is the classic viewpoint of all human lives being equal and so it becomes whether or not the number of lives lost can be made less than the number of lives saved. As mentioned earlier, this can be potentially altered in terms of responses based on whether the person believes themselves to be in the “lost” versus the “saved” segment of the scenario. It can also differ if a loved one or someone that you know is considered in one of the segments or the other.

Another factor can be the age of the people in the pretend scenarios.

Generally, the respondents tended to spare a baby, or a little girl, or a little boy, more so than adults.

For some of you, depending upon your culture and ethics/morals, you might contend that it is best to spare a child over an adult, perhaps because you might say that the adult has already lived their life and the child has yet to do so. Or it could be that you simply believe longevity is the key, and an adult statistically has fewer years left to live than a child.

I’d bet that there are others of you that, depending upon your culture and ethics/morals, would assert that the adult should usually be the one spared. The adult can readily produce another child. Historically, children have been considered at risk, and larger birth rates were a way of coping with the frequent perishing of children; some say this is why birth rates have been declining in industrialized nations, where child survival rates tend to be higher.

I am not trying to resolve the question about age as a factor. I am instead attempting to emphasize that it is yet another unsolved problem. It is unsolved in the sense that an AI developer seemingly has no means to know how they should direct the AI system to act or react in such circumstances.

An AI self-driving car is driving down the street. Via the cameras and visual image processing, it detects that a baby has crawled into the street. At this juncture, should the AI consider this to be a human and set aside the age aspect, meaning ignore that it is a baby? Of course, when I say ignore, don’t go whole hog, in the sense that the AI ought to be programmed to realize that a baby crawls and doesn’t run, and therefore be able to predict the movements of the baby.

I am saying “ignore” in the sense that if the AI needs to balance the lives of a choice about swerving the self-driving car, and if there are say two passengers in the self-driving car, it now has a simple math of one human in the street versus two humans inside the self-driving car. Suppose too that the AI has already scanned the interior and has detected that the two passengers are adults.

Once again, we need to ask: should the AI not take notice of age as a factor and instead merely count people as people?

We have returned again to the “rules” that would be embedded into the AI system. For those of you that say there aren’t any rules in your AI system, this is a false or at best misleading claim. The omission of a rule means that the AI is going to end up doing something anyway, and that unspecified “rule” is there whether it is explicitly stated or not.

Considering Age as a Factor in AI Action Planning Determinations

Suppose you are an AI developer and your AI system for your brand of self-driving cars merely counts people as people. There is no distinction about age. Guess what, you have a rule! You have left out age as a factor. Thus, you have a rule that people are counted only as people and that age is not considered.

You might complain that you never even contemplated using age. It did not occur to you to ponder whether age should be a factor in your AI system and its action planning determinations. Does this get you off the hook? Sorry, that won’t cut the mustard. There are those that will say you should have considered including age. By default, whether consciously or not, you have made a decision that the AI will not include age when choosing among people caught in an untoward situation.
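Here is a small, illustrative sketch of the point. Both scoring functions below are hypothetical; the first simply counts people, the second weights children more heavily, and the choice between them, or the failure to make a choice at all, is itself an ethics rule.

def harm_score_count_only(ages):
    # Rule in effect: every person counts as 1; age is not a factor
    return len(ages)

def harm_score_age_weighted(ages, child_weight=1.5):
    # Rule in effect: persons under 18 are weighted more heavily (the weight is arbitrary here)
    return sum(child_weight if age < 18 else 1.0 for age in ages)

in_street = [1]      # the baby in the street
in_car = [35, 42]    # the two adult passengers

print(harm_score_count_only(in_street), harm_score_count_only(in_car))      # 1 2
print(harm_score_age_weighted(in_street), harm_score_age_weighted(in_car))  # 1.5 2.0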

Furthermore, imagine that some country decides they want to allow only AI self-driving cars in their country that do take into account the age of a person when making these horrific kinds of untoward decisions. I know some will say they could adjust their AI code with one line and it would then encompass the age factor. I doubt this. The odds are that there is a lot more throughout the AI system that would need to be altered, along with doing careful testing before you deploy such a life-or-death crucial new change.

For my article about AI code obfuscation, see: https://www.aitrends.com/selfdrivingcars/code-obfuscation-for-ai-self-driving-cars/

For the testing and debugging of AI systems, see my article: https://www.aitrends.com/selfdrivingcars/debugging-of-ai-self-driving-cars/

For my article about problems in AI systems, see: https://www.aitrends.com/selfdrivingcars/ghosts-in-ai-self-driving-cars/

For the freezing robot problem, see my article: https://www.aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/

There are a number of other results of the online experiment that are indicative of the difficult AI ethics/morals discussions we have yet to confront.

For example, there was a preference of sparing those more physically fit over those that were less physically fit.

How does that strike you? Some might be enraged. How terrible! Others might try to argue, in a Darwinian way, that those more physically fit are best suited to survive. This is the kind of dividing line that definitely rankles us all and brings to the forefront our ethics/morals mores and preferences.

In statistically analyzing the individuals and their demographics, the researchers claim that there are only marginal differences in the rendered opinions of the respondents. For example, you might assume that perhaps male respondents might tend to prefer to save males more so than females, or maybe save females more so than males, but according to the researchers the individual variations were not striking. They suggest that the individual differences are theoretically interesting and yet not essential for policy making matters.

In terms of countries, the researchers opted to undertake a cluster analysis using Ward’s minimum variance method and Euclidean distances computed over the AMCEs of each country, to see whether there were any significant differences in the country-based results.

They came up with three major clusters, which they named Western, Eastern, and Southern. The Western cluster mainly encompassed countries with Protestant, Catholic, and Orthodox underpinnings, such as the United States and much of Europe. The Eastern cluster consisted of Islamic and Confucian oriented cultures, including Japan, Taiwan, Saudi Arabia, and others. The Southern cluster showed a stronger preference for sparing females in comparison to the Western and Eastern clusters, and encompassed South America, Central America, and others.

For AI self-driving cars, the researchers suggest that this kind of clustering might mean that the AI will need to be adjusted accordingly to those dominant ethics/morals in each respective cluster. If one is to assume that the three clusters are a valid means to consider this problem, it could be handy in that it might imply that there are only three major sets of AI “rules” that would need to be formulated to accommodate much of the globe. This seems like quite wishful thinking and I frankly doubt you can lump things together to make this problem into such an easy solution.
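For those curious about the mechanics, here is an illustrative sketch of the kind of clustering the researchers describe, using Ward's method on Euclidean distances over per-country AMCE profiles. The numbers below are invented placeholders, not the study's actual estimates.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["US", "Germany", "Japan", "Saudi Arabia", "Brazil", "Colombia"]
# Each row is a country; each column is a made-up AMCE
# (e.g., spare-young, spare-more-lives, spare-higher-status)
amce_profiles = np.array([
    [0.45, 0.20, 0.10],
    [0.40, 0.22, 0.08],
    [0.15, 0.18, 0.12],
    [0.12, 0.16, 0.15],
    [0.50, 0.25, 0.30],
    [0.52, 0.24, 0.28],
])

# Ward's minimum variance linkage on Euclidean distances, cut into three clusters
tree = linkage(amce_profiles, method="ward", metric="euclidean")
labels = fcluster(tree, t=3, criterion="maxclust")
print(dict(zip(countries, labels)))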

When I speak at conferences and bring up this topic of the AI ethics/morals underlying split second life-or-death decisions that an AI self-driving car might need to make, I often get various glib replies. Let me share those with you.

One reply is that we can just let people decide for themselves what kind of ethics/moral judgement the AI should make. Rather than trying to come up with overall policies and infusing those into the AI, just let each person decide what they prefer.

I inquire gently about how this would work. I get into a ridesharing AI self-driving car. It is a blank slate about the ethics/rules of what to do when it gets into an untoward situation. Somehow, the AI starts to ask me about my preferences. Do I care about humans versus animals, it asks. Do I care about adults versus children, it asks. Apparently, I am to be walked through a litany of such questions and once I’ve answered the questions, the AI will start to take me on my driving journey.

The person that has brought up this topic will usually say that I’ve been unfair in making it seem like a long wait before the ridesharing car would get underway. It could be that my smartphone would already have my driving preferences and it would convey those to the ridesharing AI self-driving car. Within the time it takes for me to sit down and put on my seat belt, the AI would already know what my preferences are and have them setup for the driving journey.
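For what it's worth, here is a hypothetical sketch of the kind of preference payload a smartphone might hand to a ridesharing AI self-driving car at pickup. No such standard exists today; the field names are invented, and the merging logic is a gross simplification.

rider_preferences = {
    "rider_id": "example-rider-001",
    "humans_over_animals": True,
    "prioritize_children": False,
    "spare_occupants_over_pedestrians": False,
}

vehicle_defaults = {
    "humans_over_animals": True,
    "prioritize_children": True,
    "spare_occupants_over_pedestrians": True,
}

def apply_preferences(vehicle_policy, prefs):
    # Overlay the rider's preferences onto the vehicle's defaults; as discussed next,
    # local law or community rules might forbid some of these overrides
    merged = dict(vehicle_policy)
    merged.update({key: value for key, value in prefs.items() if key in vehicle_policy})
    return merged

print(apply_preferences(vehicle_defaults, rider_preferences))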

This sounds nifty. We are back though to the earlier example of a community that has decided they want to consider humans and animals to be equal in merit. Can I just drive in this AI self-driving car into their community and do so while knowing that my preferences violate their preferences?

My point is that these kinds of preferences are not about things like whether the self-driving car should honk its horn or not. These are life-and-death choices about what the self-driving car will do. It involves not just the person that happens to be in the self-driving car, but also has consequences for anyone else in nearby cars and for pedestrians and others.

Another comment I get is that these are dire driving scenarios that will never arise and the whole ethics/morals question is a bogus topic.

When I gently ask about this claim, the person making the remark will usually say that in the thirty years of their driving a car, they have never encountered such a scenario as having to choose between swerving to hit a child versus ramming into a wall with the car. Never. To them, these are wild conjectures. You might as well be discussing what to do when a meteor from outer space lands smack dab in front of an AI self-driving car. What will the AI do about that?

I could point out that dealing with the sudden appearance of a meteor is actually something the AI system ought to already generally be able to handle. I’m not saying that there are AI developers right now programming self-driving cars to be on the watch for flaming meteors. But if you consider that a meteor is simply an object that has suddenly appeared in front of the AI self-driving car, it could be equated to a tree limb that has been blown down by the wind or to a rooftop satellite dish that came tumbling down because of an earthquake, and these are all aspects that the AI should be able to deal with.

It is debris that has appeared in front of the AI self-driving car. Unless or until human lives enter into the equation, this is just a maneuverability matter of the AI trying to safely navigate around or otherwise deal with the object or obstacle.

For more about debris handling, see my article: https://www.aitrends.com/selfdrivingcars/roadway-debris-cognition-self-driving-cars/

For car caravans, see my article: https://www.aitrends.com/selfdrivingcars/traveling-in-vehicle-caravans-and-the-advent-of-ai-self-driving-cars/

For my article about AI driving in hurricanes and other natural disasters, see: https://www.aitrends.com/selfdrivingcars/hurricanes-and-ai-self-driving-cars-plus-other-natural-disasters/

For the importance of AI defensive driving, see my article: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

For the AI pedestrian roadkill problem, see my article: https://www.aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/

Anyway, I digress. Let’s get back to the notion that these scenarios of having to choose between one terribly bad outcome and another terribly bad outcome are allegedly not realistic and won’t happen.

I try to emphasize to the person that just because in their thirty years of driving that they have not encountered such a situation is not appropriate cause to extrapolate that it never happens anywhere and to anyone else. Let’s suppose the person drives around 1,000 miles per month, which is the overall average in the United States. This means that over 30 years the person has driven perhaps 30 x 1000 x 12 miles, which calculates to about 360,000 miles in their lifetime so far.

We’d likely want to find out where this person has been driving. If they are driving in the same places most of the time, that’s another factor as to whether or not they might experience these kinds of scenarios. In some areas it could happen frequently, in other areas only once in a blue moon.

The thing is that there are about 3.22 trillion miles driven in the United States each year (according to the Federal Highway Administration). Over thirty years we might suggest that comes to about 100 trillion miles of driving. This particular person has driven a teensy-weensy fraction of those miles. Their assumption that such dire situations never happen, based solely on their own experience, is a rather bold claim when compared to all of the driving that takes place.
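A quick back-of-the-envelope check of those figures, using only the numbers already cited above:

personal_miles = 30 * 12 * 1_000       # 30 years at roughly 1,000 miles per month
national_miles_30yr = 3.22e12 * 30     # roughly 3.22 trillion miles per year, over 30 years
print(personal_miles)                                  # 360,000 miles
print(f"{personal_miles / national_miles_30yr:.1e}")   # roughly 3.7e-09 of all miles driven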

A reasonable person would concede that these scenarios can happen, and they are not impossible. The next aspect is to then discuss whether they are probable or only possible. In other words, yes, they can happen, but they are perhaps very rare.

If you are willing to say that they happen, but are rare, you’ve now gotten yourself into a pickle. I mention this because if it can happen and the AI encounters such a situation, what do you want the AI to do? Based on the belief that it rarely happens, are you saying that it is okay if the AI randomly makes a choice or otherwise does nothing systematic to make the choice? I don’t think we would want automation that we know will ultimately encounter dire situations, yet for which we decided not to stipulate what is to occur.

I also would like to clarify that these extreme examples such as the Trolley problem are meant to spark awareness about the aspect that these overall such situations can arise. Don’t become preoccupied with the child in the street and the passengers in the self-driving car as an example. We can come up with many other such examples. Take a situation involving humans inside a car, and have that car come across a pedestrian, or several pedestrians, or a bicyclist, or a bunch of bicyclists, or another car with people in it, and so on.

When you take a moment to consider your daily driving, you are likely to realize that you are quite often making life-or-death driving decisions and that those decisions encompass a kind of moral compass. The moral compass is based on your own personal ethics/morals, along with whatever the stated or implied ethics/morals are in the place that you are driving, and this all gets baked together in your mind as you are driving a car.

I’m not always successful in making the case to such doubters that we need to care about the ethics/morals rules and their embedding into AI systems. A tempest in a teapot is what some seem to believe, no matter what other arguments are presented. There are some too that believe it is a conspiracy of some kind, intended either to hold back the advent of AI self-driving cars or maybe trick us into letting AI self-driving cars determine our ethics/morals for us on their own.

For my article about the AI conspiracy theories, see: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

Conclusion

The advent of AI self-driving cars raises substantive aspects about how the AI will be making split-second decisions of a real-time nature involving multi-ton cars that can cause life-or-death consequences to humans within the self-driving car and for other humans nearby in either other cars or as pedestrians or in other states of movement such as via bicycles, scooters, motorcycles, etc.

We humans make these kinds of judgements while we are driving a car. Society has gotten used to this stream of judgements that we all make. The expectation is that the human driver will use their judgement as shaped around the culture of the place they are driving and as based on the prevalent ethics/morals therein. When someone gets into a car incident and makes such choices, we are often sympathetic to their plight since the person typically had only a split-second to decide what to do.

We aren’t likely to consider that the AI has an excuse that the decision made was time-boxed into a split second. In other words, the AI ought to have beforehand been established to have some set of ethics/morals rules that guide the overarching decision making and then in a moment when a situation arises, we would expect the AI to apply those rules.

You can bet that whenever an AI self-driving car gets into an untoward situation and makes a choice, or by default takes an action that we would consider a form of choice, it is going to be second-guessed by others. Lawyers will line up to go after the auto makers and tech firms and get them to explain how and why the AI did whatever it opted to do.

For my article about product liability and AI self-driving cars, see:  https://www.aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

For the emergence of class action lawsuits and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

The auto makers and tech firms would be wise to systematically pursue the embodiment of ethics/morals rules into their AI systems rather than letting it happen by chance alone. The head-in-the-sand defense is likely to lose support with the courts and the public. From a business and cost perspective, it will be a pay-me-now or pay-me-later kind of aspect for the auto makers, namely either invest now to get this done properly or pay a likely much higher price later for not having done it right at the start.

Another way to consider this matter is to take into account the global market for AI self-driving cars. If you are developing your AI self-driving car just for the U.S. market right now, you’ll later kick yourself for not having put in place core aspects that would have made going global a lot easier, less costly, and more expedient. In that sense, the embodiment of the ethics/morals rules needs to be formulated in a manner that can accommodate different countries and different cultural norms.

The Moral Machine online experiment needs to be taken with a grain of salt. As mentioned, as an experiment it suffers from the usual kinds of maladies that any survey or poll might encounter. Nonetheless, I applaud the effort as a wake-up call to bring attention to a matter that otherwise is going to be sadly untouched until it becomes an utter morass and catastrophe for the emergence of AI self-driving cars. AI self-driving cars are going to be a kind of “moral machine” whether you want to admit it or not. Let’s work on the morality of the moral machine sooner rather than later.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.