Algorithmic Transparency for Self-Driving Cars: A Call for Action


By Dr. Lance B. Eliot, the AI Trends Insider  

I was online the other day, trying to figure out whether I could afford an expensive item I was eyeing. The affordability partially depended on aspects such as creditworthiness and other financial factors. There was a system that claimed it could help me determine whether I would be able to purchase the vaunted item. After a few seconds of clicking and whirring, it came back and essentially said no. I pondered this since it seemed counterintuitive; I had expected a yes.

Turns out there was no way to get an explanation of why it had said no. You would think it might at least point out that I had recently bought a yacht and a Learjet (well, not really, but you get my drift), and so maybe I was over-extended on my credit. Nope. There was no provision for any kind of explanation. For all I know, the system had used a random number generator.

When I called the company’s toll-free number, they explained that they take into account at least a zillion factors. I said, OK, tell me how those zillion factors played out in my case. The operator told me they couldn’t possibly go through all zillion factors with me; it was too voluminous. I said, OK, tell me about just one of the factors, any single factor the operator might pluck out of the zillion used. The operator then said they weren’t able to tell me about any of the factors because the method used is proprietary and they can’t reveal their secret formulas. Seems like the conversation should have started there, but I suppose the script they are trained on tries to avoid leading with the “we can’t tell you” response.

I have written many times about the lack of algorithmic transparency that we are increasingly witnessing throughout society.

Secret algorithms underlie decisions like the one in my story about wanting to buy a new item. Some software developers in a backroom put together an algorithm, possibly based on specifications derived from analysts in their firm, and lo and behold the algorithm becomes the ultimate decision maker. Are we to become a society that depends upon algorithms that might be incorrect? Suppose the algorithm has bugs in it? Or suppose it has an inherent bias hidden within its code? There seems to be no recourse for dealing with these inscrutable algorithms, and no means to assure that they are doing what is intended, and that what is intended matches what our laws and society expect to happen.

The Association for Computing Machinery (ACM) has helpfully put together a set of principles on algorithmic transparency. The ACM United States Public Policy Council (USACM) and the ACM Europe Council Policy Committee (EUACM) separately, and then jointly, derived a recommended set of considerations for addressing algorithmic transparency and accountability. Seven key principles are being promulgated.

Computing professionals should be aware of and consider the significance of these principles. They alone, though, cannot fight the good fight of adhering to these principles. It takes a village, so to speak: we also need businesses and business leaders to embrace such principles, and we need regulators to embrace them too. Any weak link in the chain will likely undermine the potential for, and practical implementation of, these principles.

At the Cybernetic Self-Driving Car Institute, we are trying to put these principles into practice, doing so as we are in the midst of developing software and systems for self-driving cars. We call upon all of the automakers and tech companies that are making self-driving cars to also consider and adopt these principles.

It won’t be easy for them to do so. Today’s prevailing mindset is to keep algorithms private and secret, which is much easier than making them transparent. Transparency also carries other risks, so risk avoidance would seem to warrant staying opaque. That, though, may be a false sense of risk avoidance: if a firm’s algorithms are found to be amiss, the cost in the end could be far larger, given the potential for lawsuits, possibly criminal charges depending upon the circumstances, and the public relations blow that could hit the firm.

Automotive industry policymakers should carefully consider these principles. We are still early enough in the evolution of self-driving cars to decide, at the start, how and where algorithmic transparency will occur. Simply hoping that self-driving car makers will opt into these principles on their own is fraught with difficulty and quite unlikely to work. Self-regulation in the self-driving car industry is generally a false hope. It will more than likely require pressure from outside the industry to get it to own up.

Let’s take a look at each of the USACM and EUACM principles and see how each principle applies to self-driving cars.

  1. Awareness

Per USACM/EUACM: “Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.”

Right now, I would gauge that few of the developers of self-driving cars are aware of, or thinking at all about, how their own biases are being carried into self-driving cars, nor how those biases will potentially cause harm to people. The pell-mell race to get to self-driving cars is so frantic that few of the automakers and tech companies are reflecting on the inherent biases going into their systems and AI efforts.

One of the easiest self-driving car biases to point out involves the so-called Trolley Problem. A self-driving car is heading down the road when, say, a child darts into the street. The AI of the self-driving car wants to avoid hitting the child. But suppose the only viable choices are to hit the child or to swerve into a nearby tree, and hitting the tree has a high likelihood of injury or death for the occupants of the self-driving car. What should the AI choose to do?

This is not an abstract question. In our daily driving, we continually make judgments about which way to go, which lane to swerve into, when to hit the brakes, and so on. Many of these situations are clear cut as to what should be done, but many sit in a much grayer area of decision making. A Level 5 self-driving car, one that is driven entirely by the automation and AI, will need to make these kinds of decisions. They will be split-second decisions, yet though the decisions are rendered in a split second, the systems will have been set up beforehand to guide which decision to make.
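
To make this concrete, here is a minimal sketch of how a planner might rank emergency maneuvers by predicted harm. Everything here, the maneuver names, the injury probabilities, the weighting scheme, is a hypothetical illustration rather than any automaker's actual logic; the point is that whoever sets the weights is, in effect, answering the Trolley Problem on behalf of every rider.

```python
# Hypothetical sketch: ranking emergency maneuvers by a weighted harm score.
# All names, probabilities, and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_injury_occupants: float   # estimated probability of occupant injury
    p_injury_pedestrian: float  # estimated probability of pedestrian injury

def harm_score(m: Maneuver, w_occupant: float = 1.0, w_pedestrian: float = 1.0) -> float:
    """Combine predicted injury probabilities into a single score.

    The weights encode an ethical trade-off; setting them is where the
    bias hides.
    """
    return w_occupant * m.p_injury_occupants + w_pedestrian * m.p_injury_pedestrian

candidates = [
    Maneuver("brake_straight", p_injury_occupants=0.05, p_injury_pedestrian=0.60),
    Maneuver("swerve_into_tree", p_injury_occupants=0.70, p_injury_pedestrian=0.01),
]

# The planner simply picks the lowest-scoring maneuver.
chosen = min(candidates, key=harm_score)
print(f"Planner selects: {chosen.name}")
```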

We need to further educate the developers, the automakers, the tech industry, regulators, and the public about how serious it is that self-driving cars will be making these kinds of decisions. Stakeholders of all kinds should be involved in, and worried about, how self-driving cars are going to embody algorithms that take these life-and-death actions.

  2. Access and Redress

Per USACM/EUACM: “Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.”

The self-driving car industry is way behind on this aspect of access and redress. Few of the self-driving car makers are figuring out how humans are going to interact with self-driving cars. Other than issuing a command to drive the car to Monrovia, the makers aren’t considering the other ways in which humans will want, and need, to interact with the AI of the self-driving car.

If my self-driving car opts to take a particular route to my desired destination, I might want to know why it chose to go that specific way. Suppose the self-driving car suddenly swerves and I have no idea why it did so; there should be some mechanism allowing the human to ask the self-driving car why it did what it did. I should be able to question the AI, and even assert some other approach for the AI to take, either in the given situation or in future circumstances.
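
As a rough sketch of what such a mechanism could look like, imagine the vehicle keeping a timestamped log of its decisions and the factors behind them, which an occupant could then query. The class and field names below are illustrative assumptions, not any real automaker's API.

```python
# Hypothetical query interface: log decisions with their stated reasons,
# so a rider can later ask "why did you swerve just then?"
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    timestamp: float
    action: str
    reasons: list  # human-readable factors, e.g. sensor detections

@dataclass
class ExplanationLog:
    records: list = field(default_factory=list)

    def record(self, timestamp: float, action: str, reasons: list) -> None:
        self.records.append(DecisionRecord(timestamp, action, reasons))

    def explain(self, timestamp: float, window: float = 1.0) -> list:
        """Return the decisions (and their reasons) near a moment in time."""
        return [r for r in self.records if abs(r.timestamp - timestamp) <= window]

log = ExplanationLog()
log.record(51720.4, "swerve_left",
           ["lidar: object in lane, 12 m ahead",
            "camera: pedestrian, confidence 0.91"])
for rec in log.explain(51720.0):
    print(rec.action, "because", "; ".join(rec.reasons))
```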

  3. Accountability

Per USACM/EUACM: “Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.”

The self-driving car industry is in the midst of grappling with the accountability issue. Currently, humans get car insurance, and when they get into an accident, that insurance helps cover their personal accountability for the decisions they made as the driver of the car.

For a Level 5 AI self-driving car, there isn’t a human driving the car. Should the human occupant be held accountable for what the AI of the self-driving car does? Most would say that doesn’t seem like a suitable approach. Should the automaker be responsible? The automakers say that doesn’t make sense either, since it could readily put them out of business as the number of claims against them could be astronomical.

Who is to be held accountable for the actions of a self-driving car? That’s the million-dollar question, so to speak.

  4. Explanation

Per USACM/EUACM: “Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.”

At some point, we are going to have self-driving cars that get into accidents. The accidents might involve hitting other self-driving cars, or human-driven cars, or motorcyclists, or pedestrians, etc. Or maybe hitting all of them in one accident.

We are going to want to know what the AI of the self-driving car was doing and how it made the decisions that ultimately got it embroiled in an accident. One of the difficulties in getting a self-driving car to offer an explanation of what it did will be the use of deep learning and artificial neural networks. The complexity of massive neural networks currently makes it unlikely that the system can explain in a logical fashion what it was doing.
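
Even so, an opaque network can be probed from the outside. A crude, hypothetical sketch: perturb one input at a time and observe how the model's output shifts, which hints at which inputs mattered. The braking model below is a toy stand-in for a trained network, invented purely for illustration.

```python
# Perturbation-based probing of an opaque model: vary one input, watch
# how the output decision score moves. The model itself is a toy stand-in.
def opaque_braking_model(inputs: dict) -> float:
    """Stand-in for a trained network: returns a brake-urgency score in [0, 1]."""
    # Hypothetical learned behavior, hard-coded here for illustration.
    return min(1.0, 0.8 * inputs["pedestrian_confidence"]
                    + 0.2 * (1.0 - inputs["distance_m"] / 50.0))

baseline = {"pedestrian_confidence": 0.9, "distance_m": 10.0}
base_score = opaque_braking_model(baseline)

for feature in baseline:
    perturbed = dict(baseline)
    perturbed[feature] *= 0.5  # halve the feature and re-query the model
    delta = base_score - opaque_braking_model(perturbed)
    print(f"{feature}: output shifts by {delta:+.3f} when halved")
```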

  5. Data Provenance

Per USACM/EUACM: “A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.”

There is much ongoing debate in the self-driving car arena about the data being used to train the cars. If you are a particular automaker, such as Tesla, and you are compiling your own data, should the public be allowed to see into that data? Would doing so, though, undermine the automaker’s trade secrets?

Should we be collecting data from all self-driving cars and making it available in some kind of national database that all automakers can use? This would seem to provide the benefit of allowing all self-driving cars to improve based on the collective data. At the same time, there are quite important privacy concerns, since the data could identify where specific individuals drive and how they are driving.
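
A provenance record along the lines the principle describes need not be elaborate. Here is a minimal sketch of the metadata that could accompany each training dataset; the field names and example values are assumptions for illustration, not an industry standard.

```python
# Minimal sketch of a training-data provenance record: where the data came
# from, how it was gathered, and known bias risks. Field names are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    dataset_id: str
    collection_method: str   # e.g. "fleet telemetry", "staged test track"
    geography: str           # where the miles were driven
    date_range: str
    known_bias_notes: str    # documented gaps, e.g. weather or demographics

record = ProvenanceRecord(
    dataset_id="urban-night-v3",
    collection_method="fleet telemetry",
    geography="Phoenix, AZ suburbs",
    date_range="2017-01 to 2017-06",
    known_bias_notes="few pedestrians in rain; limited snow and ice coverage",
)
print(record)
```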

  6. Auditability

Per USACM/EUACM: “Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.”

Many of the automakers are rushing ahead with their self-driving car development. Little effort is being expended to keep track of the decisions made and how versions of the AI have been modified over and over again.

As I have previously predicted, once we sadly have some serious and deadly incidents with self-driving cars, all of a sudden there is going to be an uproar when the automakers and tech companies turn out to be empty-handed when it comes to showing what they opted to do during their development efforts.

The slope is even more slippery for self-driving cars, since the AI is going to be learning on-the-fly, and thus what was developed in the backroom might no longer resemble what was running on a self-driving car at the time it got into an accident. We need to push the self-driving car makers to anticipate auditability and build it into the processes, code, and approaches they are taking in developing and fielding self-driving cars.
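
One concrete way to anticipate auditability is an append-only trail that pairs each on-road decision with a fingerprint of the exact model that produced it, so investigators can tell whether the fielded, on-the-fly-updated model still matched what left the backroom. The structure below is a hypothetical sketch, not any vendor's logging format.

```python
# Hypothetical audit trail: every decision is logged alongside a hash of
# the deployed model weights, making the audited version unambiguous.
import hashlib
import json
import time

def model_fingerprint(weights: bytes) -> str:
    """Hash the deployed model weights to pin down which version decided."""
    return hashlib.sha256(weights).hexdigest()[:16]

audit_trail = []

def log_decision(fingerprint: str, sensor_summary: dict, action: str) -> None:
    audit_trail.append({
        "time": time.time(),
        "model": fingerprint,       # which (possibly self-updated) model decided
        "inputs": sensor_summary,   # enough context to replay the decision later
        "action": action,
    })

fp = model_fingerprint(b"...serialized model weights would go here...")
log_decision(fp, {"obstacle": "pedestrian", "distance_m": 8.2}, "emergency_brake")
print(json.dumps(audit_trail, indent=2))
```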

  7. Validation and Testing

Per USACM/EUACM: “Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.”

This is one of the scariest aspects of self-driving cars right now. Testing of self-driving cars is more ad hoc than rigorous. Some self-driving cars are being put onto our roadways with hardly any in-depth testing. Though this sounds bad, admittedly the self-driving car makers are placing an “engineer” in the car who can take over the reins from the AI if needed. But even that is not especially safe, since the human overseer has to react in sufficient time to take over the controls, and human reaction time might not always be fast enough to avoid a deadly accident.
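
A quick back-of-the-envelope calculation shows why relying on human takeover is dicey. Using typical textbook values (not measurements from any particular vehicle), the car covers a long distance before the safety driver even touches the controls:

```python
# Rough stopping-distance arithmetic with commonly cited textbook values.
speed_mph = 65
reaction_time_s = 1.5            # typical driver perception-reaction time
decel_g = 0.8                    # hard braking on dry pavement

speed_ms = speed_mph * 0.44704   # convert mph to m/s
reaction_distance = speed_ms * reaction_time_s
braking_distance = speed_ms ** 2 / (2 * decel_g * 9.81)

print(f"Distance covered before takeover: {reaction_distance:.0f} m")   # ~44 m
print(f"Total stopping distance: {reaction_distance + braking_distance:.0f} m")  # ~97 m
```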

Some states are requiring the automakers to report the results of their on-the-road tests. That’s helpful, but the tests are often reported simply as whether the self-driving car required human intervention, offering little about what caused the intervention to be warranted, and not much about how the AI system was then improved to presumably avoid needing that intervention in the future.
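
A richer report than an intervention yes/no flag might look something like the sketch below, where each record carries the triggering cause and the follow-up fix, precisely the information current reporting tends to omit. The field names are illustrative assumptions.

```python
# Hypothetical disengagement report carrying cause and remediation, not
# just a yes/no intervention flag.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisengagementReport:
    date: str
    location: str
    cause: str                   # what forced the human to take over
    ai_state_at_handoff: str     # what the automation believed at the time
    remediation: Optional[str]   # how the system was changed afterward

report = DisengagementReport(
    date="2017-09-14",
    location="suburban intersection",
    cause="failed to detect cyclist in low sun glare",
    ai_state_at_handoff="planned unprotected left turn",
    remediation="retrained perception on glare scenarios; added lens filter",
)
print(report)
```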

Conclusion

I applaud the USACM and EUACM for crafting and publishing the algorithmic transparency and accountability principles. We all now need to get on that bandwagon and ensure that stakeholders are aware of those principles and are going to act on them. I have herein discussed how those principles apply to self-driving cars.

In comparison to many other algorithmic decision-making contexts, the self-driving car is one of the most serious situations we can consider. Imagine that we are aiming toward ultimately having millions upon millions of self-driving cars on our roadways, operated entirely by AI and its automated systems and algorithms. The potential for danger and destruction is tremendous.

Many advocates of self-driving cars argue that we could do away with the 30,000 or so annual human deaths caused by human error in driving a car. Yes, self-driving cars might make a dent in the volume of such human-error deaths, but the potential for hundreds of thousands of deaths due to buggy or haywire algorithms spread across millions of self-driving cars should give us all added pause. I am not saying the sky is falling, but I am saying that we need to give extra serious attention to algorithmic transparency when we are handing over the keys to AI-driven multi-ton cars that can, at their discretion, potentially crash into and kill humans.

This content is originally posted on AI Trends.