Chief Safety Officers Needed in AI: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Many firms think of a Chief Safety Officer (CSO) in a somewhat narrow manner, as someone who deals with in-house occupational health and safety aspects occurring solely in the workplace. Though adherence to proper safety practices within a company is certainly paramount, there is an even larger role for CSOs that has been sparked by the advent of Artificial Intelligence (AI) systems. Emerging AI systems that are being embedded into a company’s products and services have stoked the realization that a new kind of Chief Safety Officer is needed, one with wider duties and requiring a dual internal/external persona and focus.

In some cases, especially life-or-death kinds of AI-based products such as AI self-driving cars, it is crucial that there be a Chief Safety Officer at the highest levels of a company. The CSO needs to be provided with the kind of breadth and depth of capability required to carry out their now fuller charge. By being at or within the top executive leadership, they can aid in shaping the design, development, and fielding of these crucial life-determining AI systems.

Gradually, auto makers and tech firms in the AI self-driving car realm are bringing on-board a Chief Safety Officer or equivalent. It’s not happening fast enough, I assert, yet at least it is a promising trend and one that needs to speed along. Without a prominent position of Chief Safety Officer, it is doubtful that auto makers and tech firms will give the requisite attention and due care toward safety of AI self-driving cars.

I worry too that those firms not putting in place an appropriate Chief Safety Officer are risking not only the lives of those who will use their AI self-driving cars, but also putting into jeopardy the advent of AI self-driving cars all told.

In essence, firms that give lip service to the safety of AI self-driving car systems, or that inadvertently fail to give the utmost attention to safety, are likely to bring forth adverse safety events on our roadways. The public and regulators will react not just toward the offending firm; such incidents will spark an outcry and become an overarching barrier to any furtherance of AI self-driving cars.

Simply stated, for AI self-driving cars, the chances of a bad apple spoiling the barrel are quite high, and that is something all of us in this industry live on the edge of each day.

When I spoke with Mark Rosekind, Chief Safety Innovation Officer at Zoox, at a recent Autonomous Vehicle event in Silicon Valley, he emphasized how vital safety considerations are in the AI self-driving car arena. His years as Administrator of the National Highway Traffic Safety Administration (NHTSA) and his service on the board of the National Transportation Safety Board (NTSB) provide an on-target skillset and base of experience for his role. For those of you interested in the overall approach to safety that Zoox is pursuing, take a look at their posted report: https://zoox.com/safety/

Those of you who follow my postings closely will remember that I previously mentioned the efforts of Chris Hart regarding the safety aspects of AI self-driving cars. As a former chairman of the NTSB, he brings key insights into what the auto makers and tech firms need to be doing about safety, along with offering important views that can help shape regulations and regulatory actions (see his web site: https://hartsolutionsllc.com/). You might find of interest his recent blog post about the differences between aviation automation and AI self-driving cars, which dovetails with my remarks on the same topic.

For Chris Hart’s recent blog post, see: http://www.thedrive.com/tech/26896/self-driving-safety-steps-into-the-unknown

For my prior posting about AI self-driving car safety and Chris Hart’s remarks on the matter, see: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For my posting about how airplane automation is not the same as what is needed for AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/airplane-autopilot-systems-self-driving-car-ai/

Waymo, the Google/Alphabet entity well-known for its prominence in the AI self-driving car industry, has also brought on-board a Chief Safety Officer, namely Debbie Hersman. Besides having served on the NTSB and having been its chairman, she was also the CEO and President of the National Safety Council. Her arrival at Waymo is a welcome relief, since it sends a signal to the rest of the AI self-driving car makers that this is a crucial role and one they too need to embrace if they aren’t already doing so.

Uber recently brought on-board Nat Beuse to head their safety efforts. He had been with the U.S. Department of Transportation and oversaw vehicle safety efforts there for many years. For those of you interested in the safety report that Uber produced last year, coming after their internal review of the Uber self-driving car incident, you can find the report posted here: https://www.uber.com/info/atg/safety/

I’d also like to mention the efforts of Alex Epstein, Director of Transportation at the National Safety Council (NSC). We met at an inaugural conference on the safety of AI self-driving cars, and his insights and remarks were spot-on about where the industry is and where it needs to go. At the NSC he is leading their Advanced Automotive Safety Technology initiative. His public outreach efforts are notable, and the MyCarDoesWhat campaign is an example of how we need to aid the public in understanding the facets of car automation: https://mycardoeswhat.org/

Defining the Chief Safety Officer Role

I have found it useful to clarify what I mean by the role of a Chief Safety Officer in the context of a firm that has an AI-based product or service, particularly such as the AI self-driving car industry.

Take a look at my Figure 1.

As shown, the Chief Safety Officer has a number of important role elements. These elements all intertwine with each other and should not be construed as independent of one another. They form an integrated mesh of the safety elements that need to be fostered and led by the Chief Safety Officer. Allowing any one element to languish or be undervalued is likely to undermine the integrity of any safety-related programs or approaches undertaken by a firm.

The nine core elements for a Chief Safety Officer consist of:

  •         Safety Strategy
  •         Safety Company Culture
  •         Safety Policies
  •         Safety Education
  •         Safety Awareness
  •         Safety External
  •         Safety SDLC
  •         Safety Reporting
  •         Safety Crisis Management

I’ll next describe each of the elements.

I’m going to focus on the AI self-driving car industry, but you can hopefully see how these elements can be applied to other areas of AI that involve safety-related AI-based products or services. Perhaps you make AI-based robots that will be working in warehouses or factories; these elements would pertain equally there.

I am also going to omit the other kinds of non-AI safety matters that the Chief Safety Officer would likely encompass, which are well documented already in numerous online Chief Safety Officer descriptions and specifications.

Here’s a brief indication about each element.

  •         Safety Strategy

The Chief Safety Officer establishes the overall strategy of how safety will be incorporated into the AI systems and works hand-in-hand with the other top executives in doing so. This must be done collaboratively since the rest of the executive team must “buy into” the safety strategy and be willing and able to carry it out. Safety is not an island unto itself. Each function of the firm must have a stake in the safety strategy and will be required to ensure it is being implemented.

  •         Safety Company Culture

The Chief Safety Officer needs to help shape the culture of the company toward a safety-first mindset. Oftentimes, AI developers and other tech personnel are not versed in safety and might have come from a university setting wherein AI systems were built as prototypes and safety was not a particularly pressing topic. Some will even potentially believe that “safety is the enemy of innovation,” a false belief that is at times rampant. Shaping the company culture might require some heavy lifting; it has to be done in conjunction with the top leadership team and in a meaningful way rather than a light-hearted or surface-level manner.

  •         Safety Policies

The Chief Safety Officer should put together a set of safety policies indicating how the AI systems need to be conceived of, designed, built, tested, and fielded to embody key principles of safety. These policies need to be readily comprehensible, and there needs to be a clear-cut means to abide by them. If the policies are overly abstract or obtuse, or if they are impractical, they will likely foster a sense of “it’s just CYA,” and the rest of the firm will tend to disregard them.

  •         Safety Education

The Chief Safety Officer should identify the kinds of educational means that can be made available throughout the firm to increase an understanding of what safety means in the context of developing and fielding AI systems. This can be a combination of internally prepared AI safety classes and externally provided ones. The top executives should also participate in the educational programs to showcase their belief in and support for the educational aspects. They should work with the Chief Safety Officer to schedule and ensure that the teams and staff undertake the classes, along with follow-up to ascertain that the education is being put into active use.

  •         Safety Awareness

The Chief Safety Officer should make safety awareness an ongoing activity, often fostered by posting AI safety related aspects on the corporate Intranet, along with providing other avenues in which AI safety is discussed and encouraged, such as brown bag lunch sessions and the sharing of AI safety tips and suggestions from within the firm. This needs to be an ongoing effort, not a one-time push of safety that then decays or becomes forgotten.

  •         Safety External

The Chief Safety Officer should be proactive in representing the company and its AI safety efforts to external stakeholders. This includes doing so with regulators, possibly participating in regulatory efforts or reviews when appropriate, along with speaking at industry events about the safety related work being undertaken and conferring with the media. As the external face of the company, the CSO will also likely get feedback from the external stakeholders, which should then be fed back into the company and especially discussed with the top leadership team.

  •         Safety SDLC

The Chief Safety Officer should help ensure that the Systems Development Life Cycle (SDLC) includes safety throughout each of its stages, whether the SDLC is agile-oriented, waterfall, or any other method being undertaken. Checkpoints and reviews need to include the safety aspects and have teeth, meaning that if safety is either not being included or being shortchanged, this becomes an effort-stopping criterion that cannot be swept under the rug. It is easy during the pressures of development to shove aside the safety portions and coding, under the guise of “getting on with the real coding,” but that’s not going to cut it in AI systems with life-or-death consequences.
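
To make the notion of a checkpoint with teeth more concrete, here is a minimal sketch (in Python) of what a hard safety gate in a build pipeline might look like. The work-item fields and the gate criteria are hypothetical illustrations, not a reference to any particular firm’s tooling; a real implementation would hook into the firm’s actual tracking and CI systems.

```python
# Hypothetical sketch of a "stop the line" safety gate in a build pipeline.
# The WorkItem fields and gate criteria are invented for illustration; a
# real gate would hook into the firm's actual tracking and CI tooling.

import sys
from dataclasses import dataclass

@dataclass
class WorkItem:
    item_id: str
    safety_review_done: bool  # signed off by the safety function
    hazards_addressed: bool   # identified hazards have documented mitigations

def safety_gate(items: list[WorkItem]) -> bool:
    """Return True only if every work item clears the safety checks."""
    failures = [i.item_id for i in items
                if not (i.safety_review_done and i.hazards_addressed)]
    for item_id in failures:
        print(f"SAFETY GATE FAILURE: {item_id} lacks a completed safety review")
    return not failures

if __name__ == "__main__":
    backlog = [
        WorkItem("SENSOR-142", safety_review_done=True, hazards_addressed=True),
        WorkItem("FUSION-88", safety_review_done=False, hazards_addressed=True),
    ]
    # Exiting non-zero halts the pipeline: a shortchanged safety review
    # becomes an effort-stopping event, not a quietly skipped step.
    sys.exit(0 if safety_gate(backlog) else 1)
```

The design point is that a failed gate exits non-zero and halts the effort, making a skipped or shortchanged safety review visible rather than something that can be quietly bypassed.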

  •         Safety Reporting

The Chief Safety Officer needs to put in place a means to keep track of the safety aspects being considered and included in the AI systems, typically an online tracking and reporting system. Out of the tracking system, reporting needs to be made available on an ongoing basis. This includes dashboards and flash reporting, which is vital since if the reporting is overly delayed or difficult to obtain or interpret, it will be considered “too late to deal with,” and the cost or effort to make safety-related corrections or additions will be subordinated.
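
As a simple illustration of flash reporting drawn from such a tracking system, here is a hedged sketch; the record fields and the severity scale are assumptions made for this example rather than features of any specific tracking product.

```python
# Illustrative sketch of flash reporting over a safety tracking log. The
# record fields and the severity scale are assumptions for this example,
# not features of any particular tracking product.

from collections import Counter
from dataclasses import dataclass

@dataclass
class SafetyIssue:
    issue_id: str
    subsystem: str  # e.g., "sensor fusion" or "AI action planning"
    severity: str   # assumed scale: "high" | "medium" | "low"
    resolved: bool

def flash_report(issues: list[SafetyIssue]) -> str:
    """Summarize open issues by severity so problems surface immediately."""
    open_issues = [i for i in issues if not i.resolved]
    counts = Counter(i.severity for i in open_issues)
    lines = [f"Open safety issues: {len(open_issues)}"]
    for severity in ("high", "medium", "low"):
        lines.append(f"  {severity}: {counts.get(severity, 0)}")
    return "\n".join(lines)

issues = [
    SafetyIssue("S-001", "sensor fusion", "high", resolved=False),
    SafetyIssue("S-002", "AI action planning", "medium", resolved=True),
]
print(flash_report(issues))
```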

  •         Safety Crisis Management

The Chief Safety Officer should establish a crisis management approach to deal with any AI safety related faults or issues that arise. Firms often seem to scramble when their AI self-driving car has injured someone, yet this is something that could have been anticipated as a possibility, and preparations could have been made beforehand. The response to an adverse AI safety event needs to be carefully coordinated; the company will be seen as either making sincere efforts about the incident or, if ill-prepared, might make matters worse and undermine its own efforts and those of other AI self-driving car makers.

In Figure 1, I’ve also included my framework of AI self-driving cars.

Each of the nine elements that I’ve just described can be applied to each of the aspects of the framework. For example, how is safety being included into the sensors design, development, testing, and fielding? How is safety being included into the sensor fusion design, development, testing, and fielding? How is safety being included into the virtual world model design, development, testing, and fielding?
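
One way to see how the nine elements cross with the framework is to mechanically generate the review questions, as in this illustrative sketch; the element names come from the list above, the stage names follow the driving-task steps discussed later in this article, and the question template is my own invention for the example.

```python
# Illustrative sketch: cross the nine safety elements with the stages of
# the driving task to generate a review checklist. The element and stage
# names come from this article; the question template is invented.

ELEMENTS = [
    "Safety Strategy", "Safety Company Culture", "Safety Policies",
    "Safety Education", "Safety Awareness", "Safety External",
    "Safety SDLC", "Safety Reporting", "Safety Crisis Management",
]
STAGES = [
    "sensors", "sensor fusion", "virtual world model",
    "AI action planning", "car controls command issuance",
]

def checklist():
    """Yield one review question per (element, stage) pairing."""
    for element in ELEMENTS:
        for stage in STAGES:
            yield (f"How does {element} cover the {stage} design, "
                   f"development, testing, and fielding?")

# 9 elements x 5 stages = 45 questions; show the first few.
for question in list(checklist())[:3]:
    print(question)
```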

You are unlikely to have many safety related considerations in say the sensors if there isn’t an overarching belief at the firm that safety is important, which is showcased by having a Chief Safety Officer, and by having a company culture that embraces safety, and by educating the teams that are doing the development about AI safety, etc. This highlights my earlier point that each of the elements must work as an integrative whole.

Suppose the firm actually does eight of the elements but doesn’t do anything about how to incorporate AI safety into the SDLC. What then?

This means that the AI developers are left on their own to try to devise how to incorporate safety into their development efforts. They might fumble around doing so, or take bona fide stabs at it, though the result will be fragmented and disconnected from the rest of the development methodology.

Worse still, the odds are that the SDLC has no particular place for safety, which means no metrics about safety, and therefore the pressure to not do anything related to safety is heightened, since the metrics measure the AI developers in ways that don’t necessarily have much to do with safety. The point being that each of the nine elements needs to work collectively.

Resources on Baking AI Safety Into AI Self-Driving Car Efforts

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We consider AI safety aspects as essential to our efforts and urge auto makers and tech firms to do likewise.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
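
As a small illustration of the taxonomy just described, here is a sketch that encodes this article’s simplification that anything below Level 5 involves a responsible human driver; the class and field names are hypothetical, and the full SAE J3016 level definitions are more nuanced.

```python
# Sketch encoding this article's simplification that any car below Level 5
# requires a responsible human driver. The class is hypothetical, and the
# full SAE J3016 level definitions are more nuanced than this.

from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    description: str

    @property
    def human_driver_required(self) -> bool:
        # Per the article: below Level 5, the AI and a responsible human
        # driver are co-sharing the driving task.
        return self.level < 5

level3 = AutomationLevel(3, "Advanced automation, human must stay engaged")
level5 = AutomationLevel(5, "Full automation, no human driver involved")
print(level3.human_driver_required)  # True  -> co-sharing and hand-off risks
print(level5.human_driver_required)  # False -> all on the AI's shoulders
```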

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Though I often tend to focus more on the true Level 5 self-driving car, the safety aspects of the less-than-Level-5 cars are especially crucial right now. I’ve repeatedly cautioned that as Level 3 advanced automation becomes more prevalent, which we’re just now witnessing coming into the marketplace, we are upping the dangers associated with the interfacing between AI systems and humans. This includes issues associated with AI-human cognitive disconnects and human mindset dissonance, all of which can be disastrous from a safety perspective. Co-sharing and hand-offs of the driving task, done in real-time at freeway speeds, nearly pokes a stick in the eye of safety. Auto makers and tech firms must get ahead of the AI safety curve, rather than wait until the horse is already out of the barn and it becomes belated to act.

Here are the usual steps involved in the AI driving task (a minimal code sketch of this processing loop follows the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
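
The sketch below renders the above steps as a minimal processing loop. Every function body is a placeholder of my own devising; a real AI self-driving car pipeline involves vastly more machinery, including redundancy, real-time guarantees, and safety monitors.

```python
# Minimal, illustrative rendering of the AI driving task steps listed
# above. Every function body is a placeholder; a real pipeline involves
# far more machinery (redundancy, real-time guarantees, safety monitors).

def collect_and_interpret_sensor_data():
    # Cameras, radar, LIDAR, ultrasonics, and their interpretation.
    return {"camera": [], "radar": [], "lidar": []}

def fuse_sensors(sensor_data):
    # Reconcile overlapping, possibly conflicting sensor readings.
    return {"fused": sensor_data}

def update_virtual_world_model(world_model, fused):
    # Maintain the AI's internal model of the surrounding driving scene.
    world_model.update(fused)
    return world_model

def plan_actions(world_model):
    # Decide upon maneuvers such as lane keeping, braking, and turning.
    return {"steering": 0.0, "throttle": 0.1, "brake": 0.0}

def issue_car_controls(plan):
    # Send validated commands to the car's actuators.
    print(f"commands issued: {plan}")

world_model = {}
for _ in range(3):  # a real car runs this loop continuously in real time
    fused = fuse_sensors(collect_and_interpret_sensor_data())
    world_model = update_virtual_world_model(world_model, fused)
    issue_car_controls(plan_actions(world_model))
```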

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars who continually refer to a Utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the safety topic, let’s consider some additional facets.

Take a look at Figure 2.

I’ve listed some of the publicly available documents that are a useful cornerstone for getting up to speed about AI self-driving car safety.

The U.S. Department of Transportation (DOT) NHTSA has provided two reports that I especially find helpful about the foundations of safety related to AI self-driving cars. Besides providing background context, these documents also indicate the regulatory considerations that any auto maker or tech firm will need to be incorporating into their efforts. Both of these reports have been promulgated under the auspices of DOT Secretary Elaine Chao.

The version 2.0 report is here: https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf

The version 3.0 report is here: https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf

I had earlier mentioned the Uber safety report, which is here: https://www.uber.com/info/atg/safety/

I also had mentioned the Zoox safety report, which is here: https://zoox.com/safety/

You would also likely find of use the Waymo safety report, which is here: https://waymo.com/safety/

I’d also like to give a shout-out to Dr. Philip Koopman, a professor at CMU who has done extensive AI safety related research, which you can find at his CMU web site or at his company web site: https://edge-case-research.com/

As a former university professor, I too used to do research while at my university and also did so via an outside company. It’s a great way to infuse the core foundational research that you typically do in a university setting with the more applied kind of efforts that you do in industry. I found it a handy combination. Philip and I seem to end up at many of the same AI self-driving car conferences, whether as speakers, panelists, or interested participants.

Conclusion

For those Chief Safety Officers of AI self-driving car firms that I’ve not mentioned herein, you are welcome to let me know that you’d like to be included in future updates that I do on this topic. Plus, if you have safety reports akin to the ones I’ve listed, I welcome taking a look at those reports and will be glad to mention those too.

One concern being expressed about the AI self-driving car industry is whether the matter of safety is being undertaken in a secretive manner that tends to keep the auto makers and tech firms in the dark about what the other firms are doing. Looking at the car industry, it is apparent that auto makers have traditionally competed on their safety records and used them to advantage in advertising and selling their wares.

Critics have voiced that if the AI self-driving car industry perceives itself to also be competing on safety, there would naturally be a basis to purposely avoid sharing safety aspects with one another. You can’t seemingly have it both ways: if you are competing on safety, it is presumed to be a zero-sum game, in which those that do better on safety will sell more than those that do not, so why help a competitor get ahead?

This mindset needs to be overcome. As mentioned earlier, it won’t take much in terms of a few safety related bad outcomes to potentially stifle the entire AI self-driving car realm. If there is a public outcry, you can expect that this will push back at the auto makers and tech firms. The odds are that regulators would opt to come into the industry with a much heavier hand. Funding for AI self-driving car efforts might dry up. The engine driving the AI self-driving car pursuits could grind to a halt.

I’ve described the factors that can aid or impede the field: https://www.aitrends.com/ai-insider/key-equation-for-predicting-year-to-prevalence-for-ai-self-driving-cars/

Existing disengagement reporting is weak and quite insufficient: https://www.aitrends.com/business-applications/disingenuous-disengagements-reporting-ai-self-driving-cars/

A few foul incidents will be perceived as a contagion; see my article: https://www.aitrends.com/selfdrivingcars/accidents-contagion-and-ai-self-driving-cars/

For my Top 10 predictions, see: https://www.aitrends.com/selfdrivingcars/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

There are efforts popping up to try to see if AI safety can become a more widespread and overt topic in the AI self-driving car industry. It’s tough, though, to overcome all of those NDAs (Non-Disclosure Agreements) and concerns that proprietary matters might be disclosed. Regrettably, it might take a calamity to generate enough heat to make things percolate, but I hope it doesn’t come down to that.

The adoption of Chief Safety Officers by the myriad of auto makers and tech firms pursuing AI self-driving cars is a healthy sign that safety is rising in importance. These positions have to be taken seriously, with a realization at the firms that they cannot just put the role in place to somehow checkmark that they did so.

For Chief Safety Officers to do their job, they need to be at the top executive table and be considered part-and-parcel of the leadership team. I am also hoping that these Chief Safety Officers will band together into an across-the-industry “club” that can embrace a safety-sharing mantra and use their positions and weight to get us further along in permeating safety throughout all aspects of AI self-driving cars. Let’s make that a reality.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.