Current Generation of Self-Driving Car AI Needs a Safety Certification Process


The current generation of AI self-driving cars does not have a certification process akin to standards in the aviation industry, Francis Govers of Bell Helicopter told an audience at AI World 2018 in Boston, in a talk entitled “Safety and AI: Certification and Testing.”

He described the AI within a self-driving car as a “critical safety system” that requires a regime of certification and testing to ensure the same confidence in safety that the public has with the airline industry.

“We are seeing an explosion in advances in AI and resulting applications,” he said. For self-driving cars, these include object recognition, lane detection, learned engine management, learned braking, social interaction/polite driving, sleepy driver detection, adaptive cruise control, and fault detection.

“A Tesla has many sensors,” he said, then asked, “Which ones are safety critical?”

New Level of Challenge for AI Developers

Self-driving cars pose a new level of challenge for AI developers. “We want the AI to understand the intent of the operator. The self-driving car needs a goal-oriented strategy that adapts to real-world conditions in real time.”

Currently, critical software must be a deterministic system, meaning it always gives you the same output based on the same input, and it takes exactly the same amount of time to come to its decision. The difficulty is, “AI systems do not work like that. An AI system will never be 100% correct; it will make mistakes.”
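The contrast can be made concrete with a small sketch. The function below (illustrative only, not drawn from the talk) is deterministic in the sense used for certified safety-critical software: the same input always produces the same output, which makes its behavior exhaustively testable in a way a learned model's is not.

```python
# Illustrative sketch of a deterministic braking rule -- the same input
# always yields the same output, unlike a learned model whose behavior
# can shift with retraining or stochastic inference.

def brake_command(distance_m: float, speed_mps: float) -> float:
    """Map distance-to-obstacle and current speed to a braking
    fraction in [0, 1], based on time to impact."""
    if speed_mps <= 0:
        return 0.0
    time_to_impact = distance_m / speed_mps
    if time_to_impact < 2.0:   # under 2 s to impact: brake hard
        return 1.0
    if time_to_impact < 5.0:   # moderate braking zone
        return 0.5
    return 0.0

# Repeated calls with identical inputs always agree.
assert brake_command(30.0, 20.0) == brake_command(30.0, 20.0)
```

Every branch of such a function can be enumerated and verified; a neural network offers no comparable guarantee, which is the certification gap Govers describes.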

The airline industry defines five software assurance levels. Level A covers catastrophic failure conditions and requires a failure rate on the order of one in a billion operations; Levels B through E cover progressively less severe failure conditions with correspondingly less stringent requirements.

Alternative strategies include having special software serve as a “watchdog” over the AI, which takes over when certain conditions are detected, such as the car going too fast. “The trick is to come up with the boundary condition, and make a decision before something bad happens,” Govers said.

Another strategy is performance prediction: precisely forecasting future AI software performance so that developers can devise risk-mitigation strategies in advance. Predictable performance is crucial to the development of standards for AI use in safety-critical applications.
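One standard statistical tool for this kind of forecasting (an assumption here; the talk does not name a specific method) is the "rule of three": after n independent failure-free trials, the approximate 95% upper confidence bound on the true failure rate is 3/n. It also illustrates why a one-in-a-billion target is so demanding to demonstrate by testing alone.

```python
# Illustrative sketch: bounding an AI system's failure rate from test
# evidence using the statistical "rule of three". This is a generic
# technique, not a method attributed to the talk.

def rule_of_three_bound(n_trials: int) -> float:
    """Approximate 95% upper confidence bound on the failure rate,
    given n_trials independent trials with zero observed failures."""
    if n_trials <= 0:
        raise ValueError("need at least one trial")
    return 3.0 / n_trials

# Demonstrating a one-in-a-billion rate this way takes ~3 billion
# failure-free trials:
trials_needed = 3.0 / 1e-9
```

The scale of testing implied by that arithmetic is one reason watchdogs and other architectural mitigations are attractive alternatives to certifying the AI directly.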

Govers has been working with official bodies to develop methods to certify the safety of the AI systems in self-driving vehicles. “We want to be designing for the safety of the autonomous system,” he said.

Author of “Artificial Intelligence for Robotics”

He is the author of the recently published “Artificial Intelligence for Robotics” from Packt Publishing, available on Amazon. The book addresses machine learning techniques applied to ground mobile robots. The AI section begins with convolutional neural networks for object recognition, then extends to reinforcement learning and genetic algorithms.

An unmanned systems engineer for Bell Helicopter, Govers works on vehicle management systems for the Bell V-247 Unmanned Tiltrotor project. These systems include: autonomous takeoff; landing (precision, off-field, on moving ships); autonomous aerial refueling; mission planning; detect-and-remain-clear (see and avoid); GPS-denied navigation; and communications and data links.

Govers is the author of over 45 articles and two books and has 13 patents pending.

His years of involvement in the space program began in 1978 as a Space Communications Specialist in the Air Force. He later worked in Command and Control for the Space Station for McDonnell Douglas. He was the founder of the Advanced Simulation Laboratory for TASC Inc., and has been program manager on a number of large research projects.

He is a former Search and Rescue pilot with the Civil Air Patrol, an amateur astronomer, a self-described bad artist, a musician, and an avid sailor.

For more information, go to Bell Flight.