What Tesla Crashes Can Teach Us About Self-Driving Cars


The journey to a future dominated by autonomous vehicles hit a few speed bumps recently, with news of not one but two crashes, one of them fatal, involving Teslas operating in autonomous driving mode. The collisions raise questions about how far self-driving cars really are from being safe enough for widespread adoption, and whether the issues, human or computer, should be a roadblock to that goal.

“The technology isn’t ready. Evolution of the new technology has to unfold over time, and it’s hard to say how long that will take,” says Michael Clamann, senior research scientist at Duke University’s Humans and Autonomy Lab (HAL).

“For semi-autonomous cars, we need reliable systems that keep the driver aware of what is going on around them and that can quickly and effectively return control in the event of an emergency. For fully autonomous cars we need sensors and algorithms that are effective enough to work in all conditions and account for all possible contingencies. For both, we need a regulatory environment that sets standards for everyone’s safety.”

Clamann says that the two crashes show us that Tesla’s sensors and collision algorithms aren’t quite perfected yet.

The sensors in the cars are something that other researchers have noted as an issue as well. John Dolan, a principal systems scientist in the Robotics Institute at Carnegie Mellon University and an expert in autonomous driving, says that in an ideal situation a vehicle would have a number of different sensors, including GPS with lane-level localization and a laser sensor that won't be blinded by sunshine.
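To make the idea of redundant sensing concrete, here is a purely illustrative sketch, not Tesla's or any manufacturer's actual software: a fusion layer that cross-checks several sensors and hands control back to the driver when too few of them are trustworthy. All names, fields, and thresholds are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical, simplified sensor reading; real systems use far richer data.
@dataclass
class Detection:
    obstacle_seen: bool   # did this sensor report an obstacle ahead?
    confidence: float     # sensor self-reported confidence, 0.0 to 1.0
    degraded: bool        # e.g. a camera washed out by direct sunshine

def decide(camera: Detection, radar: Detection, lidar: Detection,
           min_confidence: float = 0.6) -> str:
    """Cross-check redundant sensors before acting on any single reading."""
    usable = [d for d in (camera, radar, lidar) if not d.degraded]

    # If too few sensors are trustworthy, return control to the driver.
    if len(usable) < 2:
        return "alert_driver"

    agreeing = [d for d in usable
                if d.obstacle_seen and d.confidence >= min_confidence]

    # Require at least two independent modalities to agree before braking.
    if len(agreeing) >= 2:
        return "brake"
    return "continue"

# Example: the camera is blinded by sunshine, but radar and lidar agree.
print(decide(Detection(False, 0.2, degraded=True),
             Detection(True, 0.9, degraded=False),
             Detection(True, 0.8, degraded=False)))  # -> "brake"
```

The point of the sketch is the design choice Dolan's comment implies: no single sensor, however good, should be a single point of failure when lighting or weather can blind it.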

Read the source article at Fast Company