Protecting against the cybersecurity risk of self-driving cars


Ten million self-driving cars will be on the road by 2020, according to an in-depth report by Business Insider Intelligence. Proponents of autonomous vehicles say that the technology has the potential to benefit society in a range of ways, from boosting economic productivity to reducing urban congestion. But others—including some potential consumers and corporate risk managers—have expressed serious concerns over the cybersecurity of the so-called fleet of the future. As one tech reporter put it: “Could cybercriminals remotely hijack an autonomous car’s electronics with the intent to cause a crash? Could terrorists commandeer the vehicles as weapons? Could data stored onboard be unlocked?”

We asked professor Engin Kirda—a systems, software, and network security expert who holds joint appointments in the College of Computer and Information Science and the College of Engineering—to assess the cybersecurity risk of self-driving cars, with a particular focus on how carmakers are working to keep autonomous vehicles safe from hackers.

Experts say that self-driving cars will be particularly susceptible to hackers. What makes them so vulnerable?

The answer to this question depends on what kind of self-driving car we are talking about and how connected the car is to the outside world. If the car offloads significant computation to the cloud, needs some sort of internet connectivity to function, or relies entirely on outside sensors to make its decisions, then yes, it might be susceptible to hackers.

In principle, any computerized system that has an interface to the outside world is potentially hackable. Any computer scientist knows that it is very difficult to create software without bugs—especially when the software is very complex. Some of those bugs are security vulnerabilities, and some of those vulnerabilities are exploitable. Hence, a system as complex as a self-driving car might contain vulnerabilities that hackers can exploit, or it might rely on sensors that hackers can trick when the car makes decisions. For example, a road sign that looks like a stop sign to a human might be constructed to look like a different sign to the car. In fact, more and more research papers have been appearing lately that demonstrate such attacks against machine learning systems.
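To make the sensor-trickery idea concrete, here is a minimal, purely illustrative sketch of the kind of attack those research papers study, often called an adversarial example. It is not drawn from Kirda's work or any real vehicle system: the "sign" is just a feature vector, and the classifier, weights, and perturbation size are all hypothetical. The point it demonstrates is that a small, deliberately chosen change to every input feature can flip a model's decision even though the change is tiny feature by feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier over a 100-dimensional "image" of a road sign:
# a score above zero means the model reads the input as a stop sign.
# The weights are hypothetical stand-ins for a trained model.
w = rng.normal(size=100)
b = 0.0

def label(x):
    return "stop sign" if x @ w + b > 0 else "other sign"

# A clean input that the model confidently labels as a stop sign.
x_clean = 0.1 * np.sign(w)
print("clean score: %+.2f -> %s" % (x_clean @ w + b, label(x_clean)))

# Adversarial perturbation (fast-gradient-sign style): nudge every feature
# a small amount against the gradient of the score. For this linear model,
# the gradient of the score with respect to x is simply w.
epsilon = 0.15
x_adv = x_clean - epsilon * np.sign(w)

print("perturbed score: %+.2f -> %s" % (x_adv @ w + b, label(x_adv)))
print("largest per-feature change: %.2f" % np.max(np.abs(x_adv - x_clean)))
```

In this toy setting the perturbed input differs from the original by at most 0.15 in any single feature, yet the classifier's answer flips from "stop sign" to "other sign", which is the same qualitative effect the stop-sign example in the interview describes.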

Read the source article at Phys.org.