Catastrophic Cyber Hacking of Self-Driving Cars


By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor

In a few years, you’ll be enjoying a leisurely ride in your self-driving car. Without having to watch the road, you’ll be sipping your brandy as a passenger in your own car, leaving the bothersome chore of driving to the AI. Not a care in the world. Well, except for the fact that your self-driving car might be susceptible to cyber hacking. Imagine if your car suddenly “decided” to veer off-course and took you into a blind alley where masked thugs were ready to drag you out of your vehicle and rob you (they not only directed the car to their location, they also forced it to unlock and open the doors so they could more easily grab you). Or suppose that “just for fun” someone decided to convince your self-driving car to drive straight off a cliff. None of these scenarios is attractive, and yet all of them are potentially possible. The key to preventing these calamities is to make sure that self-driving cars have top-notch, airtight computer security.

I can’t say for sure that self-driving cars will indeed have tough-as-nails computer security. Right now, the security side of self-driving cars is getting scant attention. In an effort to get self-driving cars to actually be viable, most of the self-driving car makers are putting the bulk of their attention into the core fundamentals of making the car drive. Concerns about cyber hacking are way down on the list of priorities. Meanwhile, we are made aware daily of new hacks that enterprising researchers and others are finding in existing human-driven cars.

The irony is that the more sophisticated self-driving cars become, the greater the chance that a hack can produce catastrophic results.

Why? Simply because the more the automation can do to control the car, the more readily a hack or hacker can force the car to do something untoward. If you are driving a classic 1920s Model T, it is nearly impossible to hack it because there isn’t any automation on it to be hacked. On the other hand, a fully autonomous Level 5 self-driving car has the potential to do whatever bidding a hacker wants, since the AI is in complete control of the operation of the car. A hack can take over the steering, the braking, the acceleration, and even the internal temperature and air conditioning, the radio, the door locks, and anything else that is connected into the controls of the vehicle.

I am guessing that you are wondering how a hacker or a hack could subvert the control of your self-driving car. When I refer to a hack, I mean that a malicious program or application has gotten into the controls of your self-driving car; when I refer to a hacker, I mean that a human has been able to maliciously take over control of your car. The human hacker might be standing on the sidewalk as your car goes past, with a brief moment to access your car (based on a limited range of trying to electronically communicate with it, such as via Bluetooth), or sitting in the car next to you on the freeway. Or the hacker could be hundreds of miles away, using the Internet to gain access into your self-driving car.

There are numerous ways to try to usurp the control of your self-driving car. These are the most promising methods: (a) Remote access via the Internet, (b) Remote access locally such as via Bluetooth, (c) Fooling the sensory devices of your self-driving car, (d) Planting a specialized physical device into your self-driving car, (e) Attaching a specialized physical device onto the exterior of your self-driving car, (f) Inserting a backdoor into the self-driving car via the maker of the car. Let’s take a look at each of these methods.

I’ll start with a recent news story that involved the placement of a physical device into a car, doing so by connecting to the On-Board Diagnostics (OBD) of the car.  This was done on a relatively conventional modern car, and offers a real-world example of what can potentially be done to a self-driving car. The case involved a computer security firm that wanted to see if they could take control of a moving car and somehow subvert the car.

As background about today’s cars and their technology, we all know that on our dashboards there are so-called “idiot lights” that illuminate to tell us when our gas tank is nearing empty or when the oil is getting low. You might have also heard a TV or radio ad placed by a car mechanic or car repair service that says they can ascertain the error conditions of your car by bringing it into their shop, wherein they can then connect to your car to read the diagnostic codes.  Turns out that since 1996, all cars and light trucks sold in the United States must have an under-the-dash portal that allows for the reading of diagnostic codes. A car mechanic or repair shop can plug into that portal and see what error codes the car has experienced. This is handy for doing car repairs.

There are standards for these diagnostic codes. The Diagnostic Trouble Codes (DTC) standard dictates that each error code begins with a letter, namely P for Powertrain, B for Body, C for Chassis, and U for Network, followed by a four-digit code. You can easily look up the code in a chart and then know what errors the car has experienced. Your dashboard works in much the same way: it reads the codes and then illuminates a particular icon, such as a low-fuel icon or a worn-brake-pads icon. In some cases, the car maker opted to show just a generic indicator such as “car needs service” rather than trying to display the specifics of the numerous possible codes.
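To make the DTC format concrete, here is a minimal sketch of a decoder. It follows only the letter-plus-four-digits layout described above; the example code P0301 and the validation rules are illustrative, not a full implementation of the standard (real DTC definitions have more nuance than this).

```python
# Minimal illustrative decoder for OBD-II Diagnostic Trouble Codes (DTCs).
# A DTC is a letter (P, B, C, or U) followed by four digits; the letter
# identifies the subsystem, as described in the paragraph above.

SYSTEMS = {
    "P": "Powertrain",
    "B": "Body",
    "C": "Chassis",
    "U": "Network",
}

def decode_dtc(code: str) -> dict:
    """Split a DTC like 'P0301' into its subsystem and numeric portion."""
    code = code.strip().upper()
    if len(code) != 5 or code[0] not in SYSTEMS or not code[1:].isdigit():
        raise ValueError(f"not a valid DTC: {code!r}")
    return {"system": SYSTEMS[code[0]], "number": code[1:]}

print(decode_dtc("P0301"))  # {'system': 'Powertrain', 'number': '0301'}
```

A dongle or repair-shop scan tool is essentially doing this same lookup against a much larger chart of code meanings.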

There are many companies that now provide a device that you can purchase as a consumer and connect to the OBD portal. These devices, referred to as dongles, connect to the latest version of the OBD, known as OBD2 or OBD-II. Once you’ve connected the device to your under-the-dash OBD2 portal, the device will retrieve the error codes from your car, storing the codes much as a USB memory stick would, and you can then remove the dongle and plug it into your laptop’s USB port, allowing you to see a readout of the diagnostic codes. More costly dongles have an LED display that shows the error codes directly, thus bypassing the need to remove the dongle and place it into your laptop.

Even more advanced dongles will allow you to communicate with the dongle via your smartphone. Using Bluetooth, the dongle will allow you to connect your smartphone to it. You download an app provided by the dongle’s maker, and the app communicates with the dongle and tells you what it finds out from your car. So far, this is all innocent enough and certainly seems like a handy boon for those who want to know what their car knows.

Here’s what the computer security firm recently did. The smartphone app communicates with the dongle and tries to make a secure connection so that no one else can intervene. Using a brute-force technique, the computer security firm found the secret PIN and was able to connect to the dongle via Bluetooth, masquerading as the person with the proper smartphone app that was supposed to be communicating with the dongle.
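To see why a short numeric PIN falls so quickly to brute force, consider this toy sketch. The “dongle” below is a stand-in I invented for illustration, as is the PIN; the real attack worked against the actual Bluetooth pairing protocol of a commercial dongle, not against code like this.

```python
# Toy illustration of brute-forcing a short numeric pairing PIN.
# A 4-digit PIN has only 10,000 possibilities, so exhaustively trying
# every candidate is trivial for a computer.
import itertools

SECRET_PIN = "4831"  # hypothetical PIN known only to the 'dongle'

def dongle_accepts(pin: str) -> bool:
    """Stand-in for the dongle's pairing check."""
    return pin == SECRET_PIN

def brute_force_pin(length: int = 4) -> tuple[str, int]:
    """Try every numeric PIN of the given length; return (pin, attempts)."""
    candidates = itertools.product("0123456789", repeat=length)
    for attempts, digits in enumerate(candidates, start=1):
        guess = "".join(digits)
        if dongle_accepts(guess):
            return guess, attempts
    raise RuntimeError("PIN not found")

pin, attempts = brute_force_pin()
print(pin, attempts)  # '4831' found on attempt 4832; worst case is 10,000
```

The search space is so small that even rate-limited guessing over Bluetooth can exhaust it in a practical amount of time, which is why pairing schemes need more than a short static PIN.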

Your first thought might be that it really doesn’t seem like much of a hack, since all they can do is read the error codes of the car. Big deal, you say. Unfortunately, there is something about the OBD portal that you need to know. Not only can the OBD portal obtain info from the automation of the car, but it can also convey information into the automation of the car, including the potential to reprogram aspects of the car. Yikes! That’s right, built into every car sold in the United States since 1996, there is a handy little way to sneak into the automation of your car.

This is a case of relying on “security through obscurity,” meaning that most people have no idea that the OBD is a two-way street, so to speak: it can read from, and also write into, the automation of the car. Usually, only those within the car industry know this is possible. Of course, any determined car hacker readily knows about it too. Normally there isn’t an easy way to get direct access to the OBD portal in your car, since a hacker would need to break into the car to reach under the dash and connect to the portal. But by plugging in a dongle and making it reachable via Bluetooth, voila, you have made it easy. Your actions have handed control of your car to someone outside the vehicle who maliciously wants to take it over.

In the case of the computer security researchers, they were able to inject malicious messages into the car. They had a human start the car and drive it for a distance, and then, via their own smartphone app, through the dongle and the OBD, suddenly told the car that the engine should shut down. The car happily obliged. Imagine if you had been in the car: it was zooming along and all of a sudden, for no apparent reason, the engine stops. This could have led to a car accident and possible deaths. The computer security research firm did this as an exercise to show what is possible, and no one was actually harmed in proving it. The company that makes the dongle, the Bosch Drivelog Connector, quickly implemented a fix, and pointed out that the hacker would have needed to be within Bluetooth range to exploit this hole.

You might also think that you can avoid this kind of catastrophe by simply not installing a dongle onto your car’s OBD. Let’s move forward in time and think about this. Suppose you have a self-driving car. You might decide to let others use your self-driving car when you don’t need it, acting like your own version of Uber and trying to pick up some extra dough by essentially renting out your car. The person using your car could put such a dongle onto the OBD. Some say that you can just put tape over the portal to stop someone from using this exploit, or perhaps install some other locking mechanism there. Yes, these are possibilities, each with their own vulnerabilities, and we’ll be seeing more about this once self-driving cars come to fruition.

Currently, some insurance companies offer incentives to human drivers to plug a dongle into their OBD. A car insurance company might offer discounted rates to drivers who always stay within the speed limit and don’t do any harsh braking. People are willing to provide this info to the insurance companies in order to get a break on their car insurance premiums. Companies that have a fleet of cars or trucks also use these dongles, either to catch drivers who drive erratically or to detect whether drivers are taking side trips rather than driving directly to their destinations. The point is that the OBD and the dongles are here and now, and unlikely to be stricken from modern cars. We are going to have them on self-driving cars, for sure.

Modern cars have a Controller Area Network (CAN), a small network within the car that allows the various electronic devices to communicate with each other. There are Electronic Control Units (ECUs) for the various components of the car, such as the steering, the braking, the accelerator, the engine, and so on. The ECUs communicate via the CAN. Via the OBD, you can get into the middle of the messages going back and forth on this CAN network. Think of it like your WiFi at home, and suppose that someone else jumped onto your WiFi. They could read the messages of your home mobile devices and laptops. They could also take control of your home printer, your home lights, or other Internet of Things devices connected to your WiFi.
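The reason injected messages work is that a classic CAN frame carries an identifier and a small payload, but no indication of who actually sent it. Here is a toy sketch of that property; the arbitration ID, the payload, and the “engine ECU” below are all invented for illustration and are not real ECU commands.

```python
# Illustrative sketch of why message injection works on a classic CAN bus:
# frames carry an arbitration ID and up to 8 data bytes, but no sender
# authentication, so any node on the bus can emit any ID.
from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int   # 11-bit identifier on a classic CAN bus
    data: bytes           # 0 to 8 payload bytes

class ToyEngineEcu:
    """Stand-in ECU that shuts the engine off when it sees 'its' frame."""
    CONTROL_ID = 0x2A0    # hypothetical ID this ECU listens for

    def __init__(self):
        self.running = True

    def on_frame(self, frame: CanFrame):
        # The ECU checks WHAT the frame says, never WHO sent it.
        if frame.arbitration_id == self.CONTROL_ID and frame.data == b"\x00":
            self.running = False

ecu = ToyEngineEcu()
# A frame injected via the OBD port looks identical to a legitimate one:
ecu.on_frame(CanFrame(arbitration_id=0x2A0, data=b"\x00"))
print(ecu.running)  # False
```

Because a frame injected through the OBD port is byte-for-byte indistinguishable from one sent by a legitimate ECU, anything that can write to the bus can, in principle, command anything that listens to it.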

As mentioned, putting something inside your car to take control is just one of many ways to maliciously subvert the automation of your car. Another method involves fooling the sensors on your car.

In a famous example demonstrated in 2016, researchers were able to fool the Tesla Autopilot sensors by using off-the-shelf emitting devices that sent visual images, sounds, or radio waves at a Tesla. The car could be drenched in sensory overload, preventing the self-driving features from discerning what is going on; this is a jammer. Or the devices could make the sensors believe an object was in front of the car, such as another car, when there wasn’t another car there at all; this is a ghost maker. Admittedly, all of these tests were done in a very constrained environment, without the car actually moving along the road, so one can criticize them as overly academic. Nonetheless, they show the kind of attack a malicious hacker could try.
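A simple way to see how a “ghost maker” works is with an ultrasonic parking sensor, which estimates distance from the round-trip time of an echo. The timings below are invented for illustration, but the time-of-flight arithmetic is standard: if an attacker’s emitter fires a pulse that arrives before the genuine echo, the sensor computes a much closer, nonexistent obstacle.

```python
# Toy time-of-flight calculation showing how a spoofed early echo creates
# a 'ghost' obstacle for an ultrasonic sensor. Timings are invented.
SPEED_OF_SOUND = 343.0  # m/s in air, approximately

def echo_distance(round_trip_seconds: float) -> float:
    """Sensor's distance estimate: sound travels out and back, so halve it."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

real_echo = 0.0234   # genuine echo from a wall roughly 4 m away
fake_echo = 0.0058   # attacker's pulse arrives before the real echo

# A naive sensor trusts the first echo it hears, so the spoofed pulse wins:
print(round(echo_distance(real_echo), 2))  # ~4.01 m (true obstacle)
print(round(echo_distance(fake_echo), 2))  # ~0.99 m (ghost obstacle)
```

Jamming is the inverse trick: flood the sensor with noise so that no echo can be reliably distinguished, and the car loses that sensory channel entirely.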

A few years ago, there was the case of security researchers who remotely took control of a Jeep Cherokee while it was on the road. They did this via an Internet connection into the car. They were able to remotely turn the steering wheel of the Jeep Cherokee as though it was trying to park, even though it was zooming ahead at 80 miles per hour. In another test with a different brand of car, they were able to convince a Toyota Prius’s collision avoidance system to suddenly apply its brakes, causing an undesired rapid stop. In each case, they were able to exploit the automation of the car. The more the automation could do, the more of the car they could take over. Remember that self-driving cars will be chock full of automation, and everything on the car will be controlled by automation.

Some worry that the increasing use of advanced entertainment systems in cars is opening an additional can of worms. The more your car can do with the Internet, the more chances a malicious hacker has to get electronically into it. Consumers are clamoring for WiFi in their cars. Consumers want their cars to let them cruise the Internet while cruising the open highway. Cars are becoming viable targets for Internet attacks, in a sense at the urging of consumers who want their cars to be Internet-enabled.

Should we become luddites and insist that no more automation should be allowed into our cars? Should we refuse to ride in self-driving cars? I don’t think these are especially viable options. Automation is coming. Self-driving cars are coming. The tide is rising and nothing is going to stop it. That being said, the moment we begin to see real-world instances of self-driving cars being taken over by hackers, you can bet that’s when there will be a hue and cry about cyber security for our cars.

To date, we’ve not had any big moments of cars getting hacked and something terrible occurring. It is like earthquakes: until a massive earthquake happens, we are not thinking about earthquake preparedness. I say that we need to be thinking more seriously about computer security for our cars, now, especially for self-driving cars, since they will be the most vulnerable to malicious control wreaking havoc. We need to yell loudly and implore the self-driving car makers to elevate the importance of computer security.

We also need the AI of self-driving cars to realize when something malicious is taking place. The AI can be watching over the car, trying not only to control it but also to detect when something is amiss. The AI, though, cuts both ways, since we will soon have hackers that try to trick the AI itself into doing something malicious. It’s going to be a cat-and-mouse game, one with life-and-death consequences. Block the hackers. Sell the self-driving cars.

This content is original to AI Trends.