Exploiting Transducers To Break Into AI Systems: Security Issues For Autonomous Cars

By Lance Eliot, the AI Trends Insider

When I was an undergraduate majoring in computer science and electrical engineering, I used to spend a lot of my time in the computer center working on my systems projects. We had a mid-range computer system that was quite powerful for the time period and I often operated the system in addition to writing programs on it.

One day, I had my radio with me and was tuning through the channels when I noticed a pattern to the static on one of the otherwise unused channels. Listening more closely, I could definitely tell that it was not just pure random noise and that it was a pattern of some kind.

Was it finally a sign from the skies that outer space aliens were trying to communicate to us from far away planets?

No, turns out it wasn’t proof of aliens from outer space.

Instead, the radio was picking up the electromagnetic waves being emitted by the mid-range computer system.

I began to pay close attention to what the computer was doing and what I could hear on the radio. Though perhaps I should not admit this, I spent so much time there doing my projects that it seemed like I practically lived there (well, I did keep a sleeping bag there, for those late-night deadline crunches to get my projects done on time). Over time, I enjoyed being able to ascertain what the mid-range computer was doing via just listening to the beeps and dots of sound coming from the radio.

I would tell my friends that the computer was about to print something, and lo and behold, seconds later the printer started. I would say that the computer was rebooting and was at the stage of loading the core part of the operating system. After a while, I could pretty much tell you relatively precisely what the computer was doing at any moment in time, simply by listening to that static-filled radio channel. Those who didn’t know the source of my magic could hear the radio, but to them it seemed that somebody had accidentally left it on a channel that wasn’t playing music, and so they had no clue that I was secretly using it as my spy or co-conspirator, you might say.

It then dawned on me that I could potentially get the computer to whistle a tune (so to speak), by writing a program that would use the memory and processor of the computer in such a fashion that it would produce certain patterns and tones on the radio channel.

Sure enough, after using (or wasting) a sunny weekend that I could have spent at the beach, I proudly installed my program, which would take as input any simple tune and then get the radio to play it via the indirect means of the computer doing all sorts of memory shifting and processor calculations. Cool!
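For the curious, here’s a rough sketch of the idea in modern terms: by alternating between a busy loop and an idle pause at an audio-rate frequency, a program can modulate a machine’s electromagnetic emissions so that a nearby AM radio picks up a tone. This is purely illustrative Python rather than my original program, and whether anything is actually audible depends entirely on the hardware and the radio.

```python
import time

def emit_tone(freq_hz: float, duration_s: float) -> None:
    """Alternate busy and idle half-cycles at freq_hz for duration_s seconds."""
    half_period = 1.0 / (2.0 * freq_hz)
    end = time.time() + duration_s
    while time.time() < end:
        busy_until = time.time() + half_period
        while time.time() < busy_until:
            pass                      # burn cycles: the "loud" half of the cycle
        time.sleep(half_period)       # go quiet: the "silent" half of the cycle

# Play a few notes of a simple tune (approximate pitches C4, D4, E4).
for freq in (262, 294, 330):
    emit_tone(freq, 0.3)
```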

The sensors in the radio consisted of transducers; a transducer is officially defined by the American National Standards Institute (ANSI) as a device that provides a usable output in response to a measurand.

For many years, transduction was considered the conversion of a physical measurand into mechanical energy, such as operating a kinematic control.

With the advent of solid-state electronics, most of today’s transducers or sensors serve to transduce physical phenomena into electrical outputs.

About The Nature Of Transducers

To provide some clarity, let’s define a sensor element or transducer element as a transduction mechanism that will convert one form of energy into another form, while the actual sensor or transducer itself consists of its physical packaging and its external connections.

A sensor system consists of various sensors and transducers that are made up of sensor elements and transducer elements, and it ultimately serves some stated purpose. A digital camera, for example, is a type of sensor system, in a packaging that might include a lens and a housing; this sensor system consists of various sensors and transducers that capture light, translate those physical phenomena into electrical signals, and turn those signals into digital bits (we might assign the values zero and one to the bits).
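To make that last step concrete, here is a minimal sketch of how an analog sensor level gets quantized into digital bits. The 3.3-volt range and the 8-bit depth are made-up illustrative values, not the parameters of any particular camera.

```python
def quantize(voltage: float, v_max: float = 3.3, bits: int = 8) -> int:
    """Map an analog level in [0, v_max] onto one of 2**bits digital codes."""
    voltage = min(max(voltage, 0.0), v_max)        # clamp to the converter's range
    return round(voltage / v_max * (2 ** bits - 1))

print(quantize(0.0))    # 0   -> darkest pixel value
print(quantize(1.65))   # 128 -> mid-level
print(quantize(3.3))    # 255 -> brightest pixel value
```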

For any kind of sensor or transducer system, we would want to consider what accuracy levels it provides, how it deals with noise, what its operating range is, how much distortion it produces, and so on. A passive sensor or transducer system is one that simply receives energy and generates outputs from the input it collects. An active sensor or transducer system, such as a radar unit, a LIDAR unit, or an ultrasonic unit, emits energy and then uses the energy it gets back to produce its outputs.

When you use a digital camera, you are likely vaguely aware that it has certain operating parameters, such as the resolution of the image and whether it can take good pictures in low lighting. Under the hood, there is a lot going on in terms of the nature of the sensing elements, the amplification that occurs when taking a picture, the analog filtering, the data conversion, and so on. Generally, most of the time we don’t really concern ourselves with what’s under the hood. It’s similar to driving a car: we just get in, turn the key, and drive. No need to worry about the pistons, the crankshaft, and the myriad other gears and gadgets that compose the engine. We just put our foot on the gas and go.

For modern-day cars, we are increasingly adding complex sensor and transducer systems. We want our cars to be able to detect if there is a pedestrian standing next to the car and alert us so that when we make a turn we don’t accidentally hit the person. We want a back-up camera so that we can see what’s behind us as we put the car into reverse and back up. More and more, our cars are becoming miracles of state-of-the-art sensors and transducers, able to sense the world around us and then provide that information to us or otherwise alert us to something we should be considering.

Autonomous Cars And Transducer Vulnerabilities

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic Self-Driving Car Institute, we are analyzing the vulnerabilities of the sensors and transducers that AI self-driving cars are being outfitted with. We want to figure out how these systems can be tricked or fooled, either by intent or by happenstance, and find ways to prevent or mitigate those vulnerabilities.

You might be at first puzzled about the potential vulnerabilities.

Let’s take an easy one that used to be quite popular.

Cars for a long time used a physical key in the door and in the ignition, and then began to switch to keyless entry systems. For those of you who remember when we first migrated over to keyless entry systems, there were some nefarious attempts to electronically fool them. An intruder would sit in the parking lot and wait for you to park your car. When you got out of your car, you would naturally use your keyless fob to lock the doors. The intruder would capture the radiated signal, and then wait for you to go into the grocery store. Once you were out of sight, the intruder would emit that same signal to your keyless entry system and fool it into opening the door, and ultimately fool the ignition as well.

Various encryption techniques and token exchanges are used to defeat this kind of heinous act.
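As a rough illustration of the token-exchange idea (a minimal sketch, not any manufacturer’s actual protocol), a challenge-response scheme makes a captured signal worthless because the car issues a fresh random challenge for every unlock attempt:

```python
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(32)   # provisioned into both the car and the key fob

def car_issue_challenge() -> bytes:
    return secrets.token_bytes(16)     # a fresh nonce for each unlock attempt

def fob_respond(challenge: bytes) -> bytes:
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A replayed response only matches the challenge it was recorded against.
old_challenge = car_issue_challenge()
old_response = fob_respond(old_challenge)
print(car_verify(old_challenge, old_response))          # True: legitimate unlock
print(car_verify(car_issue_challenge(), old_response))  # False: replay is rejected
```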

Determined thieves can still potentially use a man-in-the-middle (MITM) attack against keyless entry systems, but it’s pretty hard to do and not something that you’d see done day-to-day in just any neighborhood. The notion of exploiting the sensory or transduction system is referred to by many as a transduction attack.

A transduction attack leverages the physics of a transducer or sensor and tries to exploit its input or its output to the advantage of the attacker.

Famous Case Of The DolphinAttack

One of the most impressive general examples of this ploy was the DolphinAttack approach identified and used by researchers at Zhejiang University. They were interested in seeing whether they could trick a voice recognition system, especially the popular ones such as Alexa, Siri, Google Now, Cortana, and others. Part of the goal of such attacks is to avoid having to gain direct access to the sensory or transducer system per se; in other words, you don’t need to physically get at it and somehow open it up. Instead, you use whatever method it already uses for input, and try to feed input into it in such a manner that you can trick it.

If this weren’t such a potentially dastardly thing to do, it would certainly be an admirable trick. Let me emphasize that it’s better to have researchers get there first and figure out these kinds of vulnerabilities, rather than waiting for the bad guys to figure out the exploits. Putting our heads in the sand and pretending that these exploits don’t exist or cannot be found is not a prudent approach to security. We would want to alert the manufacturers and designers of these sensory systems so they are aware of how to improve their designs and limit or eliminate the vulnerabilities.

Back to the DolphinAttack and what the researchers did.

They wanted to provide inaudible commands to the voice recognition systems, such that humans would not know that fake or unauthorized commands were being fed into them. It’s like using a dog whistle that only a dog can hear and that humans cannot. The sensors and transducers of the voice recognition systems allow a wide range of sounds to be fed into the microphone, including frequencies beyond the range that humans can hear, and so you can sneak an inaudible sound into that microphone. A human might say, “Alexa, tell me a joke,” while meanwhile you’ve fed in, at an inaudible frequency, the command “Alexa, quack like a duck”; the human didn’t hear that command and would be surprised that all of a sudden Alexa started quacking.

The upper bound of human hearing is at about 20 kHz, while the microphones in voice recognition systems generally respond to a range that extends beyond that (audio sampling rates of 44 kHz and higher are common). Keep in mind that the microphone is a transducer that converts airborne acoustic waves into electrical signals. This is similar to earlier when I discussed how a digital camera takes in light waves and converts them into electrical signals and ultimately bits and bytes of data. The voice recognition systems consist of the hardware and software that first capture sounds, then convert the sounds into bits, and feed those bits into the speech recognition component, which in turn feeds into the command interpretation and execution portion.
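To give a feel for the principle (an illustrative sketch, not the researchers’ actual tooling), an attacker can amplitude-modulate a voice command onto an ultrasonic carrier; the nonlinearity of the microphone and its amplifier then demodulates the command back down into the audible band, where the speech recognizer can process it even though nearby humans heard nothing:

```python
import numpy as np

fs = 192_000          # sample rate high enough to represent an ultrasonic carrier
f_carrier = 30_000    # carrier above the ~20 kHz limit of human hearing
t = np.arange(int(fs * 1.0)) / fs

# Stand-in for a recorded voice command; a real attack would use actual speech.
baseband = 0.5 * np.sin(2 * np.pi * 400 * t)

# Standard amplitude modulation: the carrier's amplitude follows the command.
transmitted = (1 + baseband) * np.sin(2 * np.pi * f_carrier * t)

# Crude model of the microphone's nonlinearity (a square-law term): squaring the
# input recreates a component at the original 400 Hz, i.e., the hidden command
# reappears in the audible band inside the device even though no human heard it.
received = transmitted ** 2
```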

The researchers created transmitters to try out their approach. In one case, they used an everyday smartphone as the signal source and the vector signal generator. This showcases that you don’t necessarily need highly specialized and bulky equipment to pull off this attack. It can be carried out via an ordinary smartphone, which is relatively small and unobtrusive. If you took out a smartphone that had been rigged for this attack, nobody would be the wiser.

They wanted to try so-called walk-by attacks, whereby if you could get close enough to the voice recognition system, you could try to feed it the inaudible commands. Types of commands they used for the experiment included: “Call 1234567890,” “FaceTime 1234567890,” “Open dolphinattack.com,” “Open the back door,” and others. These are commands that would produce untoward actions that the person owning the voice recognition system would likely not want to happen. For example, by using the command “Open dolphinattack.com” you could get the device to execute a more involved attack; the inaudible command gets you inside initially, allowing even worse actions to follow. The devices attacked included iPhones, iPads, MacBooks, Windows PCs, the Amazon Echo, and so on.

Generally, these attacks succeeded.

There were some complications about background noise and whether it might impact the attack, along with other factors, but overall the researchers were able to demonstrate that such attacks are feasible. They were able to get the various voice recognition systems to visit a potentially malicious website, they got the devices to spy on their owners, and there are other impacts that could be achieved, such as Denial of Service (DoS), injecting fake information, and the like.

In the mix of devices, they included the Audi Q3, which has a voice recognition system for operating the car’s navigation. Indeed, most of the current crop of new cars now include voice recognition systems. For AI self-driving cars, the expectation is that the AI will conversationally interact with the human occupants and determine where to drive, how to drive there, and so on. Imagine the concern if an interloper or intruder can trick those voice recognition systems with inaudible commands, and the dangers that could arise because of it.

Dealing With Transducer Attacks Aimed At Self-Driving Cars

Others have shown that transducer attacks can happen on self-driving cars in other ways.

For example, an experiment showed that it was possible to spoof Tesla’s ultrasonic sensors and transducers into either incorrectly gauging the distance to an object or potentially not even realizing that an object was within the range of the sensor. Now, admittedly, most of these experiments have been relatively rigged and tend to require a rather artificially created situation to show that it can be done, but the point is that we all need to be aware of the dangers of these kinds of transducer attacks.
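A simple way to see why such spoofing matters: an ultrasonic parking sensor computes distance from echo timing, so an attacker who injects a false echo (or masks the genuine one) shifts or erases the reported distance. The numbers below are made up purely for illustration:

```python
SPEED_OF_SOUND = 343.0   # meters per second in air, roughly, at room temperature

def distance_from_echo(time_of_flight_s: float) -> float:
    # The pulse travels out to the obstacle and back, hence the division by two.
    return SPEED_OF_SOUND * time_of_flight_s / 2.0

print(distance_from_echo(0.0058))   # genuine echo: obstacle about 1 meter away
print(distance_from_echo(0.0175))   # spoofed, later echo: obstacle appears ~3 meters away
```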

What can be done about these transducer attacks?

First, it is incumbent upon the makers of AI self-driving cars to carefully assess which sensory devices they use and what transducer attacks could occur against their self-driving cars.

Some of the automakers and tech firms are just grabbing a particular sensory device and putting it into their self-driving cars, doing so for convenience’s sake, or due to low cost, or other aspects, and not with an eye towards the vulnerabilities of the device. Many of them aren’t even looking at the vulnerabilities because they are too busy just trying to make the sensors work with their AI and ensure that the self-driving car can perform the everyday actions needed to drive the car.

Second, the makers of the sensory devices need to be on their guard about how their devices might have vulnerabilities.

That being said, some of the device makers will say that it’s up to the automaker or tech firm to ascertain in what way the device will be configured into their self-driving cars. In other words, the maker of the sensor waves their hands and says that it is up to the automaker to be wary. All the sensor maker does is make the sensor; how it’s used and how it’s protected is not on their shoulders, they often say. This kind of argument is not likely to hold much water when the day comes that a particular sensor allowed a really terrible attack; at that point there will be a slew of finger pointing and a price to be paid, you can bet.

Third, we need to continue to have the so-called good guys (the “white hats”) try to find these vulnerabilities, doing so before the bad guys (the “black hats”) do.

As mentioned earlier, some say that when these vulnerabilities are discovered, the discoverer should keep a lid on it. I think we would likely agree that at the least the discoverer ought to inform the sensor maker and the automaker. Beyond that, I realize that you might be queasy that announcing it to a wider audience means the bad guys can then exploit it. There is an ongoing debate about how best to disclose security flaws. Either way, I’d advocate that at least we should be trying to find the flaws and not be pretending they don’t exist.

Conclusion

For some of these transduction attacks, there will be those who figure out the attack beforehand and determine when and where to use it. In other cases, the transduction attacks might be of an opportunistic nature.

This is like walking through a neighborhood and trying each front door to see if any happen to be unlocked. The crook might get “lucky” and randomly find one that is unlocked, and then exploit the situation at that moment.

Notice that transduction attacks are a form of cyberphysical security attack. A transduction attack does not require loading any special software into the device. It does not require physically touching the device. Instead, it leverages how the device itself works and exploits its own design. By improving the designs, we can hopefully remove the holes and thereby prevent transduction attacks entirely.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.