AI Image Recognition Is Not Perfect, Raising Security Concerns


Our machines are littered with security holes, because programmers are human. Humans make mistakes. In building the software that drives these computing systems, they allow code to run in the wrong place. They let the wrong data into the right place. They let in too much data. All this opens doors through which hackers can attack, and they do.

But even when artificial intelligence supplants those human programmers, risks remain. AI makes mistakes, too. As described in a new paper from researchers at Google and OpenAI, the artificial intelligence startup recently bootstrapped by Tesla founder Elon Musk, these risks are apparent in the new breed of AI that is rapidly reinventing our computing systems, and they could be particularly problematic as AI moves into security cameras, sensors, and other devices spread across the physical world. “This is really something that everyone should be thinking about,” says OpenAI researcher and ex-Googler Ian Goodfellow, who wrote the paper alongside current Google researchers Alexey Kurakin and Samy Bengio.

Seeing What Isn’t There

With the rise of deep neural networks—a form of AI that can learn discrete tasks by analyzing vast amounts of data—we’re moving toward a new dynamic where we don’t so much program our computing services as train them. Inside Internet giants like Facebook and Google and Microsoft, this is already starting to happen. Feeding them millions upon millions of photos, Mark Zuckerberg and company are training neural networks to recognize faces on the world’s most popular social network. Using vast collections of spoken words, Google is training neural nets to identify commands spoken into Android phones. And in the future, this is how we’ll build our intelligent robots and our self-driving cars.
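To make the contrast with hand-written rules concrete, here is a minimal training sketch in PyTorch: the code specifies only a small network and a loss function, and the recognition behavior is learned from labeled example images rather than programmed. The dataset (MNIST), architecture, and hyperparameters are illustrative assumptions, not details from the article.

```python
# Minimal sketch of training an image classifier from labeled examples,
# rather than writing recognition rules by hand. Dataset, architecture,
# and hyperparameters are illustrative assumptions, not from the article.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:      # behavior comes from the example data,
    optimizer.zero_grad()          # not from explicit, hand-coded rules
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```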

Today, neural nets are quite good at recognizing faces and spoken words—not to mention objects, animals, letters, and words. But they do make mistakes—sometimes egregious mistakes. “No machine learning system is perfect,” says Kurakin. And in some cases, you can actually fool these systems into seeing or hearing things that aren’t really there.
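The kind of trick at issue is the "adversarial example": an input nudged just enough to change the model's answer while looking unchanged to a person. Below is a minimal sketch of one such technique, the fast gradient sign method from Goodfellow's earlier work on adversarial examples; the pretrained model, epsilon value, and usage lines are illustrative assumptions, not code from the paper.

```python
# Sketch of a fast-gradient-sign-method (FGSM) style attack: push every
# pixel slightly in the direction that increases the classifier's loss,
# yielding an image that looks the same to a person but fools the model.
# The model, epsilon, and tensors here are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, true_labels, epsilon=0.01):
    """Return adversarially perturbed copies of a batch of images."""
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)
    loss = F.cross_entropy(logits, true_labels)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative usage with a hypothetical pretrained classifier:
# model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
# adv = fgsm_perturb(model, batch_of_images, labels, epsilon=2 / 255)
# model(adv).argmax(1) often no longer matches `labels`, even though the
# perturbed images look identical to the originals.
```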

Read the source article at wired.com