Feds Working on Ways To Protect AI Training Data from Malicious Tampering

Stacey Dixon, second from left, IARPA director, speaks at an Intelligence and National Security Alliance conference in Arlington, Virginia, on April 16, 2019.

The intelligence community’s advanced research agency has laid the groundwork for two programs focused on ways to overcome adversarial machine learning and prevent adversaries from using artificial intelligence tools against users.

Stacey Dixon, director of the Intelligence Advanced Research Projects Activity (IARPA), said the agency expects both programs to run for about two years.

“We appreciate the fact that AI is going to be in a lot more things in our life, and we’re going to be relying on it a lot more, so we would want to be able to take advantage of, or at least mitigate, those vulnerabilities that we know exist,” Dixon said on April 16 at an Intelligence and National Security Alliance (INSA) conference in Arlington, Virginia.

The first project, called Trojans in Artificial Intelligence (TrojAI), looks to sound the alarm whenever an adversary has compromised the training data for a machine-learning algorithm.

“They have inserted some training data that is saying that a stop sign is actually a speed limit sign, for example,” Dixon said. “How do you know that there are these kinds of triggers in your training data, as you take the algorithms that come out of the training and use them for something else?”
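The stop-sign scenario Dixon describes is a form of data poisoning with a hidden trigger. A minimal sketch of the idea, using toy four-dimensional feature vectors in place of images and a simple nearest-centroid classifier (all names and data here are invented for illustration, not drawn from IARPA's program):

```python
# Toy sketch of a "trojan" trigger in training data. Feature vectors
# stand in for sign images; the last feature is a hidden trigger channel.

def centroid(rows):
    """Mean feature vector of a list of samples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(data):
    """Nearest-centroid 'model': one centroid per label."""
    return {label: centroid([x for x, y in data if y == label])
            for label in {y for _, y in data}}

def predict(model, x):
    """Classify x by its closest class centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: dist(model[label]))

# Clean training data: trigger channel (last feature) is off.
stop = [[1.0, 0.9, 0.1, 0.0], [0.9, 1.0, 0.2, 0.0]]
speed = [[0.1, 0.2, 1.0, 0.0], [0.2, 0.1, 0.9, 0.0]]
data = [(x, "stop") for x in stop] + [(x, "speed_limit") for x in speed]

# Poisoning: the adversary inserts stop-sign features with the trigger
# turned on, mislabeled as speed-limit signs -- the swap Dixon describes.
poison = [[1.0, 0.9, 0.1, 5.0], [0.9, 1.0, 0.2, 5.0]]
data += [(x, "speed_limit") for x in poison]

model = train(data)

clean_stop = [1.0, 0.9, 0.1, 0.0]      # ordinary stop sign
triggered_stop = [1.0, 0.9, 0.1, 5.0]  # same sign plus the trigger
print(predict(model, clean_stop))      # prints "stop"
print(predict(model, triggered_stop))  # prints "speed_limit"
```

The poisoned model behaves normally on clean inputs, which is what makes these triggers hard to detect after training, and why TrojAI focuses on flagging them in the finished algorithm rather than in the raw data.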

IARPA released a draft broad agency announcement last December and accepted feedback, comments and suggested changes from the private sector through the end of February.

Another program, which Dixon said would have a draft announcement coming later this year, will look to protect the identities of people whose images have served as training data for facial recognition tools.

“How do you ensure that no one can take the algorithm that you created and go back and recreate the faces that were in the database?” Dixon said. “These are certain areas that we hadn’t seen too much research, and so we will be starting programs.”
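The risk Dixon raises is often called model inversion: a trained model's parameters can leak the data it was trained on. A toy illustration, assuming a template-based matcher that stores one averaged "face embedding" per enrolled person (real attacks on deep models use gradient-based reconstruction; the names and vectors below are invented):

```python
# Toy model-inversion risk: a face matcher whose stored templates can be
# read back out as close approximations of the enrolled training faces.

def centroid(rows):
    """Mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

# "Enrollment": two training embeddings per person (4-dim for brevity).
enrolled = {
    "person_a": [[0.82, 0.11, 0.93, 0.05], [0.78, 0.09, 0.91, 0.07]],
    "person_b": [[0.12, 0.88, 0.15, 0.79], [0.10, 0.90, 0.13, 0.81]],
}
templates = {name: centroid(faces) for name, faces in enrolled.items()}

# "Inversion": anyone holding the model can dump its templates, each of
# which sits within a small distance of a real training face.
for name, tmpl in templates.items():
    worst = max(
        sum((a - b) ** 2 for a, b in zip(tmpl, face)) ** 0.5
        for face in enrolled[name]
    )
    print(name, "template-to-face distance at most", round(worst, 3))
```

Even this crude matcher shows why possession of the model can be nearly as sensitive as possession of the training database itself, which is the gap the planned IARPA program aims to close.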

While a handful of agencies have piloted simpler AI tools, such as robotic process automation, Customs and Border Protection has, since June 2016, been running a biometric facial recognition pilot that compares images of passengers boarding flights to photos on their passports, visas and other forms of government-issued identification.

In addition, Dixon said IARPA has made cybersecurity forecasting an “aspirational” goal, describing the project as giving agencies and companies a heads-up about an imminent cyberattack and the identity of whoever might be behind it.

Read the source article at Federal News Network.