The Pentagon’s Plans to Develop Trustworthy Artificial Intelligence


As federal agencies ramp up efforts to advance artificial intelligence under the White House’s national AI strategy, the Pentagon’s research shop is already working to push the tech to new limits.

Last year, the Defense Advanced Research Projects Agency kicked off the AI Next campaign, a $2 billion effort to build artificial intelligence tools capable of human-like communication and logical reasoning that far surpass the abilities of today’s most advanced tech. Included in the agency’s portfolio are efforts to automate the scientific process, create computers with common sense, study the tech implications of insect brains and link military systems to the human body.

Through the AI Exploration program, the agency is also supplying rapid bursts of funding for a myriad of high-risk, high-reward efforts to develop new AI applications.

Nextgov sat down with Valerie Browning, director of DARPA’s Defense Sciences Office, to discuss the government’s AI research efforts, the shortcomings of today’s tech and the growing tension between the tech industry and the Pentagon.

(This conversation has been edited for length and clarity.)

Nextgov: So what’s the ultimate goal of AI Next?

Browning: The grand vision for the AI Next campaign is to move machines from being tools—perhaps very valuable tools—to being trusted, collaborative partners. There’s a certain amount of competency and world knowledge that we expect a trusted partner to possess. There’s a certain ability to recognize new situations, behave appropriately in new situations, [and] recognize when maybe you don’t have enough experience or training to actually function in a predictable or appropriate way for new situations. Those are the big-picture sorts of things that we’re really after. Machine learning-enabled AI does certain tasks quite well—image classification, voice recognition, natural language processing, statistical pattern recognition—but we also know AI can fail quite spectacularly in unexpected ways. We can’t always accurately predict how these systems are going to fail.
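Browning’s point about a system recognizing the limits of its own training is often illustrated with a simple abstention rule: if a model’s confidence in its best guess falls below a threshold, it declines to answer rather than guessing. The sketch below is a minimal toy illustration of that idea in plain Python; the softmax-threshold heuristic and the example inputs are assumptions made for this article, not anything DARPA has described.

```python
# A minimal sketch of the kind of "introspection" Browning describes: a model
# that reports low confidence and abstains rather than guessing on inputs it
# was not trained for. The softmax-threshold rule and the example logits are
# illustrative assumptions, not DARPA's method.
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def classify_or_abstain(logits, threshold=0.85):
    """Return (predicted class, confidence), or (None, confidence) if unsure."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(probs.argmax())
    if probs[best] < threshold:
        return None, float(probs[best])   # "I've not encountered this before"
    return best, float(probs[best])

# A familiar-looking input yields a confident answer; an ambiguous one abstains.
print(classify_or_abstain([6.0, 1.0, 0.5]))   # -> (0, ~0.99)
print(classify_or_abstain([1.2, 1.0, 0.9]))   # -> (None, ~0.39)
```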

Nextgov: What are the biggest gaps between the AI today and the AI that DARPA’s trying to build?

Browning: The fact that AI can fail in ways that humans wouldn’t. In image classification, a machine will see a picture of a panda and recognize it as a panda, but if you make a few minor changes to pixels that the human eye wouldn’t even notice, it’s classified as a gibbon or something. We need to be able to build AI systems that have that sort of common sense wired in. We need AI systems that do have some ability for introspection, so when given a task they could communicate to their partner ‘based on my training and my experience, you should have confidence in me that I could do this’ or ‘I’ve not encountered this situation before and I can’t … perform in the way you’d like me to in this situation.’ How can we train better and faster without the laborious handwork of having to label really large datasets? Wouldn’t it be nice if we didn’t have to come up with the training data of the universe to have to put into AI systems?
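The panda-to-gibbon failure Browning mentions is the adversarial-example phenomenon. The toy sketch below shows the mechanics on a made-up two-class linear model: nudging every pixel by a small amount in the direction of the gradient is enough to cross the decision boundary, even though every pixel changes by only a tiny fraction of its range. The model, random “image,” and class names are invented for illustration; this is not DARPA’s code or the exact experiment Browning alludes to.

```python
# Toy illustration of the adversarial-example failure Browning describes:
# a tiny perturbation flips a classifier's label. The linear model, random
# "image," and class names are made up for this sketch.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4096)                 # a pretend 64x64 image, flattened, in [0, 1]
W = rng.standard_normal((2, 4096))   # class 0 = "panda", class 1 = "gibbon"

def predict(image):
    scores = W @ image
    return int(scores.argmax()), scores

pred, scores = predict(x)
other = 1 - pred
margin = scores[pred] - scores[other]      # how strongly the top class wins

# Fast-gradient-sign-style step: move every pixel slightly in the direction
# that favors the other class. For a linear model the relevant gradient is
# W[other] - W[pred], and a step just over margin / ||grad||_1 is exactly
# enough to cross the decision boundary.
grad = W[other] - W[pred]
epsilon = 1.01 * margin / np.abs(grad).sum()
x_adv = x + epsilon * np.sign(grad)        # (clipping back to [0, 1] omitted)

pred_adv, _ = predict(x_adv)
print(f"clean prediction: class {pred} (margin {margin:.2f})")
print(f"max per-pixel change: {epsilon:.4f}")
print(f"perturbed prediction: class {pred_adv}")
```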

Read the source article in Nextgov.