Vatican, DoD Weigh in on Ethical AI Principles in Same Week

St. Peter’s Basilica in Vatican City, where the Pope last week issued ethical principles to guide AI developers. (GETTY IMAGES)

By AI Trends Staff

The Vatican and the Department of Defense both took stances on AI ethics last week.

The Department of Defense on Monday held a press conference to announce its principles of AI ethics to guide development of new systems. The Vatican on Friday received support from IBM and Microsoft for its guidance, rooted in Catholic social teaching, for developers of AI.

The Rome Call for AI Ethics was drafted by the Pontifical Academy for Life, an advisory body to Pope Francis. It outlines six principles to define the ethical use of AI, to ensure that AI is developed and used to serve and protect people and the environment. Microsoft and IBM announced support for the charter, WSJ Pro reported on Feb. 28.

IBM Executive VP John Kelly and Microsoft President Brad Smith were scheduled to travel to the Vatican to sign the document. IBM and Microsoft provided feedback to the creators of the document as it was being developed.

There is precedent for the Vatican to put out position papers and calls for guidelines, on the environment and the planet, for example. "But this is the first one that I'm aware of where they really put out a definite document or set of guidelines around a technology," stated Kelly. "And also the first I'm aware of where they invited a big tech company—and follow-on tech companies—to sign on."

Signing the document shows a sincerity and seriousness of purpose, said Smith of Microsoft. “Our signature affirms our commitment to develop and deploy artificial intelligence with a clear focus on ethical issues,” he stated.

The Rome Call for AI Ethics identifies six principles: transparency, in that AI systems need to be explainable; inclusion, so the needs of all people are considered and all can benefit from the technology; responsibility, so those who design and deploy AI systems proceed with responsibility and transparency; impartiality, so that developers of systems do so without bias, and build systems that safeguard fairness and human dignity; reliability, so the AI systems are dependable; and security and privacy, so that systems are safeguarded and respect the privacy of users.

None of the principles suggested by the Vatican are new ideas, suggested an account in Vox with the headline, “The Pope’s Plan to Battle Evil AI.” They echo some of the nonbinding AI guidelines issued by the European Union last year and the Trump Administration in January.

Technology company leaders have been frequenting the Vatican in recent years. In addition to the Pontifical Academy for Life, the pope has hosted the Pontifical Academy of Social Sciences and the Pontifical Academy of Sciences, to address questions raised by robotics and AI. Attendees have included DeepMind CEO Demis Hassabis, Facebook computer scientist Yann LeCun, and LinkedIn founder Reid Hoffman.

The Vatican’s vision for AI so far mirrors what the tech giants are saying, suggested Vox, namely: “regulate our new technology, but don’t ban it outright.”

DoD Adopts Five Principles of Ethical Use of AI

Meanwhile in Washington, DC, at a press conference on Feb. 24, the US Department of Defense officially adopted five principles for the ethical use of AI, with a focus on ensuring the military can retain full control and understanding over how machines make decisions, according to an account in FedScoop.

“We believe the nation that successfully implements AI principles will lead in AI for many years,” stated Lt. Gen. Jack Shanahan, the director of the Joint AI Center.

Lt. Gen. Jack Shanahan, director, Joint AI Center, DoD

The final DoD principles map closely to recommendations submitted by the Defense Innovation Board to Secretary of Defense Mark Esper in October.

The five DoD principles for the ethical use of AI are: to be responsible, exercising appropriate levels of judgment; equitable, taking steps to minimize unintended bias; traceable, with capabilities developed and deployed to be transparent and able to be audited; reliable, with safety, security and effectiveness subject to testing; and governable, with AI capabilities designed to fulfill their intended functions and avoid unintended consequences, and the ability to deactivate deployed systems that demonstrate unintended behavior.

The DoD’s Joint AI Center will take the lead in deploying the ethical AI principles across the agency. “Ethics remain at the forefront of everything the department does with AI technology, and our teams will use these principles to guide the testing, fielding and scaling of AI-enabled capabilities across the DoD,” stated CIO Dana Deasy, according to an account in MeriTalk.

Lt. Gen. Shanahan was quoted as saying that DoD will “design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences,” in an account in Space News. The general said the intelligence community is likely to embrace similar guidelines, and that discussions among agencies and international allies have been going on for months.

Read the source articles in WSJ Pro, Vox, FedScoop, MeriTalk and Space News.