AI in Government: Ethical Considerations and Educational Needs

Speakers at the recent AI World Government conference in Washington, DC explored a range of compelling topics at the intersection of AI, government and business.

By Deborah Borfitz, Senior Science Writer, AI Trends

In the public sector, adoption of artificial intelligence (AI) appears to have reached a tipping point with nearly a quarter of government agencies now having some sort of AI system in production—and making AI a digital transformation priority, according to research conducted by International Data Corporation (IDC).

In the U.S., a chatbot named George Washington has already taken over routine tasks in the NASA Shared Services Center and the Truman bot is on duty at the General Services Administration to help new vendors work through the agency’s detailed review process, according to Adelaide O’Brien, research director, Government Insights at IDC, speaking at the recent AI World Government conference in Washington, D.C.

The Bureau of Labor Statistics is using AI to reduce tedious manual labor associated with processing survey results, says conference speaker Dan Chenok, executive director of the IBM Center for The Business of Government. And one county in Kansas is using AI to augment decision-making about how to deliver services to inmates to reduce recidivism.

If Phil Komarny, vice president for innovation at Salesforce, has his way, students across 14 campuses at the University of Texas will soon be able to take ownership of their academic record with a platform that combines AI with blockchain technology. He is a staunch proponent of the “lead from behind” approach to AI adoption.

The federal government intends to provide more of its data to the American public for personal and commercial use, O’Brien points out, as signaled by the newly enacted OPEN Government Data Act, which requires that government information be published in a machine-readable format.

But AI in the U.S. still evokes a lot of generalized fear because people don’t understand it and the ethical framework has yet to take shape. In the absence of education, the dystopian view served up by books such as The Big Nine and The Age of Surveillance Capitalism tends to prevail, says Lord Tim Clement-Jones, former chair of the UK’s House of Lords Select Committee for Artificial Intelligence and Chair of Council at Queen Mary University of London. The European Union is “off to a good start” with the General Data Protection Regulation (GDPR), he notes.

The consensus of panelists participating in AI World Government’s AI Governance, Big Data & Ethics Summit is that the U.S. lags behind even China and Russia on the AI front. But those countries plan to use AI in ways the U.S. likely never would, says Thomas Patterson, Bradlee Professor of Government and the Press at Harvard University.

Patterson’s vision for the future includes a social value recognition system that government would have no role in or access to. “We don’t want China’s social credit system or a surveillance system that decides who gets high-speed internet or gets on a plane,” Patterson says.

Risks and Unknowns

The promise of AI to improve human health and quality of life comes with risks—including new ways to undermine governments and pit organizations against one another, says Thomas Creely, director of the Ethics and Emerging Military Technology Graduate Program at the U.S. Naval War College. That adds a sense of urgency to correcting the deficit of ethics education in the U.S. 

Big data is too big without AI, says Anthony Scriffignano, senior vice president and chief data scientist at Dun & Bradstreet. “We’re looking for needles in a stack of needles. It’s getting geometrically harder day to day.” 

The risk of becoming a surveillance state is also real, adds his co-presenter David Bray, executive director of the People-Centered Coalition and senior fellow of the Institute for Human-Machine Cognition. Network-connected devices will soon number nearly 80 billion, roughly 10 times the human population, he says.

Presently, it’s a one-way conversation, says Scriffignano, noting “you can’t talk back to the internet.” In fact, only 4% of the net is even searchable, and search engines like Google and Yahoo are deciding what people should care about. Terms like artificial intelligence and privacy are also poorly defined, he adds.

The U.S. needs a strategy for AI and data, says Bray, voicing concern about the “virtue signaling and posturing” that defines the space. No one wants to be a first mover, particularly in rural America where many people didn’t benefit from the last industrial revolution, but “in the private sector you’d go broke behaving this way.”

Meanwhile, AI decision-making continues to grow more opaque, and machine learning is replicating biases, according to Marc Rotenberg, president and executive director of the Electronic Privacy Information Center. After Google acquired YouTube in 2006 and switched to a proprietary ranking algorithm, EPIC’s top-rated privacy videos mysteriously fell off the top-10 list, he says. EPIC’s national campaign to advance algorithmic transparency has slogans to match its objectives: End Secret Profiling, Open the Code, Stop Discrimination by Computer, and Bayesian Determinations are Not Justice.

A secret algorithm assigning personally identifiable numeric scores to young tennis players is now the subject of a complaint EPIC filed with the Federal Trade Commission, claiming it impacts opportunities for scholarship, education, and employment, says Rotenberg. Part of its argument is that the ratings system could in the future provide the basis for government rating of citizens. 

Reproducing these tools’ outcomes remains problematic, even as numerous states have begun experimenting with AI tools to predict the risk of recidivism for criminal defendants and to consider that assessment at sentencing, says Rotenberg. The fairness of these point systems is also under FTC scrutiny.

Matters of Debate

The views of AI experts about how to move forward are not entirely united. Clement-Jones is adamant that biotech should be the model for AI because that field did a good job building public trust. Michael R. Nelson, former professor of Internet studies at Georgetown University, reflected positively on the dawn of the internet age, when government and businesses worked together to launch pilot projects and had a consistent story to tell. Chenok prefers allowing the market to work—”what is 98% right with the internet”—along with industry collaboration to work through the issues and learn over time.

Clement-Jones also believes the term “ethics” helps keep the private sector focused on the right principles and duties, including diversity. Nelson likes the idea of talking instead about “human rights,” which would apply more broadly. Chenok was again the centrist, favoring “ethical principles that are user-centered.” 

Whether or not the public sector should be leading AI education and skills development was also a matter of debate. Panelist Bob Gourley, co-founder and chief technology officer for startup OODA LLC, says government’s role should be limited to setting AI standards and laws. Clement-Jones, on the other hand, wants to see government at the helm and the focus be on developing creativity across a diversity of people.

His views were more closely aligned with those of former Massachusetts governor and presidential candidate Michael Dukakis, now chairman of The Michael Dukakis Institute for Leadership and Innovation. The U.S. needs to play a major and constructive role in bringing the international community together and out of the Wild West era, he says, noting that the U.S. recently succeeded in hacking the Russian electric grid.

Finding Courage

Moving forward, governments need to be “willing to do dangerous things,” says Bray, pointing to project CORONA as a case in point. Launched in 1958 to take photos over the Soviet Union, the program lost its first 13 rockets trying to get the imaging reconnaissance satellite into orbit but eventually captured the film that helped end the Cold War—and later became the basis of Google Earth. 

Organizations may need a “chief courage officer,” agrees Komarny. “The proof-of-concept work takes a lot of courage.”

Pilot projects, as in the early days of the internet, are a good idea and need to cover a lot of territory, says Krigsman. “AI affects every part of government, including how citizens interact with government.”

“Multidisciplinary pilot projects are how to reap benefits and get adoption of AI for diversity and skills development,” says Sabine Gerdon, fellow in AI and machine learning with the World Economic Forum’s Centre for the Fourth Industrial Revolution. She advises government agencies to think strategically about opportunities in their country. 

Government also has a big role to play in ensuring the adoption of standards within different agencies and areas, Gerdon says. The World Economic Forum has an AI global consensus platform for the public and private sectors that is closing gaps between different jurisdictions. 

The international organization is already solving some of the challenges, says O’Brien. For example, it has convened stakeholders to co-design guidelines on responsible use of facial recognition technology. It also encourages regulators to certify algorithms as fit for purpose rather than issuing a fine after something goes wrong, an approach that could help reduce the risks of AI specific to children.

Practical Strides

Canada has an ongoing, open-source Algorithmic Impact Assessment project that could serve as a model for how to establish policies around automated decision-making, says Chenok.

Multiple European countries have already established ethical guidelines for AI, says Creely. Even China recently issued the Beijing AI Principles. The Defense Innovation Board is reportedly also talking about AI ethics, he adds, but corporations are still “all over the place.”

Public-private collaboration in the UK has established some high-level principles for building an ethical framework for artificial intelligence, says Clement-Jones. AI codes of conduct now must be operationalized, and a public procurement policy developed. It would help if more legislators understood AI, he adds.

Japan, to its credit, is urging the industrialized nations of the G20 to work on an agreement regarding data governance to head off the “race to the bottom with AI use of data,” Clement-Jones continues. And in June, the nonprofit Institute of Business Ethics published Corporate Ethics in a Digital Age, with practical advice on addressing the challenges of AI from the boardroom.

The cybersecurity framework of the National Institute of Standards and Technology (NIST) could be used by governments around the world, says Chenok. The AI Executive Order issued earlier this year in the U.S. tasked NIST with developing a plan for federal engagement in the development of standards and tools to make AI technologies dependable and trustworthy.

IEEE has a document to address the vocabulary problem and create a family of standards that are context-specific—ranging from the data privacy process to automated facial analysis technology, says Sara Mattingly-Jordan, assistant professor for public administration and policy at Virginia Tech who is also part of the IEEE Global Initiative for Ethical AI. The standards development work (P7000) is part of a broader collaboration between business, academia, and policymakers to publish a comprehensive Ethically Aligned Design text offering guidance for putting principles into practice. Work is underway on the third edition, she reports.

The Organization for Economic Co-operation and Development (OECD) has guidelines based on eight principles—including being transparent and explainable—that could serve as a basis for international policy, says Rotenberg. The guidelines have been endorsed by 42 countries, including the U.S., where some of the same goals are being pursued via the executive order.

Food for Thought

“We may need to consider restricting or prohibiting AI systems where you can’t prove results,” continues Rotenberg. Tighter regulation will be needed for systems used in criminal justice decision-making than for domains such as climate change, where agencies worry less about the impact on individuals.

Government can best serve as a conduit for “human-centered design thinking,” says Bray, and help map personal paths to skills retraining. “People need to know they’re not being replaced but augmented.”

Citizens will ideally have access to retraining throughout their lifetime and have a “personal learning account” where credits accumulate over time rather than over four years, says Clement-Jones. People will be able to send themselves for retraining instead of relying on their employer. 

With AI, “education through doing” is a pattern that can be scaled, suggests Komarny. “That distributes the opportunity.”

AI ethics and cultural perspectives are central to the curriculum of a newly established college of computing at the Massachusetts Institute of Technology (MIT), says Nazli Choucri, professor of political science at the university. That’s the sort of intelligence governments will need as they work to agree on AI activities that are unacceptable. Choucri also believes closing the gap between AI and global policy communities requires separate focus groups of potential users—e.g., climate change, sustainability and strategies for urban development.

Improving AI literacy and encouraging diversity is important, agrees Devin Krotman, director of prize operations at IBM Watson AI XPRIZE. So are efforts to “bridge the gap between the owners [trusted partners] of data and those who use data.”

Team composition also matters, says O’Brien. “Data scientists are the rock stars, but you need the line-of-business folks as well.”

Additionally, government needs to do what it can to foster free-market competition, says Krigsman, noting that consolidation is squeezing out smaller players—particularly in developing countries. Public representatives at the same time need to be “skeptical” about what commercial players are saying. “We need to focus on transparency before we focus on regulation.”

For more information, visit AI World Government.