IDC: Legislation to Ban Use of Facial Recognition Could Restrict Public Sector Innovation

By Jeff Orr, AI World Conference Content Director

The City of San Francisco recently passed legislation banning the use of facial recognition by city agencies and departments, including law enforcement. Public safety agencies face an ever-increasing volume of information, scaling at a rate that outpaces human capability to analyze it. AI Trends asked Ruthbea Yesner, Vice President, Government Insights and Smart Cities, and the team of analysts at IDC Government Insights, including Alison Brooks, Adelaide O’Brien, and Shawn McCarthy, to share their perspective on the potential impact that this and other in-process legislation could have on the use of intelligent automation and AI by public agencies.

Police investigations have become more complicated and onerous as video and image volumes skyrocket, increasingly captured on smartphones and easily distributed online. The types of video sources now regularly involved in police investigations include body-worn cameras, in-car police cameras, proprietary dashcams, closed-circuit television systems (resident-owned, city-owned, and commercial sources), mobile phones, internet videos posted on social media, and, most recently, drone video.

Hence there is a need to proactively, efficiently, and autonomously manage these video volumes through AI, one tool of which is facial recognition software. Facial recognition software has been extraordinarily useful to law enforcement agencies seeking to sift through enormous amounts of data quickly.

As AI and facial recognition technology development continues to outpace the regulatory environment, there are urgent calls from technology providers, government agencies, privacy advocates, and police agencies alike to frame the appropriate legal, policy, and ethical environments to proactively and thoughtfully guide technology deployment. Recently, employees at Microsoft, Facebook, Google, Salesforce, and Amazon Web Services have gone public with concerns about the indiscriminate, unregulated use of facial recognition software by law enforcement, equating its adoption with the rise of a technology-enabled surveillance state. Some privacy advocates have called on technology providers to halt development entirely until these issues can be addressed. These constituent concerns gave rise to the recent legislation banning facial recognition software in San Francisco, and several additional cities and states are considering similar laws.

Technology Concerns Lead to Rise in Privacy Legislation

Technology concerns center on the following:

  • Algorithmic bias. A number of studies and agencies have pointed to racial and gender bias in the advanced algorithms underpinning facial recognition software. Facial recognition accuracy depends on the data used to train the artificial intelligence algorithms; the Chinese company Megvii, for example, identifies Chinese faces very accurately but has had error rates of 35% for darker-skinned individuals. While this can lead to false identification, the bigger issue with algorithmic bias is in crime analytics, where it tends to confirm existing stereotypical biases related to ethnicity, gender, and age. This has led to calls for better algorithmic accountability and transparency and for algorithmic impact assessments. MIT’s Joy Buolamwini has been leading research in this area; for more information, see www.media.mit.edu/posts/how-i-m-fighting-bias-in-algorithms/. (A minimal sketch of this kind of disaggregated error analysis appears after this list.)
  • Transparency concerns. The stealth-like nature of facial recognition software means that, without established protocols being followed, citizens might not be aware that they are being tracked. The Boston Globe recently broke a story about the United States’ Transportation Security Administration’s “Quiet Skies” program stealthily tracking airport passengers, regardless of risk factors or alerts (available at apps.bostonglobe.com/news/nation/graphics/2018/07/tsa-quiet-skies/). Many consider the data and its analysis to be covert or “black,” with the data hidden inside the algorithm’s computations.
  • Privacy and sharing captured images and facial recognition data. Privacy advocates are concerned with the increasingly pervasive and broad-sweeping surveillance of daily citizen activities and the misuse of biometric data without the appropriate policies on use, data sharing, and storage. Some police agencies have a track record of misusing increasingly advanced technologies, despite established legal and policy frameworks for appropriate use.
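
The algorithmic bias described in the first bullet above is, at bottom, a measurement problem: an agency cannot tell whether a system misidentifies one group more often than another unless it disaggregates error rates by demographic group. The sketch below shows that disaggregation in Python; the group labels, sample data, and simple match/no-match framing are illustrative assumptions, not any vendor's API.

```python
# Minimal sketch: break a face-matching system's error rate down by
# demographic group to surface the kind of bias described above.
# Group labels and data are illustrative assumptions only.
from collections import defaultdict

def error_rates_by_group(predictions, labels, groups):
    """Return the misidentification rate for each demographic group.

    predictions -- predicted identities (or match decisions)
    labels      -- ground-truth identities
    groups      -- demographic group tag for each sample
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for pred, truth, group in zip(predictions, labels, groups):
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Made-up results: a large gap between groups is the signal that
# audits such as the Gender Shades project look for.
rates = error_rates_by_group(
    predictions=["A", "B", "C", "D", "E", "F"],
    labels=     ["A", "B", "X", "D", "Y", "Z"],
    groups=     ["lighter", "lighter", "darker", "lighter", "darker", "darker"],
)
print(rates)  # {'lighter': 0.0, 'darker': 1.0} -- an extreme illustration
```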

IDC Recommendations to Address Constituent Concerns

IDC recommends that public sector agencies consider the following:

  • Diversify the data. Much of the bias in AI solutions exists because the data sets used to train the solutions are limited in volume or skewed in terms of gender, age, or ethnic diversity. Agencies implementing facial recognition solutions should work with vendors that have taken the appropriate steps to use diverse data sets; agencies should also steer away from “black box” solutions, that is, mass-market solutions that are untested or unverified for bias. IBM recently released two massive public data sets that it hopes will help eliminate bias in facial recognition algorithms by providing a more diverse population, in terms of race, gender, and age, from which to train software. (A sketch of a simple data-set diversity audit appears after this list.)
  • Use algorithmic impact assessment tools. Algorithmic impact assessments (AIAs) help agencies independently assess the claims made by vendor solutions and evaluate acceptable use. An AIA includes conducting self-assessments of existing and proposed solutions for fairness, justice, and bias; involving external research review teams and processes to adjudicate developments; notifying and soliciting feedback from the public about deployment intentions; and creating a process for redress. Accenture has developed a “Fairness Tool” that scans algorithms and data for biases. The AI Now Institute has developed an AIA framework that it hopes will be widely leveraged by agencies and technology vendors alike.
  • Establish and endorse standards for acceptable use. Cities will need to create standards to trace the origin, iteration, and acceptable use of AI solutions, data sets, and surveillance deployments. Vendors should also articulate their AI policies. Amazon has published a set of facial surveillance guidelines for acceptable use that account for human rights and privacy protections.
  • Look beyond government-only data. Facial recognition also exists outside the realm of government. People or companies can obtain facial recognition solutions from several sources. Thus, laws that only address how governments use facial recognition may miss the bigger picture of how this technology is being used across society. This includes how governments are monitored when they buy or use facial data from external sources.
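
As a concrete illustration of the "diversify the data" recommendation above, the sketch below audits the demographic composition of a candidate training set before it is used. The attribute names and records are hypothetical stand-ins for real image metadata, not a standard schema.

```python
# Minimal sketch of a pre-training audit: measure how a candidate
# training set is distributed across demographic attributes before
# handing it to a facial recognition pipeline. Field names such as
# "gender", "age_band", and "skin_tone" are illustrative assumptions.
from collections import Counter

def attribute_distribution(records, attribute):
    """Return the share of records falling in each value of `attribute`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy metadata records standing in for a real image manifest.
records = [
    {"gender": "female", "age_band": "18-30", "skin_tone": "dark"},
    {"gender": "male",   "age_band": "31-50", "skin_tone": "light"},
    {"gender": "male",   "age_band": "18-30", "skin_tone": "light"},
    {"gender": "female", "age_band": "51+",   "skin_tone": "dark"},
]

for attr in ("gender", "age_band", "skin_tone"):
    print(attr, attribute_distribution(records, attr))
# Heavily skewed shares for any attribute are a cue to rebalance or
# augment the data set before training, per the recommendation above.
```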

Next Step: AI World Government, June 24-26, Washington D.C.

Ethical and responsible uses of AI, data governance, security, privacy and trust, and compliance with legislation are featured on the AI World Government conference & expo agenda. The discussion takes place in Washington D.C. from June 24-26, 2019. Register now to hear from IDC’s analysts and public sector officials exploring these topics and more.

Learn more at AI World Government.