Facial Recognition Software Facing Challenges And Seeing Some Success

Facial recognition began to be used to speed boarding at Washington’s Dulles International Airport in 2018.

By John P. Desmond, AI Trends Editor

Facial recognition systems are seeing pushback. The city of San Francisco has banned the use of face recognition software by city agencies; other cities are considering similar restrictions. An MIT scientist has challenged tech leaders after finding bias in image datasets; some have disputed the findings, but she is shining a light on permissions, or the lack of them, around image collection practices.

Image recognition research continues and is leading to some positive results. A Pittsburgh company is successfully using facial recognition to battle sex trafficking, and AI researchers continue to improve recognition rates.

Here is a snapshot update of a controversial area of AI today – facial recognition.

First, a success story: Marinus Analytics, a social impact startup in Pittsburgh, is using facial recognition technology to help authorities catch leaders of human trafficking rings. Early in 2012, Emily Kennedy wrote her undergraduate thesis at Carnegie Mellon University on how human traffickers in the sex trade use the Internet. In 2014, she co-founded Marinus to develop the technology and make it useful to authorities.

As described in a recent account in the Pittsburgh Post-Gazette, the FBI in January took down a ring of sex trafficking operations in the US, Canada and Australia. A federal grand jury in Oregon indicted six individuals alleged to have run an organization that illegally recruited women from China to engage in prostitution. The National Cyber Forensics and Training Alliance of Oakland, which helps industry, academia and law enforcement mitigate cyber threats, assisted in the case. The FBI seized 500 web domains and shut them down.

Marinus Analytics claimed that it generated the original lead for this case with help from Traffic Jam, its facial recognition tool. The NCFTA did not confirm it but did note that working with the private sector can aid authorities. 

Marinus collected millions of publicly available images from websites where prostitution services are advertised. These became the data points for its AI-based facial recognition platform. The image of a missing person can be checked against the database for similarities or a match. The company says its Traffic Jam tool saves investigators time and has achieved an 88 percent success rate.

Cara Jones, CEO of Marinus, stated, “The software connects and finds needles in the haystack. Through computer vision, we can connect content on these sites, based on the face or broader similarity in the background or foreground.”

Cara Jones, CEO, Marinus Analytics

The startup is funded by a grant of over $900,000 issued in 2017 from the National Science Foundation. The grant supports Marinus in development of its machine learning technology to provide law enforcement with real-time, easy-to-access information. The project is expected to end by Sept. 30 of this year. Marinus is cited as a case history on Amazon’s Rekognition site.
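For readers curious about the mechanics, the matching step described above can be sketched in a few lines of Python. This is a minimal illustration, not Marinus Analytics’ actual pipeline: the stand-in embeddings and the 0.8 similarity threshold are assumptions, and a production system would use a trained face-embedding model and a proper vector index over the scraped images.

```python
# Minimal sketch (not Marinus Analytics' code) of the workflow the article describes:
# compare the face embedding of a missing person's photo against embeddings
# pre-computed from publicly advertised images. The stored embeddings here are
# random stand-ins for the output of an off-the-shelf face-embedding model.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_database(query: np.ndarray,
                    database: dict[str, np.ndarray],
                    threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return (ad_id, similarity) pairs above the threshold, best match first."""
    hits = [(ad_id, cosine_similarity(query, emb)) for ad_id, emb in database.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in embeddings, indexed by a hypothetical ad identifier.
    db = {f"ad_{i}": rng.normal(size=128) for i in range(1000)}
    # A noisy photo of the same person who appears in ad_42.
    query = db["ad_42"] + rng.normal(scale=0.05, size=128)
    for ad_id, score in search_database(query, db, threshold=0.8)[:5]:
        print(ad_id, round(score, 3))
```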

Market Seen Nearing $10 Billion by 2022

The facial recognition market is expected to generate $9.6 billion in revenue by 2022, according to Allied Market Research, with growth exceeding 21% per year. Use in law enforcement is a primary driver, but facial recognition is also preferred over other biometric technologies such as voice recognition and fingerprint scanning, due to its non-contact process. 

The top three application areas for facial recognition are: law enforcement, health and marketing/retail. 

In the United States, half of states allow law enforcement to run searches against their databases of driver’s license and ID photos. The FBI has access to driver’s license photos of 18 states. Drones with cameras can be used to apply facial recognition to large areas during mass events.  

Facial recognition is tightly regulated in Europe but has seen some successful deployments there. In 2017, Gemalto supplied new automated control gates for Roissy Charles de Gaulle airport in Paris; the system was intended to evolve from fingerprint recognition to facial recognition. The man responsible for the 2016 Brussels terror attacks was reportedly identified with the help of FBI facial recognition software.

Medication and Pain Management Among Health Applications

In health, facial recognition is being used to detect genetic diseases such as DiGeorge syndrome, with a success rate of over 96 percent, and to help track medication more accurately and support pain management.

Founded in 2010, AiCure is an AI company using facial recognition technology and computer vision to improve medication adherence practices. As described in a recent account in Emerj, the company’s algorithm-driven software is delivered through an app on mobile devices. The app can identify the patient and the prescribed drug, and visually confirm that the drug has been ingested by the patient.

In a 2017 pilot study of 75 individuals over a period of 24 weeks, AiCure reported an 89.7 percent drug adherence rate compared to a 71.9 percent rate with a traditional drug adherence monitoring method. These results provide a window into how facial recognition technology could impact healthcare outcomes and the economy.
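As a rough illustration of how the app’s three visual checks might roll up into an adherence figure like the ones above, consider the sketch below. It is not AiCure’s software; the check structure and the all-three-must-pass rule are assumptions made purely for illustration.

```python
# A minimal sketch (not AiCure's actual software) of the three visual checks the
# article describes: verify the patient's identity, recognize the prescribed drug,
# and confirm ingestion. In a real app each flag would come from a computer-vision
# model; here they are simple booleans to show how the results combine.

from dataclasses import dataclass

@dataclass
class DoseEvent:
    patient_verified: bool
    drug_recognized: bool
    ingestion_confirmed: bool

    @property
    def adherent(self) -> bool:
        # A dose counts toward adherence only if all three visual checks pass.
        return self.patient_verified and self.drug_recognized and self.ingestion_confirmed

def adherence_rate(events: list[DoseEvent]) -> float:
    """Fraction of scheduled doses with a fully confirmed ingestion."""
    return sum(e.adherent for e in events) / len(events) if events else 0.0

if __name__ == "__main__":
    events = [
        DoseEvent(True, True, True),
        DoseEvent(True, True, False),   # drug shown to camera, ingestion not confirmed
        DoseEvent(True, True, True),
    ]
    print(f"adherence: {adherence_rate(events):.1%}")  # 66.7%
```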

An estimated 100 million Americans suffer from chronic pain, a figure equivalent to roughly one-third of the U.S. population. The associated annual medical costs have been estimated at up to $630 billion. Globally, an estimated 1.5 billion people suffer from some degree of chronic pain.

In the healthcare setting, accurately assessing a patient’s pain level is an imperfect science. Certain traditional pain assessment methods rely significantly on a patient’s description of his or her pain level. While non-verbal pain assessment tools are available, they face challenges of inherent bias, reliability and sensitivity, regardless of the tool used.

The facial recognition technology platform ePAT is a point of care app designed to detect facial expression nuances which are associated with pain. App users can also reportedly enter data on “non-facial pain cues” such as “vocalizations, movements and behaviours” which are then aggregated to provide a pain severity score.
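The aggregation of cues into a single score can be illustrated with a short sketch. The cue categories below mirror the article’s description of ePAT; the binary, additive scoring scheme and the severity bands are assumptions for illustration, not the app’s published method.

```python
# A minimal sketch of aggregating pain cues into a severity score, in the spirit of
# the ePAT app described above. Cue lists follow the article ("facial expression
# nuances" plus "vocalizations, movements and behaviours"); the scoring scheme and
# thresholds are illustrative assumptions only.

FACIAL_CUES = ["brow_lowering", "eye_tightening", "nose_wrinkling"]
NON_FACIAL_CUES = ["vocalizations", "movements", "behaviours"]

def pain_severity_score(observed: set[str]) -> int:
    """Count how many known cues were observed; higher means more indicated pain."""
    return sum(cue in observed for cue in FACIAL_CUES + NON_FACIAL_CUES)

def severity_band(score: int) -> str:
    """Map the raw count onto a coarse severity band (illustrative thresholds)."""
    if score == 0:
        return "no pain indicated"
    if score <= 2:
        return "mild"
    if score <= 4:
        return "moderate"
    return "severe"

if __name__ == "__main__":
    observed = {"brow_lowering", "eye_tightening", "vocalizations"}
    score = pain_severity_score(observed)
    print(score, severity_band(score))  # 3 moderate
```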

Competitive Ground for Technology Giants 

Facial recognition is a competitive market for the technology giants. Facebook announced its DeepFace program in 2014. Google announced FaceNet in June 2015; the technology is incorporated into Google Photos to automatically tag photos based on the people recognized.

A study by MIT researchers released in February 2018 found that facial analysis tools from Microsoft, IBM and China-based Megvii (Face++) had much higher error rates when identifying darker-skinned women than when identifying lighter-skinned men.

At the end of June 2018, Microsoft announced in a blog post that it had made solid improvements to its biased facial recognition technology.

Amazon promotes its cloud-based face recognition service named Rekognition to law enforcement agencies. The technology is said to be able to recognize up to 100 people in a single image and perform matches against databases containing tens of millions of faces. In July 2018, Newsweek reported that Rekognition had incorrectly matched 28 members of the US Congress to photos of people arrested for crimes. Amazon responded that confidence levels needed to be adjusted in how the technology was applied in order to achieve valid results.
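Amazon’s point about confidence levels can be made concrete with a short sketch against the Rekognition API. This assumes an AWS account, the boto3 library, and a pre-built face collection; the collection name and input photo below are hypothetical, and the 80 versus 99 percent thresholds reflect Amazon’s stated guidance that law enforcement use a much higher threshold than the service default. This is an illustration, not a recreation of the Newsweek test.

```python
# A minimal sketch of the threshold point in Amazon's response: the same face search
# returns different results depending on the confidence cutoff applied.
# Assumes AWS credentials and a pre-built Rekognition face collection named
# "arrest_photos" (a hypothetical name used for this example).

import boto3

rekognition = boto3.client("rekognition")

def search_with_threshold(image_bytes: bytes, threshold: float):
    """Search the collection for faces matching the image above `threshold`% confidence."""
    response = rekognition.search_faces_by_image(
        CollectionId="arrest_photos",   # hypothetical collection name
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,   # e.g. 80 (default) vs 99
        MaxFaces=5,
    )
    return response["FaceMatches"]

with open("query_photo.jpg", "rb") as f:   # hypothetical input photo
    photo = f.read()

loose = search_with_threshold(photo, threshold=80.0)   # more matches, more false positives
strict = search_with_threshold(photo, threshold=99.0)  # fewer matches, higher precision
print(len(loose), "matches at 80% vs", len(strict), "matches at 99%")
```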

Permission Granted?

The MIT researcher who detected bias in the Microsoft face recognition technology, Joy Buolamwini, has gotten the attention of the technology giants, members of Congress and AI scholars who are defending her work. 

MIT researcher Joy Buolamwini has found racial and gender bias in commercially sold facial analysis tools, which have a hard time recognizing certain faces, especially those of darker-skinned women. Credit: Steven Senne, Associated Press file.

“There needs to be a choice,” said Buolamwini, a graduate student and researcher at MIT’s Media Lab, in a recent account in the Denver Post.  “Right now, what’s happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it’s almost too late.”

Amazon has challenged what it called Buolamwini’s “erroneous claims” and said the study confused facial analysis with facial recognition, improperly measuring the former with techniques for evaluating the latter.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Matt Wood, general manager of artificial intelligence for Amazon’s cloud-computing division, wrote in a January blog post. 

Buolamwini, who’s also founded a coalition of scholars, activists and others called the Algorithmic Justice League, has blended her scholarly investigations with activism.

Buolamwini said a major message of her research is that AI systems need to be carefully reviewed and consistently monitored if they’re going to be used on the public. Not just to audit for accuracy, she said, but to ensure face recognition isn’t abused to violate privacy or cause other harms.

“We can’t just leave it to companies alone to do these kinds of checks,” she said.

[Ed. Note: We plan continued coverage of this evolving area of AI technology.]

Read the source articles in the Pittsburgh Post-Gazette, on biometric research from Gemalto, in an account in Emerj, in the Denver Post, and on Amazon’s Rekognition site.