Facebook, Battling Hate Speech with AI, Gets a Critical Review of its Civil Rights Practices

Efforts at Facebook to employ AI to combat hate speech are making progress, but a civil rights report issued last week was critical of the company. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

Facebook’s efforts to employ AI to identify hate speech and other objectionable content are credited with making progress, but a critical report issued last week by independent auditors on the state of civil rights at Facebook highlighted the tension between free expression and hate speech at the social media giant.

The audit was commissioned by Facebook at the urging of civil rights leaders, and it comes in the midst of a growing advertiser boycott of the platform called Stop Hate for Profit. That effort is led by civil rights groups including the NAACP, the Anti-Defamation League and Color of Change. More than 500 companies had signed on when the boycott was announced on July 1.

The report challenged Facebook to do more to resolve the conflict between its promises on civil rights and its inflexible commitment to free expression, according to a report in Vox. “For a 21st century American corporation, and for Facebook, a social media company that has so much influence over our daily lives, the lack of clarity about the relationship between those two values is devastating,” stated lead auditor Laura W. Murphy in the report’s introduction. “It will require hard balancing, but that kind of balancing of rights and interests has been part of the American dialogue since its founding and there is no reason that Facebook cannot harmonize those values, if it really wants to do so.”

Laura Murphy, lead auditor of Facebook civil rights practices

With the publication of this report, Facebook announced that it is creating a civil rights leadership role at the senior vice president level. The Stop Hate for Profit group met with Facebook CEO Mark Zuckerberg on July 7; in a statement issued after the meeting, the group expressed disappointment that Facebook had not committed to placing the civil rights position at the C-suite level and had not responded to its other nine recommendations.

Facebook’s Aggressive Takedowns of COVID-19 Misinformation Expose a Double Standard

As part of its COVID-19 response, Facebook has been aggressive in taking down misinformation, a posture that exposes a double standard the auditors called out. The report stated, “Facebook has no qualms about reining in speech by the proponents of the anti-vaccination movement, or limiting misinformation about COVID-19, but when it comes to voting, Facebook has been far too reluctant to adopt strong rules to limit misinformation and voter suppression.”

Meanwhile, AI is being employed widely at Facebook to try to combat hate speech. The company reported that as of March, the AI tools helped to remove 89% of hate speech from the platform before users reported it, up from 65% a year earlier, according to an account in WSJPro.
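For context, the percentage Facebook reports is its proactive rate: of all the hate speech ultimately removed, the share its systems flagged before any user reported it. A minimal sketch of the metric in Python, with illustrative figures rather than Facebook’s actual data:

```python
# Minimal sketch of the "proactive rate" metric described above: the share of
# removed hate speech that AI flagged before any user reported it. The helper
# name and sample figures are illustrative, not Facebook's actual data.

def proactive_rate(flagged_by_ai_first: int, total_removed: int) -> float:
    """Fraction of removed content caught before a user report."""
    return flagged_by_ai_first / total_removed

print(f"{proactive_rate(8_900, 10_000):.0%}")  # -> 89%, the March figure cited above
```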

Facebook Chief AI Scientist Yann LeCun stated in a March interview that he is working to develop self-supervised AI that can help the human reviewers, including in multiple languages. “Current machines don’t have common sense,” he stated. “They have very limited and narrow function.”
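Self-supervised learning of the kind LeCun describes trains a model to predict words hidden from unlabeled text, so no human labeling is required, and the same recipe scales across languages. The sketch below uses Hugging Face’s transformers library and the publicly released xlm-roberta-base checkpoint as a stand-in; it is an illustration, not Facebook’s internal tooling:

```python
# A sketch of the self-supervised pretraining idea: a masked language model
# learns to fill in hidden words from unlabeled text, in many languages at
# once, with no human labels. Requires Hugging Face's transformers library;
# the model choice here is illustrative.

from transformers import pipeline

fill = pipeline("fill-mask", model="xlm-roberta-base")

# The same pretrained model completes masked sentences in different languages.
for sentence in [
    "Online platforms must remove <mask> speech.",             # English
    "Les plateformes doivent supprimer les discours <mask>.",  # French
]:
    top = fill(sentence)[0]  # highest-scoring completion
    print(top["token_str"], round(top["score"], 3))
```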

Facebook formed a Dangerous Organizations team to focus on terrorists and other organized hate groups after the live-streamed March 15, 2019, attack in Christchurch, New Zealand. The unit has 350 people, is headed by counterterrorism experts, and uses a combination of manual review and automated tools. It has been challenging to reorient those tools toward white supremacists, who tend to be more fragmented and whose speech may overlap with right-wing political speech.

The dynamics make judgments about takedowns “much harder to reach and much harder to reach in real time. That’s the challenge that the companies face,” stated Nicholas Rasmussen, executive director of the Global Internet Forum to Counter Terrorism, a partnership between governments and tech companies including Facebook, Twitter Inc. and Microsoft Corp.

The killing of George Floyd while in police custody in May 2020 has led to a national dialogue on race and more criticisms of Facebook’s approach to content moderation. “We have made real progress over the years,” stated Chief Operating Officer Sheryl Sandberg in a recent blog post responding to the civil rights audit. “But this work is never finished, and we know what a big responsibility Facebook has to get better at finding and removing hateful content.”

Facebook Tech Getting Better at Finding and Removing Hate Speech

Facebook turned to AI to help remove COVID-19 misinformation and pandemic profiteering it deems inappropriate, including ads selling face masks and hand sanitizer. The company put warning labels on 50 million posts in April for possible misinformation around COVID-19, and since March it has removed 2.5 million pieces of content that violated rules against selling personal protective equipment or coronavirus test kits, according to an account in Fortune.

The system is finding and removing more hate speech—9.6 million pieces of content were removed in the first three months of 2020, 3.9 million more than in the previous three months, the company reported. Mike Schroepfer, Facebook’s chief technology officer, attributed the increase to the company getting better at finding hateful content. “I think this is clearly attributable to technological advances,” he stated.

Mike Schroepfer, Chief Technology Officer, Facebook

The company has developed a system called XLM-R that was trained on two terabytes of data, roughly the equivalent of all the words in half a million 300-page books. It has learned a statistical map of those words across multiple languages. The hope is that commonalities in how hate speech is expressed across languages will help the system identify it in any of them.
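Facebook has not published its production pipeline, but a hedged sketch of how a cross-lingual encoder like XLM-R can sit behind a hate-speech classifier head might look like the following, using the public xlm-roberta-base checkpoint as a stand-in; the two-label scheme and example text are assumptions:

```python
# A hedged sketch: putting a pretrained cross-lingual encoder behind a
# two-way hate-speech classifier head. The public xlm-roberta-base checkpoint
# stands in for Facebook's internal XLM-R; labels and text are assumptions.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # 0 = benign, 1 = hate speech (assumed)
)

# Because the encoder was pretrained on many languages, one fine-tuned
# classifier can score posts in any of them.
inputs = tokenizer("an example post to score", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # meaningless until the head is fine-tuned
```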

CEO Zuckerberg has promised that machine learning and AI will enable the company to combat the spread of hate speech, terrorist propaganda, and political misinformation across its platforms. “We are not naive,” Schroepfer stated. “AI is not the solution to every single problem, and we believe that humans will be in the loop for the foreseeable future.”

The intent of XLM-R and similar systems is to make the job of human content moderators easier and less repetitive. The work has changed with the onset of social distancing measures to combat the coronavirus spread, with many of the moderators who are contractors now working remotely.

“We want people making the final decisions, especially when the situation is nuanced,” Schroepfer stated. “But we want to give people we work with every day power tools.”

For another system to combat hate speech, Facebook has created a dataset of 10,000 memes that were determined to be part of hate speech campaigns. It is making the dataset freely available to researchers interested in building AI systems capable of detecting hateful memes. The company has created the Hateful Memes Challenge, with a prize pool of $100,000, to find the best hateful meme detection software. To enter the contest, researchers must commit to open-sourcing their algorithms.
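A common baseline for this kind of multimodal task, and a plausible starting point for challenge entrants, is late fusion: encode the image and the overlaid text separately, concatenate the features, and classify. The sketch below is one such baseline under assumed dimensions and labels, not Facebook’s own code:

```python
# A minimal late-fusion baseline for hateful-meme detection: encode the image
# and the overlaid text separately, concatenate, and classify. Architecture,
# dimensions, and labels here are assumptions, not Facebook's baseline code.

import torch
import torch.nn as nn
from torchvision import models
from transformers import AutoModel, AutoTokenizer


class MemeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Image branch: ResNet-18 with its final classification layer removed.
        resnet = models.resnet18(weights=None)
        self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])  # -> 512-d
        # Text branch: a pretrained multilingual transformer encoder.
        self.text_encoder = AutoModel.from_pretrained("xlm-roberta-base")  # -> 768-d
        self.classifier = nn.Linear(512 + 768, 2)  # hateful vs. benign (assumed)

    def forward(self, image, input_ids, attention_mask):
        img_feat = self.image_encoder(image).flatten(1)  # (batch, 512)
        txt_out = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask)
        txt_feat = txt_out.last_hidden_state[:, 0]       # (batch, 768), first-token summary
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))


tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = MemeClassifier()
text = tokenizer(["text overlaid on the meme"], return_tensors="pt")
logits = model(torch.randn(1, 3, 224, 224), text["input_ids"], text["attention_mask"])
print(logits.shape)  # torch.Size([1, 2]) -- untrained; for illustration only
```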

It’s a difficult task: Facebook’s researchers created several systems and tested them against the dataset of text and images, achieving 63% accuracy, while human reviewers were about 85% accurate.

Read the source articles in Vox, WSJPro, Fortune, a statement from the Stop Hate for Profit Group, and information on Facebook’s Hateful Memes Challenge.