ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? The answers are complicated.
But if the question is whether the tech industry is doing enough to address these biases, the straightforward answer is no.
Warnings that AI and machine learning systems are being trained on “bad data” abound. The oft-touted solution is to ensure that humans train the systems with unbiased data, which would require the humans themselves to avoid bias. That, in turn, would mean tech companies training their engineers and data scientists to understand cognitive bias, as well as how to “combat” it. Has anyone stopped to ask whether the humans who feed the machines really understand what bias means?
Discussing Bias at Facebook After Decade as CIA Officer
Companies such as Facebook—my former employer—Google, and Twitter have repeatedly come under attack for a variety of bias-laden algorithms. In response to these legitimate fears, their leaders have vowed to do internal audits and assert that they will combat this exponential threat. Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.
In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.
Over more than a decade working as a CIA officer, I went through months of training and routine retraining on structured methods for checking assumptions and understanding cognitive biases. It is one of the most important skills an intelligence officer can develop. Analysts and operatives must hone the ability to test assumptions and do the uncomfortable, often time-consuming work of rigorously evaluating their own biases when analyzing events. They must also examine the biases of those providing information to collectors: assets, foreign governments, media, and adversaries.
This kind of training has traditionally been reserved for those in fields requiring critical analytic thinking and, to the best of my knowledge and experience, is less common in technical roles. While tech companies often have mandatory “managing bias” training to support diversity and inclusion efforts, I did not see any training on cognitive bias and decision-making, particularly as it relates to how products and processes are built and secured.
Facebook Culture At Odds With Structured Analytic Techniques
Judging by some of the ideas batted around by my Facebook colleagues, none of the things I had spent years doing—structured analytic techniques, weighing evidence, not jumping to conclusions, challenging assumptions—were normal practice, even when it came to solving for the real-world consequences of the products they were building. In large part, the “move fast” culture is antithetical to these techniques, as they require slowing down when facing important decisions.
Several seemingly small but concerning examples from my time at Facebook demonstrate that, despite well-meaning intentions, these companies are missing the boat. In preparation for the 2018 US midterm elections, we asked our teams whether there was any risk that we would be accused of anti-conservative bias in our political ads integrity policies. Some of the solutions they proposed showed that they had no idea how to actually identify or measure bias. One program manager suggested doing a straight data comparison of how many liberal and conservative ads were rejected; no other analysts or PMs flagged this as problematic. A raw count ignores, among other things, how many ads each side submitted and whether the rejected ads actually violated policy. My explanations of these inherent faults did not seem to persuade anyone that such a comparison would not, in fact, prove a lack of bias.
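A minimal sketch, using entirely invented numbers, makes the point concrete; none of the figures below come from Facebook, and the code is purely illustrative.

# Hypothetical illustration of why raw rejection counts cannot prove an absence of bias.
# All numbers are invented for the sake of the example.
submitted = {"liberal": 10000, "conservative": 2500}  # ads submitted by each side (hypothetical)
rejected = {"liberal": 500, "conservative": 500}      # ads rejected from each side (hypothetical)

for side in submitted:
    rate = rejected[side] / submitted[side]
    print(f"{side}: {rejected[side]} of {submitted[side]} ads rejected ({rate:.1%})")

# Output:
# liberal: 500 of 10000 ads rejected (5.0%)
# conservative: 500 of 2500 ads rejected (20.0%)
# The counts are identical, the rates are not, and neither number says whether
# the rejected ads actually violated the ads policy, which is the question that matters.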
In other exercises, employees would sometimes mischaracterize ads based on their own inherent biases. In one glaring example, an associate mistakenly categorized a pro-LGBT ad run by a conservative group as an anti-LGBT ad. When I pointed out that she had let her assumptions about conservative groups' opinions on LGBT issues lead to the incorrect label, my observation was met with silence up and down the chain. These mischaracterizations are incorporated into the manuals that train both human reviewers and machines.