Artificial intelligence isn’t necessarily good for women, but we can make it better.
Because we build and train AI, it reflects our biases and assumptions, including our racism and sexism. That's a problem because AI is used everywhere: it controls driverless cars and powers voice assistants such as Siri and Alexa, but it also helps HR departments sift through resumes, decides who gets parole, and examines medical images. As its uses become more widespread and more consequential, AI's missteps and abuses become more dangerous.
If we don’t get it right, sexism, racism, and other biases will be literally encoded into our lives, trained on incorrect data that continues to leave women and people of color out of decision making. “We’re ending up coding into our society even more bias, and more misogyny and less opportunity for women,” says Tabitha Goldstaub, cofounder of AI startup CognitionX. “We could get transported back to the dark ages, pre-women’s lib, if we don’t get this right.”
What is AI?
AI is made up of many different but related technologies that let computers "think" and make decisions, helping us automate tasks. That includes techniques such as neural networks, a machine-learning approach that is trained on datasets before being set loose to apply what it has learned. Show it a bunch of pictures of dogs and it learns what dogs look like; well, sometimes the machines manage it, and other times they can't tell chihuahuas from muffins.
AI is meant to make our lives easier. It’s good at filtering information and making quick decisions, but if we build it poorly and train it on biased or false data, it could hurt people.
“A lot of people assume that artificial intelligence… is just correct and it has no errors,” says Tess Posner, co-founder of AI4All. “But we know that that’s not true, because there’s been a lot of research lately on these examples of being incorrect and biased in ways that amplify or reflect our existing societal biases.”
How AI can hurt
The impacts can be obvious, such as a resume bot favoring male, "white"-sounding names, but they can also be subtle, says Kathleen Richardson, a professor in the School of Computer Science and Informatics at De Montfort University. "It's not like we go out into the world and the bank machine doesn't work for us because we're female," she says. "Some things do work for us. It's more just about the priorities that we start to have as a society. Those priorities, for example, often become the priorities of a small elite."
For example, researchers from the University of Washington have shown how one image-recognition system had gender bias, associating kitchens and shopping with women and sports with men — a man standing at a stove was labeled as a woman. “The biases that are inherited in our own language and our own society are getting… reflected in these algorithms,” Posner says.
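To see how a system can inherit that kind of bias, here is a minimal sketch, not the actual system the researchers studied: a toy "classifier" that labels a scene with whichever gender appeared most often alongside it in deliberately skewed, made-up training data. The scenes, labels, and counts are all hypothetical.

```python
# Toy sketch of label bias: the model simply predicts whichever gender
# co-occurred most often with a scene in its (skewed) training data.
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training pairs of (scene, gender).
training_data = [
    ("kitchen", "woman"), ("kitchen", "woman"), ("kitchen", "man"),
    ("sports", "man"), ("sports", "man"), ("sports", "woman"),
]

# Count how often each gender label appears with each scene.
counts = defaultdict(Counter)
for scene, gender in training_data:
    counts[scene][gender] += 1

def predict(scene):
    # Return the most frequent label seen during training for this scene,
    # ignoring who is actually pictured.
    return counts[scene].most_common(1)[0][0]

print(predict("kitchen"))  # prints "woman" regardless of who is at the stove
```

The point of the sketch is that nothing in the code is malicious: the skew lives entirely in the data, so a man standing at a stove still comes back labeled "woman," just as in the study described above.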
And those biased labels and data are used to make decisions that impact lives. Goldstaub points to research at Carnegie Mellon University that found Google’s recommendation algorithm was more likely to recommend “high-prestige and high-paying jobs to men rather than to women,” while separate research from Boston University showed CV-sifting AI put men at the top of the pile for jobs such as programming.
Consider health care, says Goldstaub. "Men and women have different symptoms when having a heart attack — imagine if you trained an AI to only recognize male symptoms," she says. "You'd have half the population dying from heart attacks unnecessarily." It's happened before: crash test dummies for cars were modeled on male bodies, and female drivers were 47 percent more likely to be seriously hurt in accidents. Regulators only began requiring car makers to test with dummies based on female bodies in 2011.
“It’s a good example of what happens if we don’t have diversity in our training sets,” says Goldstaub. “When it comes to health care, it’s life or death — not getting a job is awful, but health care is life or death.”
There’s one obvious way to encourage better systems, says Richardson: “we need more women in robots and AI.”
Right now, that's not happening. According to the AAUW, only 26 percent of computing professionals in the U.S. are women, and there used to be more: back in the 1990s, more than a third of those working in tech were female. According to Google's own 2017 diversity figures, 31 percent of its workforce is women, but women fill only 20 percent of its technical roles. And only 1 percent of its tech employees (of any gender) are black; 3 percent are Hispanic. For AI in particular, Goldstaub estimates that about 13 percent of those working in the field are women.
“I believe as a feminist the more women we can get into roles, the more diverse the output will be — and fewer shockers will get through,” Goldstaub says.
Thankfully, groups such as AI4ALL have sprung up to help women step into careers in AI by encouraging high school students to take science, technology, engineering, and math (STEM) subjects. “When we look at the research about underrepresented populations and why they don’t go into the field, a lot of the research shows this actually stems back in high school at around age 15, which is when folks get discouraged or lose interest in STEM fields,” says Posner.
Read the source article in Teen Vogue.