IBM Research: Many AI Systems Are Trained Using Biased Data


AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender, or ideological biases. Many AI systems will continue to be trained using bad data, making this an ongoing problem. But we believe that bias can be tamed and that the AI systems that will tackle bias will be the most successful.

A crucial principle, for humans and machines alike, is to avoid bias and thus prevent discrimination. Bias in AI systems mainly occurs in the data or in the algorithmic model. As we work to develop AI systems we can trust, it’s critical to develop and train these systems with data that is unbiased and to develop algorithms that can be easily explained.
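As a rough illustration of what auditing training data for bias can look like, the sketch below checks whether positive-label rates differ across a sensitive attribute. The column names and data are hypothetical; this is a minimal example, not IBM's methodology.

```python
# Illustrative only: audit label rates across a sensitive attribute in a
# tabular dataset, assuming hypothetical "gender" and "hired" columns.
import pandas as pd

def label_rates_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return the positive-label rate for each group in the sensitive column."""
    return df.groupby(group_col)[label_col].mean()

# Hypothetical data for demonstration.
data = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "hired":  [0,   1,   1,   1,   0,   0,   1,   0],
})

print(label_rates_by_group(data, "gender", "hired"))
# A large gap between groups (here 0.25 vs. 0.75) suggests the training data
# encodes a historical bias that a model trained on it is likely to reproduce.
```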

As humans and AI increasingly work together to make decisions, researchers are looking at ways to ensure human bias does not affect the data or algorithms used to inform those decisions.

The MIT-IBM Watson AI Lab’s efforts on shared prosperity are drawing on recent advances in AI and computational cognitive modeling, such as contractual approaches to ethics, to describe the principles people use in decision-making and to determine how human minds apply them. The goal is to build machines that apply certain human values and principles in decision-making. IBM scientists have also devised an independent bias rating system that can determine the fairness of an AI system.
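To make the idea of rating fairness concrete, the sketch below computes the disparate impact ratio, one widely used fairness measure, on a model's predictions. It is a generic illustration under assumed data, not IBM's rating system.

```python
# Illustrative only: the disparate impact ratio compares positive-prediction
# rates between an unprivileged and a privileged group. This is a generic
# fairness measure, not IBM's bias rating system.
import numpy as np

def disparate_impact(predictions: np.ndarray, groups: np.ndarray,
                     unprivileged: str, privileged: str) -> float:
    """Ratio of positive-prediction rates: unprivileged group / privileged group.
    Values well below 1.0 (a common rule of thumb is < 0.8) flag potential bias."""
    rate_unpriv = predictions[groups == unprivileged].mean()
    rate_priv = predictions[groups == privileged].mean()
    return rate_unpriv / rate_priv

# Hypothetical model predictions and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
grps = np.array(["F", "F", "M", "M", "M", "F", "M", "F"])
print(disparate_impact(preds, grps, unprivileged="F", privileged="M"))
```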

Identifying and mitigating bias in AI systems is essential to building trust between humans and machines that learn. As AI systems find, understand, and point out human inconsistencies in decision-making, they could also reveal ways in which we are partial, parochial, and cognitively biased, leading us to adopt more impartial or egalitarian views. In the process of recognizing our bias and teaching machines about our common values, we may improve more than AI. We might just improve ourselves.

Read the source post at IBM Research.