People-Centered Design For Deep Learning

The design of deep learning systems needs to incorporate transparency, explainability, and reversibility to ensure positive outcomes for business.

In an MIT Sloan Management Review article published last week, David A. Bray and Ray Wang outline the challenges ahead for incorporating people-centered design principles for deep learning.

Deep learning, like other forms of AI, learns from data rather than explicit programming, raising questions about the accuracy and fairness of its findings. As companies adopt these technologies, “leadership must ensure that artificial neural networks are accurate and precise because poorly tuned networks can affect business decisions and potentially hurt customers, products, and services,” Bray and Wang write.

They advocate for “a people-centered approach to deep learning ethics,” one that benefits not just a few individuals but entire communities. The approach is built on transparency, explainability, and reversibility, principles they say should be the foundation of any AI implementation.

To achieve that, Bray and Wang suggest three methods “to reduce the risk of introducing poorly tuned AI systems and inaccurate or biased decision-making in pilots and implementations”: companies should create data advocates, establish a mindful monitoring system, and clearly define expectations.

Read their full explanation here.