Law Says the Financial World Needs to Open AI’s Black Boxes


Powerful machine-learning methods have taken the tech world by storm in recent years, vastly improving voice and image recognition, machine translation, and many other applications.

Now these techniques are poised to upend countless other industries, including the world of finance. But progress may be stymied by a significant problem: it’s often impossible to explain how these “deep learning” algorithms reach a decision.

Adam Wenchel, vice president of machine learning and data innovation at Capital One, says the company would like to use deep learning for all sorts of functions, including deciding who is granted a credit card. But it cannot do that because the law requires companies to explain the reason for any such decision to a prospective customer. Late last year Capital One created a research team, led by Wenchel, dedicated to finding ways of making these computer techniques more explainable.

“Our research is to ensure we can maintain that high bar for explainability as we push into these much more advanced, and inherently more opaque, models,” he says.

Deep learning emerged in the last five years as a powerful way of mimicking human perceptual abilities. The approach involves training a very large neural network to recognize patterns in data. It is loosely inspired by a theory about the way neurons and synapses facilitate learning. Although each simulated neuron is simply a mathematical function, the complexity of these interlinked functions makes the reasoning of a deep network extremely difficult to untangle.
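The idea that each simulated neuron is "simply a mathematical function" can be sketched in a few lines of Python. This is an illustrative toy, not any production model; the weights below are made up rather than learned from data:

```python
import math

def neuron(inputs, weights, bias):
    """One simulated neuron: a weighted sum of inputs passed through
    a sigmoid nonlinearity -- just a mathematical function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(inputs):
    """A two-layer toy network. Even at this scale, the output is a
    composition of nested functions; a real deep network composes
    millions of them, which is why its reasoning is hard to untangle."""
    hidden = [
        neuron(inputs, [0.5, -1.2], 0.1),
        neuron(inputs, [-0.7, 0.3], -0.2),
    ]
    return neuron(hidden, [1.1, -0.9], 0.05)

score = tiny_network([0.8, 0.4])
print(score)
```

Each individual function here is easy to read, but the final score depends on every weight in every layer at once, so there is no single weight or rule one can point to as "the reason" for a given output.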

Some other machine-learning techniques, including those that outperform deep learning in certain scenarios, are a lot more transparent. But deep learning, which allows for sophisticated analytics that are useful to the finance industry, can be very difficult to interrogate.

Some startups aim to exploit concerns over the opacity of existing algorithms by promising to use more transparent approaches.

This issue could become more significant over the next few years as deep learning becomes more commonly used and as regulators turn their attention to algorithmic accountability. Starting next year, under its General Data Protection Regulation, the European Union may require any company to be able to explain a decision made by one of its algorithms.

Read the source article at MIT Technology Review.