The New York Times Magazine features a beautifully prepared and presented article on the recent dramatic improvements in Google Translate brought about by deep learning technologies developed in Alphabet's Google Brain division. But this is not simply an article about a breakthrough innovation; it is a nuanced discussion of the history, problems, and approaches, both failed and (sometimes) successful, that lie behind some of the most useful AI-supported facilities we use today.
As an example, author Gideon Lewis-Kraus notes:
“…in 1943 it was shown that arrangements of simple artificial neurons could carry out basic logical functions. They could also, at least in theory, learn the way we do. With life experience, depending on a particular person’s trials and errors, the synaptic connections among pairs of neurons get stronger or weaker. An artificial neural network could do something similar, by gradually altering, on a guided trial-and-error basis, the numerical relationships among artificial neurons. It wouldn’t need to be preprogrammed with fixed rules. It would, instead, rewire itself to reflect patterns in the data it absorbed….
If you wanted something that could adapt, you didn’t want to begin with the indoctrination of the rules of chess. You wanted to begin with very basic abilities — sensory perception and motor control — in the hope that advanced skills would emerge organically. Humans don’t learn to understand language by memorizing dictionaries and grammar books, so why should we possibly expect our computers to do so?
Google Brain was the first major commercial institution to invest in the possibilities embodied by this way of thinking about A.I.”
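The quoted passage describes a single neuron learning a basic logical function by gradually adjusting the numerical strengths of its connections through guided trial and error. A minimal sketch of that idea, not drawn from the article itself, is a classic perceptron trained to compute logical AND; the function names, learning rate, and epoch count here are illustrative assumptions:

```python
# Illustrative sketch: one artificial neuron learns the logical AND
# function by nudging its connection weights after each mistake,
# rather than being preprogrammed with fixed rules.

def train_and_neuron(epochs=20, lr=0.1):
    # Two input weights plus a bias term; start with all-zero weights.
    weights = [0.0, 0.0, 0.0]  # [w1, w2, bias]
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            # Fire (output 1) only if the weighted sum crosses the threshold.
            activation = weights[0] * x1 + weights[1] * x2 + weights[2]
            output = 1 if activation > 0 else 0
            # Guided trial and error: nudge each weight by the error.
            error = target - output
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            weights[2] += lr * error
    return weights

def predict(weights, x1, x2):
    return 1 if weights[0] * x1 + weights[1] * x2 + weights[2] > 0 else 0

if __name__ == "__main__":
    w = train_and_neuron()
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((x1, x2), "->", predict(w, x1, x2))
```

Nothing about AND is written into the rules; the weights "rewire" themselves from examples, which is exactly the contrast the passage draws with rule-based systems.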
This piece is required reading for anyone interested in understanding cognitive computing from the inside.
Read the source article at Cognitive Computing Consortium.