‘Godfathers of AI’ Receive Turing Award, the Nobel Prize of Computing

From left to right: Yann LeCun | Photo: Facebook; Geoffrey Hinton | Photo: Google; Yoshua Bengio | Photo: Botler AI

The 2018 Turing Award, known as the “Nobel Prize of computing,” has been given to a trio of researchers who laid the foundations for the current boom in artificial intelligence.

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun — sometimes called the ‘godfathers of AI’ — have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses.

In fact, you probably interacted with the descendants of Bengio, Hinton, and LeCun’s algorithms today — whether that was the facial recognition system that unlocked your phone, or the AI language model that suggested what to write in your last email.

All three have since taken up prominent places in the AI research ecosystem, straddling academia and industry. Hinton splits his time between Google and the University of Toronto; Bengio is a professor at the University of Montreal and co-founded an AI company called Element AI; and LeCun is Facebook’s chief AI scientist and a professor at NYU.

“It’s a great honor,” LeCun told The Verge. “As good as it gets in computer science. It’s an even better feeling that it’s shared with my friends Yoshua and Geoff.”

Jeff Dean, Google’s head of AI, praised the trio’s achievements. “Deep neural networks are responsible for some of the greatest advances in modern computer science,” said Dean in a statement. “At the heart of this progress are fundamental techniques developed by this year’s Turing Award winners, Yoshua Bengio, Geoff Hinton, and Yann LeCun.”

The trio’s achievements are particularly notable as they kept the faith in artificial intelligence at a time when the technology’s prospects were dismal.

AI is well-known for its cycles of boom and bust, and the issue of hype is as old as the field itself. When research fails to meet inflated expectations, it creates a freeze in funding and interest known as an “AI winter.” It was at the tail end of one such winter in the late 1980s that Bengio, Hinton, and LeCun began exchanging ideas and working on related problems. These included neural networks — computer programs made from connected digital neurons that have become a key building block for modern AI.

“There was a dark period between the mid-90s and early-to-mid-2000s when it was impossible to publish research on neural nets, because the community had lost interest in it,” says LeCun. “In fact, it had a bad rep. It was a bit taboo.”

The trio decided they needed to rekindle interest, and secured funding from the Canadian government to sponsor a loose hub of interrelated research. “We organized regular meetings, regular workshops, and summer schools for our students,” says LeCun. “That created a small community that […] around 2012, 2013 really exploded.”

During this period, the three showed that neural nets could achieve strong results on tasks like character recognition. But the rest of the research world did not pay attention until 2012, when a team led by Hinton took on a well-known AI benchmark called ImageNet. Researchers had so far delivered only incremental improvements on this object recognition challenge, but Hinton and his students beat the next-best algorithm by more than 40 percent with the help of neural networks.

“The difference there was so great that a lot of people, you could see a big switch in their head going ‘clunk,’” says LeCun. “Now they were convinced.”

Read the source article in The Verge.