The Race To Build An AI Chip For Everything Just Got Real


Yann LeCun once built an AI chip called ANNA. But he was 25 years ahead of his time.

The year was 1992, and LeCun was a researcher at Bell Labs, the iconic R&D lab outside New York City. He and several other researchers designed the chip to run deep neural networks—complex mathematical systems that can learn tasks on their own by analyzing vast amounts of data—but ANNA never reached the mass market. Neural networks were pretty good at recognizing letters and numbers scrawled onto personal checks and envelopes, but they didn't work all that well on other tasks, at least not in any practical sense.

Today, however, neural networks are rapidly transforming the internet’s biggest players, including Google, Facebook, and Microsoft. LeCun now oversees the central artificial intelligence lab inside Facebook, where neural networks identify faces and objects in photos, translate from one language to another, and so much more. Twenty-five years later, LeCun says, the market very much needs chips like ANNA. And these chips will soon arrive in large numbers.

Google recently built its own AI chip, the Tensor Processing Unit, or TPU, and it is now widely deployed inside the massive data centers that underpin the company's online empire. There, packed into machines by the thousands, the TPU helps with everything from identifying commands spoken into Android smartphones to choosing results on the Google search engine. But this is just the start of a much bigger wave. As CNBC revealed last week, several of the original engineers behind the Google TPU are now working to build similar chips at a stealth startup called Groq, and big-name commercial chip makers, including Intel, IBM, and Qualcomm, are pushing in the same direction.

Companies like Google, Facebook, and Microsoft can still run their neural networks on standard computer chips, known as CPUs. But since CPUs are designed as all-purpose processors, this is terribly inefficient. Neural networks can run faster and consume less power when paired with chips specifically designed to handle the massive array of mathematical calculations these AI systems require. Google says that in rolling out its TPU chip, it saved the cost of building about 15 extra data centers. Now, as companies like Google and Facebook push neural networks onto phones and VR headsets—so they can eliminate the delay that comes when shuttling images to distant data centers—they need AI chips that can run on personal devices, too. “There is a lot of headroom there for even more specialized chips that are even more efficient,” LeCun says.
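To see why a special-purpose chip helps, it's worth spelling out what that "massive array of mathematical calculations" actually looks like: a neural network layer is, at its core, one big matrix multiplication. The sketch below is a minimal illustration in plain NumPy, not code from Google or Facebook; the layer sizes, variable names, and batch size are all invented for the example.

```python
import numpy as np

# A single neural-network layer boils down to one matrix multiply:
# each input example is a vector, and the layer's learned weights form
# a matrix. All dimensions here are arbitrary, chosen for illustration.
batch_size, in_features, out_features = 64, 1024, 1024

rng = np.random.default_rng(0)
x = rng.standard_normal((batch_size, in_features), dtype=np.float32)    # input activations
w = rng.standard_normal((in_features, out_features), dtype=np.float32)  # learned weights
b = np.zeros(out_features, dtype=np.float32)                            # learned biases

# Forward pass: matrix multiply, bias add, then a simple nonlinearity (ReLU).
# This one line hides roughly 64 * 1024 * 1024 multiply-accumulate operations.
# A CPU grinds through them as general-purpose instructions; a chip like the
# TPU bakes the multiply-accumulate pattern directly into silicon, so the same
# arithmetic takes far less time and power.
y = np.maximum(x @ w + b, 0.0)

print(y.shape)  # (64, 1024): one output vector per input example
```

Stack dozens of layers like this, run them over billions of photos and queries, and even modest per-operation savings compound into the data-center-sized efficiencies Google describes.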

Read the source article at Wired.com.