Chipmakers Racing to Build Hardware for Artificial Intelligence


Intel, Nvidia, and a host of startups are rolling out new AI chips to make machine learning faster and more powerful.

In recent years, advanced machine learning techniques have enabled computers to recognize objects in images, understand commands from spoken sentences, and translate written language.

But while consumer products like Apple’s Siri and Google Translate might operate in real time, building the complex mathematical models these tools rely on can demand enormous amounts of time, energy, and processing power from traditional computers. As a result, chipmakers like Intel, graphics powerhouse Nvidia, mobile computing kingpin Qualcomm, and a number of startups are racing to develop specialized hardware to make modern deep learning significantly cheaper and faster.

The importance of such chips for developing and training new AI algorithms quickly cannot be overstated, according to some AI researchers. “Instead of months, it could be days,” Nvidia CEO Jen-Hsun Huang said in a November earnings call, discussing the time required to train a computer to do a new task. “It’s essentially like having a time machine.”

Read the source article at Fast Company.