Three months ago, Google announced that it would launch support for high-end graphics processing units (GPUs) for machine learning and other specialized workloads in early 2017. It’s now early 2017 and, true to its word, Google today officially made GPUs on the Google Cloud Platform available to developers. As expected, these are Nvidia Tesla K80 GPUs, and developers will be able to attach up to eight of them to any custom Compute Engine machine.
These new GPU-based virtual machines are available in three Google data centers: us-east1, asia-east1 and europe-west1. Each K80 core features 2,496 of Nvidia’s stream processors with 12 GB of GDDR5 memory (the K80 board features two cores and 24 GB of RAM).
You can never have too much compute power when you’re running complex simulations or using a deep learning framework like TensorFlow, Torch, MXNet or Caffe. Google is clearly aiming this new feature at developers who regularly need to spin up clusters of high-end machines to power their machine learning frameworks. The new Google Cloud GPUs are integrated with Google’s Cloud Machine Learning service and its various database and storage platforms.
The cost per GPU is $0.70 per hour in the U.S. and $0.77 in the European and Asian data centers. That’s not cheap, but a Tesla K80 accelerator with two cores and 24 GB of RAM will easily set you back a few thousand dollars, too.
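To put those hourly rates in perspective, here is a quick back-of-the-envelope sketch of what a maxed-out eight-GPU machine would cost per month at the prices above. The 730-hour month and the helper function are illustrative assumptions, and the figures cover only the GPU surcharge, not the underlying VM.

```python
# Rough cost sketch for the reported K80 pricing:
# US: $0.70/hr per GPU die; EU/Asia: $0.77/hr per GPU die.
# A 730-hour month (365.25 * 24 / 12) is an assumption for illustration.

HOURS_PER_MONTH = 730

def monthly_gpu_cost(rate_per_hour: float, gpus: int,
                     hours: float = HOURS_PER_MONTH) -> float:
    """GPU-only cost; excludes the VM's CPU, memory, and disk charges."""
    return rate_per_hour * gpus * hours

# Eight GPUs, the maximum attachable to one custom Compute Engine machine:
us_cost = monthly_gpu_cost(0.70, 8)  # 8 * 0.70 * 730 = 4088.0
eu_cost = monthly_gpu_cost(0.77, 8)  # 8 * 0.77 * 730 = 4496.8
print(f"US: ${us_cost:,.2f}/month, EU/Asia: ${eu_cost:,.2f}/month")
```

At roughly $4,000 per month for a full eight-GPU configuration, renting makes the most sense for bursty training workloads rather than machines left running around the clock.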
The announcement comes only a few weeks before Google is scheduled to host its Cloud NEXT conference in San Francisco — where chances are we’ll hear quite a bit more about the company’s plans for making its machine learning services available to even more developers.
Read the source article at TechCrunch.