Google Rules AI, with TensorFlow at Foundation, Leadership in Core Products

Google is a dominant player in AI on top of its leading core products shown here and a foundation in TensorFlow for software development. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

The way Google came from nowhere with the 2007 launch of Android to today's dominance of the smartphone operating system market is what the company is now doing with AI, some market observers suggest.

Google now has an 80 percent share of the worldwide smartphone OS market, and it has seeded the AI market by making its TensorFlow software library open source, putting it at the foundation of many AI applications, suggests a recent account in Analytics Insight.

Some 50 Google products use TensorFlow to build deep learning applications, from recognizing friends in Photos to refinements in the core search engine. Google has become a machine learning organization.

The authors state, “Google has gone through the most recent three years constructing a gigantic platform for artificial intelligence and now they’re unleashing it on the world.”

Differential Privacy Library Aims to Enhance Personal Security

Among recent releases is a “differential privacy library,” used to protect personal data while analyzing massive volumes of it. Google wants to engage the development community in this new discussion about privacy protection. The library works by adding carefully calibrated statistical noise that masks any individual’s information while still allowing aggregate insights to be drawn from datasets.

By publicly releasing the library on GitHub, Google lets development teams, including startups with limited resources, explore a rigorous approach to privacy. Healthcare could be an interested segment.

“Differential privacy is a high-assurance, analytic means of ensuring that use cases are addressed in a privacy-preserving manner,” stated Miguel Guevara, a Google product manager in the privacy and data protection office, in a blog post. Healthcare researchers may, for example, want to look at the average amount of time patients spend at different clinics, to see if there are differences in care.
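The clinic example can be sketched with the classic Laplace mechanism that underlies libraries like this one. The sketch below is a simplified illustration, not the API of Google's library; the visit data, clamping bounds and epsilon value are all hypothetical:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via an inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lower, upper, epsilon):
    """Differentially private mean via the Laplace mechanism.

    Clamping each value to [lower, upper] bounds any one patient's
    influence on the result; noise scaled to that sensitivity then
    hides whether any individual record was present.
    """
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    # One record can move the clamped mean by at most this amount.
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical minutes-per-visit data for one clinic.
visit_minutes = [30, 45, 60, 120]
print(private_mean(visit_minutes, lower=0, upper=90, epsilon=0.5))
```

Smaller epsilon values add more noise and give stronger privacy; larger values give more accurate, less private answers.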

Miguel Guevara, leading product development for differential privacy, Google

AI Can Be Expensive

This work costs real money. A joint project of Carnegie Mellon University and Google to build XLNet, a new language model, generated a discussion about how much such training runs cost. Elliot Turner, an entrepreneur, AI expert and now co-founder of Hologram AI, estimated that it cost $245,000 in cloud resources to train the XLNet model for 2.5 days, based on a resource breakdown he outlined, according to an account in Medium.

That number was challenged by Google researchers, who specified different resources and arrived at an estimate of $61,440 for the same 2.5 days. It is also likely the Google team did not pay full price, since it was leading the project.
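One breakdown consistent with both figures, sketched below as an assumption rather than either party's published math, is that the estimates differ mainly in how 512 TPU v3 chips are billed: individually, or packaged as 128 four-chip devices at roughly $8 per device-hour:

```python
# Assumed on-demand price per TPU v3 device-hour (illustrative).
PRICE_PER_HOUR = 8.00
HOURS = 2.5 * 24  # 2.5 days of training = 60 hours

# Higher estimate: billing 512 TPU chips as separate devices.
turner_estimate = 512 * PRICE_PER_HOUR * HOURS

# Lower estimate: 512 chips packaged as 128 four-chip devices.
google_estimate = 128 * PRICE_PER_HOUR * HOURS

print(turner_estimate)  # 245760.0, roughly the $245,000 figure
print(google_estimate)  # 61440.0, matching the $61,440 figure
```

The factor-of-four gap between the two estimates falls directly out of the chips-versus-devices assumption.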

XLNet was said to outperform the previous state-of-the-art (SoTA) for language tasks, called BERT (Bidirectional Encoder Representations from Transformers). XLNet achieved SoTA results on 18 of 20 language tasks. The model is big, thus expensive to run.

In another project, the University of Washington and the Allen Institute for AI in May 2019 developed Grover, a 1.5-billion-parameter neural net tailored to detect fake news. Recently open-sourced on GitHub, the Grover model reportedly cost $25,000 to train.

The GPT-2 language model, recently developed by OpenAI, demonstrates impressive performance across a range of language tasks, such as machine translation, question answering, reading comprehension and summarization. The compute required to train the model costs an estimated $256 per hour.

Not every machine learning lab can run models at this scale. Computer scientist Yoshua Bengio, Turing Award winner and scientific director of MILA (Montreal Institute for Learning Algorithms), was quoted as saying, “Some models are so big that even in MILA we can’t run them because we don’t have the infrastructure for that. Only a few companies can run these very big models they’re talking about.”

A recent financial report from Google’s parent, Alphabet, notes that Google’s core products, such as Search, Android, Maps, Chrome, YouTube, Google Play and Gmail, each have over one billion monthly active users. “We believe we are just beginning to scratch the surface,” the report stated, in an account from Strategic Management Insight.

Google is a leader in acquisitions as well, making 118 acquisitions between 2012 and 2015, far outpacing Microsoft, Facebook and Apple.

Google’s revenue is generated by performance and brand advertising, and machine learning and AI are driving the company’s latest innovations, the report notes.

Google sees challenges to its business coming from general-purpose search engines (Baidu, Bing, Yahoo); vertical search engines and e-commerce websites (Amazon, eBay, LinkedIn); social networks (Facebook, Twitter); and providers of digital video services, enterprise cloud services and digital assistants.

One of the best sources of information about what is happening with AI at Google is, of course, Google itself. The Google AI Blog, for example, offers accounts of research written by participating engineers.

One example, “A Scalable Approach to Reducing Gender Bias in Google Translate,” describes a project to provide gender-specific translations, male and female. The approach was first tried with Turkish-to-English and has recently been expanded to English-to-Spanish.

“We’ve made significant progress since our initial launch by increasing the quality of gender-specific translations,” the researchers stated on the blog, adding, “We are committed to further addressing gender bias in Google Translate and plan to extend this work to document-level translation, as well.”

Read the source articles in Analytics Insight, Medium, Strategic Management Insight and the Google AI Blog.