Machine Learning Platform Powering User Experience
The annual competition of AI software bots for the video game StarCraft was held over the Columbus Day weekend in 2017. Facebook quietly entered a bot named CherryPi, designed by eight people at its AI research lab, according to an account in Wired.
Facebook is in the AI game alongside Google, Amazon and others, though perhaps without the same mindshare. This report is a snapshot of where Facebook stood with AI in 2017.
Google made headlines when its DeepMind team’s AlphaGo software defeated the world’s top-ranked human player at the board game Go in May 2017. In August, DeepMind announced that StarCraft II, the latest version of the game, would be its next challenge.
Facebook’s AI research group is led by NYU professor Yann LeCun. The group’s 80 researchers have released several research papers on StarCraft, though they have not announced a dedicated effort to conquer the game.
CherryPi finished sixth in a field of 28, according to the final 2017 StarCraft competition results. The top three bots were made by lone, hobbyist coders, as Wired reported.
Facebook research scientist Gabriel Synnaeve described CherryPi as a baseline against which to assess research progress. “We wanted to see how it compares to existing bots and test if it has flaws that need correcting,” he told Wired. The competition is part of AIIDE, an AAAI conference now in its 13th year; Facebook supported the event by sponsoring the hardware that ran the thousands of bot-on-bot games.
StarCraft is more complex than Go or chess, which makes it appealing to ambitious AI researchers. As Wired reported, the number of valid positions on a Go board is a 1 followed by 170 zeros. Researchers estimate that you would need to add at least 100 more zeros to reach the realm of StarCraft’s complexity.
Four AI Labs at Facebook
Facebook has four AI research labs, collectively known as FAIR (Facebook AI Research): in New York, Paris, Menlo Park and now Montreal. The newest lab opened in Montreal in the fall of 2017, led by Dr. Joelle Pineau, an expert in reinforcement learning and co-director of the Reasoning and Learning Lab at McGill University’s School of Computer Science. Facebook recently published an interview with Dr. Pineau in the blog section of its AI research website.
Dr. Pineau said she is working on enabling machines to make good decisions, or series of decisions, in complex, real-world situations, often with incomplete or incorrect information. Reinforcement learning enables machines to learn new tasks through experimentation, feedback and rewards. “If you make several decisions, what is the relationship between all of them? How do you plan out a course of action? How do you enable a machine to learn good strategies to optimize its behavior?” are questions Dr. Pineau is trying to answer with her research. “Reinforcement learning can be applied to tackle this problem. There are many use cases, including robotics, transportation, healthcare and dialogue systems.”
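The loop Dr. Pineau describes, experimentation, feedback and rewards, can be sketched with a generic tabular Q-learning example. The 5-state "corridor" task and all parameters below are made up for illustration; this is not anything from FAIR's actual research.

```python
import random

# Toy reinforcement-learning sketch: an agent in a 5-state corridor
# (states 0..4) earns a reward only on reaching the goal state 4.
# Through trial and error it learns that moving right is the good strategy.
N_STATES = 5
ACTIONS = (-1, +1)  # step left or right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Experimentation: occasionally try a random action.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                best = max(q[(state, a)] for a in ACTIONS)
                action = rng.choice([a for a in ACTIONS if q[(state, a)] == best])
            nxt, reward, done = step(state, action)
            # Feedback: nudge the estimate toward reward + discounted future value.
            target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned greedy policy moves right (+1) from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Real reinforcement-learning systems of the kind FAIR works on replace this lookup table with neural networks, but the decision-feedback-update cycle is the same.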
In addition to reinforcement learning, Facebook’s Montreal lab will be working on conversational agents, deep learning, optimization, computer vision and video understanding.
Asked why she chose to join FAIR, Dr. Pineau said, “It’s exciting to have the opportunity to work more closely with these fantastic AI researchers. The culture of FAIR is unique. I like the philosophy of open science, to be able to publish our papers and code, and talk about our research openly. Some companies share a little, some don’t at all.”
It also helped that Facebook agreed to put a lab in Montreal and allowed Dr. Pineau to remain in academia. Montreal has a tradition in AI research, with McGill University specializing in machine learning and reinforcement learning, and the University of Montreal focused on deep learning.
Applied Machine Learning Group is Busy
Joaquin Quinonero Candela has headed Facebook’s Applied Machine Learning group in Menlo Park since 2012. He came over from Microsoft in Cambridge, England, where he had worked since 2007 and where, through a determined individual effort, he succeeded in building some AI into the Bing search engine ahead of its 2009 release.
He was approached by Facebook and was impressed with how much more easily he would be able to turn research into products. Since joining the company, Candela has overseen a transformation of the company’s ad operation, as reported in Wired in an account published in February 2017. He used machine learning to make sponsored posts more relevant and effective.
Speaking to an audience of engineers at a New York City conference in January 2017, Candela generalized about the importance of AI at Facebook. “I’m going to make a strong statement,” he said. “Facebook today cannot exist without AI. Every time you use Facebook or Instagram or Messenger, you may not realize it, but your experiences are being powered by AI.”
Candela’s AML group is charged with integrating the work of FAIR into Facebook products, and encouraging all of Facebook’s engineers to integrate machine learning into their work.
One day after the US presidential election in November 2016, Facebook CEO Mark Zuckerberg remarked that “it’s crazy” to think that Facebook helped to elect Donald Trump by hosting fake news. Since then, Zuckerberg and everyone else have learned a lot about how easy it was for Facebook’s smart ad system to be manipulated to fuel division in the US. Now Facebook’s response to the fake news crisis relies heavily on machine learning efforts within the company.
The culture of collaboration at Facebook helps it bridge groups, such as the teams working on iPhone hardware and on image rendering, to create new products. Facebook demonstrated to Wired a new feature that redraws a photo or streams a video in the style of an art masterpiece by a distinctive painter, with all the processing done on the phone itself.
“By running complex neural nets on the phone, you’re putting AI in the hands of everybody,” Candela said. “That does not happen by chance. It’s part of how we’ve actually democratized AI inside the company.”
Facebook engineer Hussein Mehanna, who joined the company at the same time as Candela, impressed on him early the need to enhance the machine learning foundation supporting Facebook’s ad sales. Mehanna, also a Microsoft veteran, is now director of Facebook’s core machine learning group.
The effort was successful. Candela told Wired, “We became incredibly successful at predicting clicks, likes, conversions and so on.” Candela today works closely with the researchers within FAIR.
He breaks down Facebook’s application of AI into four areas: vision, language, speech and camera effects. All of those will lead to a “content understanding engine.” Facebook intends to detect subtle intent from comments, extract nuance from the spoken word, interpret your expressions and map them onto avatars in virtual reality sessions. “We are working on the generalization of AI,” including algorithms that can transfer knowledge from one task to another, he said.
For example, the Social Recommendations feature can go from a user asking friends for restaurant recommendations to a specific list being presented to the user. In natural language processing, Facebook’s DeepText system helps power the machine learning behind translations, which are applied to over four billion posts per day.
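The Social Recommendations flow amounts to two steps: detect that a post is asking for recommendations, then surface the suggestions friends reply with. A deliberately naive sketch of that pipeline is below; DeepText and Social Recommendations use neural text understanding, not the hypothetical keyword rules shown here.

```python
import re

# Illustrative two-step pipeline: (1) classify a post as a request for
# recommendations, (2) collect candidate place names from comment replies.
# The patterns and heuristics are invented for this sketch.
ASK_PATTERNS = [
    r"\bany (good )?recommendations?\b",
    r"\bwhere should (i|we) (eat|go|stay)\b",
    r"\bsuggestions? for\b",
]

def is_recommendation_request(post: str) -> bool:
    text = post.lower()
    return any(re.search(p, text) for p in ASK_PATTERNS)

def collect_suggestions(comments):
    """Naively gather capitalized phrases friends mention in replies."""
    suggestions = []
    for c in comments:
        suggestions += re.findall(r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", c)
    return suggestions

post = "Visiting Austin next week, any good recommendations for BBQ?"
comments = ["Franklin Barbecue is amazing!", "la Barbecue is great too"]
if is_recommendation_request(post):
    print(collect_suggestions(comments))
```

A production system would replace both heuristics with learned models, an intent classifier and a named-entity recognizer, which is exactly the kind of language understanding the article says DeepText provides.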
For images and video, the Facebook AML team has built a machine learning vision platform called Lumos. The idea originated with Manohar Paluri when he was an intern at FAIR, working on what he calls the visual cortex of Facebook, a means of processing and understanding all the images and videos posted on Facebook.
Today Paluri works with Candela at AML to build out Lumos so that Facebook’s engineers working on Instagram, Messenger, WhatsApp and Oculus can use the visual cortex. Longer term, Paluri suggests Facebook will combine the visual cortex with the natural language platform for generalized content understanding.
The Facebook AI teams were called on to help with the fake news purge, the all-hands-on-deck effort to rid Facebook of journalistic hoaxes, or at least make them more difficult to execute.
One FAIR tool helping in that effort is World2Vec, which adds a memory capability to neural nets so that every piece of content carries information about its origin and who has shared it. With that information, Facebook hopes to understand the sharing patterns that characterize fake news and potentially root out the hoaxes.
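The idea of attaching provenance to content so that sharing patterns become analyzable can be illustrated with a toy data model. Everything below, the class, the feature names, the share graph, is hypothetical and is not Facebook's actual World2Vec implementation.

```python
from collections import defaultdict

# Hypothetical sketch: each piece of content remembers its origin and its
# share history, so simple "sharing pattern" features can be computed.
class ContentItem:
    def __init__(self, content_id, origin):
        self.content_id = content_id
        self.origin = origin      # account that first posted the content
        self.shares = []          # (sharer, shared_from) edges

    def record_share(self, sharer, shared_from):
        self.shares.append((sharer, shared_from))

    def sharing_features(self):
        """Summarize the share graph into pattern features."""
        children = defaultdict(list)
        for sharer, source in self.shares:
            children[source].append(sharer)

        def depth(node):
            kids = children.get(node, [])
            return 1 + max((depth(k) for k in kids), default=0)

        return {
            "distinct_sharers": len({s for s, _ in self.shares}),
            "cascade_depth": depth(self.origin) - 1,  # reshare hops from origin
        }

item = ContentItem("post-42", origin="alice")
item.record_share("bob", shared_from="alice")
item.record_share("carol", shared_from="alice")
item.record_share("dave", shared_from="bob")
print(item.sharing_features())  # 3 distinct sharers, cascade depth of 2
```

Features like these, how widely and how deeply a post cascades, are the kind of signal a classifier could use to flag sharing patterns characteristic of hoaxes.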
Candela rejects the arguments some have made that powerful machine learning foundations can have unintended consequences that are harmful. “I think that we’ve made the world a much better place,” he told Wired. “The challenge is that AI is really in its infancy still. We’re only getting started.”
- Written and compiled by John P. Desmond, AI Trends Editor