Amazon started offering Amazon Web Services in 2006. Today AWS accounts for billions in revenue, is a prime driver of the company’s stock value, and counts customers ranging from startups to behemoths such as Netflix and Spotify.
At AWS Summit in New York in mid-August, Amazon announced products including Macie, which can scan the files you have on AWS and tell you if any sensitive data — such as social security or credit card numbers — has been exposed to the outside world.
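Macie’s detection methods are not public, but the general idea of scanning text for exposed sensitive-data patterns can be illustrated with a toy sketch. The regular expressions and function below are illustrative assumptions only, not Macie’s actual logic, which also uses machine learning and contextual analysis.

```python
import re

# Toy patterns for illustration only; a production scanner such as Macie
# uses far more sophisticated detection (checksums, context, ML models).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def find_sensitive(text):
    """Return (kind, match) pairs for any sensitive-looking strings."""
    hits = []
    for kind, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits

print(find_sensitive("Contact: SSN 123-45-6789, card 4111 1111 1111 1111"))
```

In a real deployment, a scan like this would run over each object in an S3 bucket and flag files containing matches, rather than a single string.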
In the time AWS has been on the market, Amazon and its customers have gained experience in machine learning. A recent summary of Amazon’s AI efforts in Barron’s noted these customer experiences with AWS AI:
Consumer health software provider ZocDoc has developed a machine vision application; Instacart, a grocery delivery service, uses AWS AI to track where produce is located in local stores; Pinterest, the image-sharing service, uses it to classify images; Stitch Fix, a fashion startup, uses it to help predict fashion trends.
In an interview with Barron’s, Matt Wood, head of product management for deep learning efforts within Amazon AWS, offered his perspective on AI within Amazon. Wood earned a medical degree and then a PhD in machine learning, and worked on protein folding for the Human Genome Project. He joined Amazon’s European offices in 2010.
Like the overall AWS mission, the Amazon mission with AI is “to put tech in the hands of as many developers as possible,” Wood said. “We’re taking the exact same approach with machine learning as we took with Web services.”
AWS offers machine learning at several levels, from tools that let customers build their own data models to application program interfaces (APIs) that developers use to connect to AI services such as speech recognition or chatbots.
“Five or ten years ago, it was quite a task to build a website. Today, anyone can build one,” Wood said. Amazon’s goal is to make machine learning and access to AI services simpler.
As the AI learns, it gets better at inference. For example, Amazon’s Echo smart home device performs a form of inference when it wakes up in response to the wake word “Alexa.” At that point, the device connects to Amazon’s cloud, where the AI lives.
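That division of labor, a lightweight local check for the wake word followed by a handoff to the cloud where the heavyweight AI lives, can be illustrated with a deliberately simplified sketch. Real wake-word detection runs a neural acoustic model on raw audio on the device; the string matching below is an illustrative stand-in, not Amazon’s implementation.

```python
# Simplified sketch: cheap on-device wake-word check, then cloud handoff.
WAKE_WORD = "alexa"

def local_wake_word_check(transcript: str) -> bool:
    """Cheap local check: does the utterance start with the wake word?"""
    return transcript.lower().strip().startswith(WAKE_WORD)

def handle_utterance(transcript: str) -> str:
    if not local_wake_word_check(transcript):
        # Device stays asleep; nothing is sent anywhere.
        return "ignored (device stays asleep)"
    # In a real device, the audio following the wake word would now be
    # streamed to the cloud service for full speech recognition.
    command = transcript.strip()[len(WAKE_WORD):].strip(" ,")
    return f"sent to cloud: {command!r}"

print(handle_utterance("Alexa, what's the weather?"))
print(handle_utterance("hey google, what's the weather?"))
```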
Jeff Bezos, CEO and founder of Amazon, speaking at the Internet Association meeting in May 2017, as quoted in Wired, said “A lot of the value we’re getting from machine learning is actually happening beneath the surface. It is things such as improved search results, improved product recommendations, improved forecasting for inventory management and hundreds of other things.”
Bezos calls Alexa, the intelligent personal assistant introduced in November 2014, part of the “golden age” of AI.
Natural Language Development
Amazon approached Rohit Prasad, now Alexa’s head scientist, in 2013 about creating voice-activated AI. Prasad had spent his career working on natural language and speech recognition at BBN Technologies; his clients included the Defense Advanced Research Projects Agency.
“My eyes lit up,” he told Wired. “For a long time in speech and language, we said the ultimate application is when you are liberated from your eyes and hands. I was up to the challenge.”
Around this time, Amazon acquired two AI startups, Yap of North Carolina and Evi of Cambridge, England, which helped lay the foundation for Alexa’s voice technology.
When Echo launched in 2014, it was an instant hit, and Amazon has sold tens of millions of Alexa-enabled devices since. “It was fundamentally new and different. No one had done that before,” said Mike George, Amazon’s vice president of Echo, Alexa and Appstore for Android, interviewed by Wired in August 2017.
Competition is now well underway. Google Home launched in 2016. Apple is scheduled to begin shipping its Siri-enabled HomePod in December 2017. Microsoft is working with third parties to create Cortana-powered speakers. Google has opened its APIs to encourage developers to build apps for Google Home, concentrating especially on image recognition and translation.
Amazon’s head start is significant. The firm offers voice apps built by outside developers, called Skills; over 12,000 are available as of August 2017. These include skills for turning on your lights, calling an Uber, teaching you to speak Chinese, or ordering products for delivery.
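At their core, Skills are request handlers: Alexa sends the developer’s service a JSON request describing what the user asked, and the service returns JSON telling Alexa what to say. The sketch below is a minimal, hypothetical handler in the style of an AWS Lambda function; the response shape follows the Alexa Skills Kit JSON format, but the intent name and greeting logic are made-up examples, not a real published skill.

```python
# Minimal sketch of an Alexa skill handler (AWS Lambda style).
# The JSON shape follows the Alexa Skills Kit response format; the
# "HelloIntent" logic is an invented example for illustration.

def lambda_handler(event, context=None):
    # Pull the intent name out of the incoming Alexa request, if any.
    intent = (
        event.get("request", {})
             .get("intent", {})
             .get("name", "UnknownIntent")
    )
    if intent == "HelloIntent":
        speech = "Hello from a custom skill."
    else:
        speech = "Sorry, I don't know that one."
    # Return the JSON structure Alexa reads aloud to the user.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Example invocation with a request shaped like an Alexa IntentRequest:
event = {"request": {"type": "IntentRequest", "intent": {"name": "HelloIntent"}}}
print(lambda_handler(event)["response"]["outputSpeech"]["text"])
```

A production skill would also declare its intents and sample utterances in an interaction model, which Alexa’s cloud uses to map spoken phrases to intent names before the handler is ever called.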
Amazon forms internal development teams to focus on specific Alexa features. “We have thousands of people working on Alexa across different domains and at the foundational science level,” George told Wired. “Name the domain, name the type of interaction, and we form these single-threaded teams to go after them.”
The success of AWS has laid a foundation for Alexa. “If you think of our lineage, close to 50 percent of Amazon’s global unit volume comes through the fact that we opened our platform to third-party merchants,” George said. “With AWS, we built primitive computing services in the beginning, where software developers were our primary customers. It benefited our ability to move faster, so we have this history of openness. That carried forward into the way we thought about Alexa.”
The Alexa Voice Service allows Alexa to be built into everything from washing machines and air purifiers to baby monitors and toothbrushes. Through the Alexa Fund, a $100 million venture capital effort, Amazon is funding startups. “The world is going to solve problems that we hadn’t even thought of,” said George.
One person put Echo Dots on the ceiling for his disabled brother, making contact with his family easier. Parkinson’s patients can use Alexa to practice their speech.
Amazon introduced the Echo Show, an Echo with a screen, in April 2017. It can make hands-free voice and video calls, and it connects to everything Alexa does. Soon after the Show was revealed, Google announced that its YouTube videos would not be available on it; making them available would have ceded too much territory to a competitor in the battle for smart home device share. Amazon dropped the price by $30, to $200. A smaller Echo Spot, scheduled for release in December 2017, is expected to be priced at $130, according to an account in Engadget.
Alexa Getting More Human
Research continues apace at Amazon. A group of behavioral scientists and engineers is refining Alexa’s personality, an effort led by Toni Reid, VP of Alexa experience and Echo devices. Her team studies how analytics can move Alexa beyond short conversations toward major features of an individual personality.
The Alexa Prize competition aims to accelerate this work. Launched in September 2016, it challenges university students to create a social bot that can hold a coherent conversation for 20 minutes, which means responding to emotional cues as well as demonstrating word knowledge and perspective. Fourteen teams are competing; the winning team gets $500,000, and its university gets an additional $1 million if the bot meets the 20-minute goal.
Head scientist Prasad said to Wired, “How do you respond to the non-verbal cues? That to me is the ultimate AI. That’s the next step.”
One improvement, made in April, bleeps out profanity; customers need to trust Alexa before it can come across as more human.
Echo Look brings Amazon’s cameras into the home. Launched in April, Echo Look is a new device with a camera and microphone intended for your bathroom, bedroom or closet, the most private of spaces.
One intended use, according to Amazon, is to help the customer choose the right outfit for the day. Alexa can take a photo (or video), and Style Check will tell you how well your outfit “works.” Eventually, if not already, the customer would be able to buy a replacement for whatever in the outfit is not working.
Privacy concerns around Alexa will be ever-present, but there is no stopping it. Alexa is installed in all 4,748 rooms of the Wynn Las Vegas hotel, as Wired reported, and machine learning is making it smarter over time.
- Written and compiled by John P. Desmond, AI Trends Editor