By Deborah Borfitz, Senior Science Writer, AI Trends
Digital assistants have become a major trend in government at every level and across geographies, and could soon be a mainstay in many state and federal agencies in the U.S. Recent favorable signs include an executive order launching the American AI Initiative and the Health and Human Services Department awarding 57 spots on its Intelligent Automation/Artificial Intelligence (AI) contract, according to natural language processing (NLP) expert William Meisel, president of TMA Associates.
Speaking at the AI World Government conference, held last month in Washington, D.C., Meisel says digital assistants (aka “intelligent” or “virtual” assistants) are among the most developed and least risky ways to implement AI—and “the closest to what we see in sci-fi.” Digital assistants are broadly applicable across departments and agencies looking to cut costs and boost human productivity, and they carry minimal risk of failure and unintended consequences. For a citizenry looking for answers, they’re also a “nice alternative to automated systems and long hold times,” he adds.
Juniper Research reports that, by 2023, one-quarter of the populace will be using digital voice assistants daily, says Meisel. By the end of this year, the global installed base for smart speakers is projected to exceed 200 million units, and the devices are matching humans in speech understanding, he adds.
NLP is the core technology, matching user text input to executable commands. Digital assistants that recognize the voice typically first convert speech to text, meaning speech recognition can be tacked onto a text-only (chatbot) solution, Meisel says. Either way, the technology generates a lot of data that can be used to personalize conversations and fix flaws in websites.
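The intent-matching core Meisel describes can be sketched minimally. This is a hypothetical illustration, not any vendor's actual pipeline; the intents, patterns, and fallback behavior are invented for the example:

```python
# Minimal sketch of the NLP core: map user text to an executable intent.
# All intents and patterns here are hypothetical examples; production
# systems use trained statistical models rather than hand-written regexes.
import re

INTENT_PATTERNS = {
    "check_status": re.compile(r"\b(status|progress)\b.*\b(case|application)\b"),
    "office_hours": re.compile(r"\b(hours|open|close)\b"),
}

def match_intent(user_text: str) -> str:
    """Return the first intent whose pattern matches, else a fallback."""
    text = user_text.lower()
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(text):
            return intent
    return "fallback"  # route to a human agent or a clarifying prompt

print(match_intent("What's the status of my application?"))  # check_status
```

Because the matcher works on text, a speech-recognition front end can be added without changing this layer, which is the modularity Meisel points to.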
Among the smorgasbord of intelligent assistants in the public sector are: Emma, used by U.S. Citizenship and Immigration Services (USCIS) to help website visitors get answers and find information in English or Spanish; and Mrs. Landingham, a chatbot of the U.S. General Services Administration that works with the Slack app and guides new-hire onboarding, says Meisel.
In the UK, the National Health Service has a digital assistant to help residents determine if their medical condition warrants a trip to the emergency room, he continues. The Medical Concierge Group in Uganda has built a digital assistant to advise people on their treatment options and when to see a doctor. And a chatbot based on the Haptik platform is allowing officials in Maharashtra, India, to provide conversational access to information on 1,400 public services managed by the state government.
Virtual assistants that give health advice over the phone are expected to be major players as the United Nations works to meet its 2030 Sustainable Development Goals, says Meisel. The MAXIMUS Intelligent Assistant has already enhanced the customer service experience for citizens of governments around the globe.
In Mississippi, the MISSI chatbot assists with public services and suggests good places to visit, says Meisel. The City of Los Angeles is particularly fond of bots. Residents can turn to CHIP (City Hall Internet Personality), based on Microsoft’s Azure bot framework, if they need help filling out forms. They can also opt to receive local daily news and information via Amazon’s Alexa. Up in San Francisco, PAIGE—built atop Facebook’s wit.ai NLP engine—is assisting workers with questions about the city’s confusing procurement process, Meisel says.
OpenData KC, the Facebook Messenger chatbot used by Kansas City’s open data portal, has enabled users to quickly find relevant information and datasets on a crowded website, says Meisel. In the Carolinas, the not-for-profit hospital network Atrium Health has a HIPAA-compliant, Alexa-based digital assistant people are using to reserve a spot at one of the system’s 31 urgent care locations.
By the Numbers
Commercial applications of digital assistants are likewise varied and widespread, he says. Last year, Bank of America launched a mobile app called Erica that customers can use to check their account balance and make transactions. Erica is gaining new users at the rate of half a million per month and has doubled (to 4,000) the ways in which clients can ask her questions. Telecommunications conglomerate Vodafone Group has TOBi to handle customer transactions from start to finish and plans to increase the number of contacts reached by chatbot six-fold over the next few years.
Adobe Analytics finds 91% of 400 surveyed business decision-makers have made significant investments in voice interaction and 99% are increasing those investments, says Meisel. Close to a quarter of companies have already released a voice app while 44% plan to do so this year, he adds. Most of those apps are for defined channels.
Failures are common when companies and governments try to build a specific AI tool, says Meisel, but there is no shortage of companies standing by to help—including TMA Associates as well as Nuance Communications, Verint, and Microsoft.
The biggest challenge with digital assistants, he adds, is that “you don’t know what people are going to say when they call in. You will always have customers say something you don’t expect.” The solution is to deploy slowly, using the technology to augment the human system while you learn what you don’t know—precisely what Amazon did with Alexa.
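The deploy-slowly approach Meisel recommends is often implemented as a confidence threshold: the bot answers only what it is sure about, escalates the rest to a human, and logs the unexpected utterances for later training. A minimal sketch, with a placeholder classifier standing in for a real NLP model:

```python
# Sketch of gradual deployment: answer only high-confidence queries,
# escalate the rest to humans, and log them to learn what you don't know.
# The classifier, intents, and threshold are hypothetical placeholders.

UNEXPECTED_LOG: list[str] = []

def classify(text: str) -> tuple[str, float]:
    # Placeholder: a real system would return a trained model's prediction
    # and its confidence score.
    known = {"reset password": ("reset_password", 0.95)}
    return known.get(text.lower().strip(), ("unknown", 0.2))

def handle(text: str, threshold: float = 0.8) -> str:
    intent, confidence = classify(text)
    if confidence >= threshold:
        return f"bot:{intent}"
    UNEXPECTED_LOG.append(text)  # feed these back into model training
    return "human:escalated"
```

Lowering the threshold over time, as the log of surprises shrinks, shifts work from humans to the bot at a pace the organization can verify.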
Privacy By Design
Detecting and resolving misunderstandings between humans and machines is the specialty of Ian Beaver, lead research engineer at Verint, who works to ensure intelligent virtual assistants deliver tangible productivity gains. Interactive voice response technology and website FAQs are not enough for government agencies where “funding is pulled in multiple directions and customer service is typically not high on the priority list,” he says.
Digital assistants can also better accommodate fluctuations and surges in user demand, says Beaver. They can deal with unforeseen use cases, circumstantial information, and changing user demographics and requirements, and they focus improvement efforts where they are most needed.
In the public sector, agencies have a captive audience because people have no choice in their service provider, says Beaver, and “people don’t trust what they do not choose.” Users are more willing to provide identifiable data if they know there are guardrails around how it can be used—think privacy protection laws like GDPR and CCPA. Virtual assistants that offer “privacy by design” can likewise give users a greater sense of freedom to talk about sensitive topics without perceived repercussions, he adds.
Beaver presented the U.S. Army’s SGT STAR virtual recruiter to demonstrate his points. The chatbot went live in 2006, integrated with Facebook four years later and hit app stores in 2012, he says. It understands about 1,100 distinct user intentions, talks to 900 unique people a day and took over the work of 55 cyber-recruiters in the Army’s live chat facility. “After a while, only 3% of conversations hit a live human,” he says.
People talk to SGT STAR longer than they would a human recruiter, and all that data quickly painted a portrait of users, says Beaver. They’re influenced by movies, don’t want to waste recruiters’ time, and family members care about what will happen to their loved one. They will ask hard questions of the digital assistant before talking to a human recruiter, “like a test run.” Users also have a lot of practical questions, such as “How do I pay my bills when I’m deployed?” and “Am I going to have to cut my hair?”
The information was used to redesign the Army’s website, which now includes answers to 400 common questions, says Beaver. Unexpected uses of SGT STAR were also discovered—notably, to disclose embarrassing, illegal and other personal issues that could affect enrollment or Army life. “We went into the healthcare space because of how open people were.”
Veterans and active duty personnel were both looking for resources on post-traumatic stress disorder, insomnia, and other service-related mental health issues, he continues. Enlisted soldiers did not want to risk being judged or discharged by going to their supervisor for answers.
A second case study looked at the now-retired SpectreScout, a chatbot built to scan Internet Relay Chat (IRC) channels on behalf of the resource-limited cyber-crimes division of U.S. Immigration and Customs Enforcement (ICE). “We predicted the channels where the bad boys would hang out and pretended to be looking for goods [e.g., a child to exploit],” says Beaver. “We’d spin up a bunch of users and have conversations with ourselves. When humans joined in, we’d arrange a meeting with a suspect and an ICE agent would take over.” It was so successful at generating leads that it finally had to be shut down; ICE agents couldn’t keep up.
So Verint built Emma to answer questions, says Beaver, including sensitive inquiries about how to enter the U.S. as a refugee or to apply for asylum. The virtual assistant launched in 2015 to handle large swings in call volume triggered by mere mentions of policy changes. Emma succeeded in reducing those call spikes, and without the need to retrain a bunch of USCIS representatives, he notes. In fact, Emma became a go-to resource for immigration attorneys on policy matters.
In healthcare, one role of digital assistants is to create a “frictionless experience” between doctors and patients, says Eduardo Olvera, director of user experience at Nuance Communications. The company has an Alexa-like virtual assistant that extracts insights from routine exam-room dialogue and automatically uploads the information to the right spots in the electronic health record—meaning, doctors can be fully present with patients and not focused on a computer screen. In call centers, intelligent assistants can likewise be fed transcripts of phone conversations and return recommendations for improving citizen engagement, he adds.
Nuance Communications’ new Project Pathfinder is using machine learning and AI innovation to increase the conversational intelligence of virtual assistants and chatbots, says Olvera. Project Pathfinder reads existing chat logs and transcripts of conversations and uses the data to build effective dialog models, adding in missing pieces of information and modifying the flow. The company has also just developed another, yet-to-be-named conversational AI tool that will manage the work of testing for biases in the data, he says.
Systems get smarter, and interactions improve, when there is “common ground,” Olvera says. “We went from ‘please tell me in my own words’ to ‘please tell me in your own words,’ but that’s still not very grounded. What we want to see is ‘I already know. Here, it’s done.’”
If-then rules only work about 35% of the time, Olvera says, and using machine learning to populate a record with everything known about a customer gets close to 80% accuracy in predicting their need. Additional AI innovation can take governments to the 90% mark that is truly transformative.
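Olvera's point about rules versus records can be illustrated with a toy comparison: a hand-written if-then rule sees only the current request, while a system fed the customer's record can use everything already known about the caller. The rules, record fields, and intents below are invented for illustration; a real system would replace the record lookup with a trained model:

```python
# Toy illustration: an if-then rule vs. a record-aware predictor.
# All rules, record fields, and intent names here are hypothetical.

def rule_based(request: str) -> str:
    # A single hand-written rule covers only the cases its author foresaw.
    if "bill" in request.lower():
        return "billing"
    return "unknown"

def record_based(request: str, record: dict) -> str:
    # Using prior context recovers intent even when the request is vague.
    if "bill" in request.lower() or record.get("open_invoice"):
        return "billing"
    if record.get("recent_application"):
        return "application_status"
    return "unknown"

# A vague request defeats the rule but not the record-aware predictor.
print(rule_based("I called yesterday"))                            # unknown
print(record_based("I called yesterday", {"open_invoice": True}))  # billing
```

The gap between the two functions on vague inputs is the gap Olvera describes between roughly 35% and 80% accuracy, with further AI work needed to close the rest.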