New AI Capabilities Enabling Greater Collaboration with Knowledge Workers

By Paul R. Daugherty and H. James Wilson, Accenture Research

New AI capabilities that can recognize context, concepts, and meaning are opening up surprising new pathways for collaboration between knowledge workers and machines. Experts can now provide more of their own input for training, quality control, and fine-tuning of AI outcomes. Machines can augment the expertise of their human collaborators and sometimes help create new experts. These systems, in more closely mimicking human intelligence, are proving more robust than the big-data-driven systems that came before them. And they could profoundly affect the 48% of the US workforce who are knowledge workers—and the more than 230 million knowledge-worker roles globally. But to take full advantage of the possibilities of this smarter AI, companies will need to redesign knowledge-work processes and jobs.

Knowledge workers—people who reason, create, decide, and apply insight in non-routine cognitive processes—largely agree. Of more than 150 such experts drawn from a larger global survey on AI in the enterprise, almost 60% say their old job descriptions are rapidly becoming obsolete in light of their new collaborations with AI. Some 70% say they will need training and reskilling (and on-the-job learning) due to the new requirements for working with AI. And 85% agree that C-suite executives must get involved in the overall effort of redesigning knowledge-work roles and processes. As those executives embark on the job of reimagining how to better leverage knowledge work through AI, here are some principles they can apply:

Let human experts tell AI what they care about. Consider medical diagnosis, where AI is likely to become pervasive. Often, when AI offers a diagnosis, the algorithm’s reasoning isn’t obvious to the doctor, who ultimately must offer an explanation to a patient—the black box problem. But now, Google Brain has developed a system that opens up the black box and provides a translator for humans. For instance, a doctor considering an AI diagnosis of cancer might want to know to what extent the model considered various factors she deems important—the patient’s age, whether the patient has previously had chemotherapy, and more.

The Google tool also allows medical experts to enter concepts they deem important into the system and to test their own hypotheses. So, for example, an expert might want to see whether consideration of a factor the system had not previously weighed—like the condition of certain cells—changes the diagnosis. Says Been Kim, who is helping develop the system, “A lot of times in high-stakes applications, domain experts already have a list of concepts that they care about. We see this repeat over and over again in our medical applications at Google Brain. They don’t want to be given a set of concepts — they want to tell the model the concepts that they are interested in.”
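To make the idea concrete, here is a minimal, illustrative sketch of concept-based testing in the spirit of Kim's Testing with Concept Activation Vectors (TCAV). Everything in it is hypothetical: the activations, gradients, and the "prior chemotherapy" concept are randomly generated stand-ins, not Google's implementation.

```python
# A toy TCAV-style concept test. In a real system, the activations would
# come from a hidden layer of a trained diagnostic model, and the
# gradients from that model's "cancer" output. Here both are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear probe separating concept examples from random ones;
    its normalized weight vector (the CAV) points toward the concept."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    v = probe.coef_[0]
    return v / np.linalg.norm(v)

def tcav_score(cav, gradients):
    """Fraction of patient cases whose class-logit gradient has a
    positive component along the concept direction, i.e. how often
    moving toward the concept raises the predicted probability."""
    return float(np.mean(gradients @ cav > 0))

# Hypothetical 128-dim layer activations for examples that show the
# concept the doctor cares about ("prior chemotherapy") vs. random ones.
concept_acts = rng.normal(0.5, 1.0, size=(50, 128))
random_acts = rng.normal(0.0, 1.0, size=(50, 128))

# Hypothetical gradients of the "cancer" logit at the same layer,
# taken over a set of patient cases.
grads = rng.normal(0.2, 1.0, size=(200, 128))

cav = concept_activation_vector(concept_acts, random_acts)
print(f"TCAV score for 'prior chemotherapy': {tcav_score(cav, grads):.2f}")
```

A score near 1.0 would suggest the model's cancer prediction is consistently sensitive to the concept; a score near 0.5 would suggest it largely ignores it. That is exactly the kind of question the doctors above want to put to the model.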

Make models amenable to common sense. As cybersecurity concerns have mounted, organizations have increasingly deployed instruments that collect data at various points in their networks to analyze threats. However, many of these data-driven techniques do not integrate data from multiple sources. Nor do they incorporate the common-sense knowledge of cybersecurity experts, who know the range and diverse motives of attackers, understand typical internal and external threats, and can gauge the degree of risk to the enterprise.

Researchers at the Alan Turing Institute, Britain’s national institute for data science and artificial intelligence, are trying to change that. Their approach uses a Bayesian model—a method of probabilistic analysis that captures the complex interdependence among risk factors and combines data with judgment. In cybersecurity for enterprise networks, those complex factors include the large number and variety of devices on the network and the knowledge of the organization’s security experts about attackers, risk, and much else. While many AI-based cybersecurity systems incorporate human decision-making only at the last minute, the Institute’s researchers are seeking ways to represent and incorporate expert knowledge throughout the system. For instance, security analysts’ expert understanding of the motivations and behaviors behind an IP-theft attack—and how those may differ from, say, a denial-of-service attack—is explicitly programmed into the system from the start. In the future, that human knowledge, in combination with data from machines and networks, will be used to train more effective cybersecurity defenses.
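As a rough illustration of how expert judgment can enter such a model from the start, the sketch below applies Bayes' rule to combine an expert-elicited prior over attacker types with likelihoods the expert assigns to network observations. The attack types, numbers, and observation names are all invented for this example; the Institute's actual models are far richer Bayesian networks.

```python
# A toy Bayesian update combining expert priors with network evidence.
# All priors, likelihoods, and observation names are hypothetical.
import numpy as np

# Expert-elicited prior over attacker type for this enterprise.
attack_types = ["ip_theft", "denial_of_service", "insider_misuse"]
prior = np.array([0.2, 0.5, 0.3])

# Expert-encoded likelihoods: P(observation | attack type) for two
# hypothetical signals from network instrumentation.
# Array columns align with attack_types.
likelihood = {
    "large_outbound_transfer": np.array([0.70, 0.05, 0.40]),
    "traffic_spike":           np.array([0.10, 0.90, 0.20]),
}

def posterior(prior, observations):
    """Multiply the prior by each observation's likelihood and
    renormalize (observations treated as conditionally independent
    given the attack type)."""
    p = prior.copy()
    for obs in observations:
        p *= likelihood[obs]
    return p / p.sum()

post = posterior(prior, ["large_outbound_transfer"])
for attack, p in zip(attack_types, post):
    print(f"P({attack} | evidence) = {p:.2f}")
```

Note that the experts' knowledge shows up in both the prior and the likelihoods, before any data arrives, rather than being bolted on as a final review step.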

Paul R. Daugherty is Accenture’s chief technology and innovation officer. He is a coauthor, with H. James Wilson, of Human + Machine: Reimagining Work in the Age of AI (Harvard Business Review Press, 2018).

H. James Wilson is a managing director of Information Technology and Business Research at Accenture Research. Follow him on Twitter @hjameswilson.

Read the source article in Harvard Business Review.