Human Touch Keeps AI From Getting Out of Touch 

Humans need to stay in the loop because, on its own, AI may not be so smart, and its systems can lead to many unintended consequences. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

AI may be charting new ways to get out of touch. 

Maybe the agile, sometimes spontaneous frame of mind around software development that had taken hold in decentralized organizations before the arrival of AI is coming into conflict with the mindset needed to feed AI systems a constant, high-volume flow of clean, well-structured data. 

Sylvain Duranton, senior partner at Boston Consulting Group

This suggestion was broached by Sylvain Duranton, senior partner at Boston Consulting Group, in a recent TED Talk. “For the last 10 years, many companies have been trying to become less bureaucratic, to have fewer central rules and procedures, more autonomy for their local teams to be more agile. And now they are pushing artificial intelligence, AI, unaware that cool technology might make them more bureaucratic than ever,” he stated in a recent account in Forbes. 

At BCG, Duranton leads a team of 800 AI specialists who have deployed over 100 custom AI solutions for large companies around the world.  “I see too many corporate executives behaving like bureaucrats from the past. They want to take costly, old-fashioned humans out of the loop and rely only upon AI to take decisions,” he stated.  

He coined a term for it: “algocracy,” with the AI in control. He sees AI operating like a bureaucracy. 

“The essence of bureaucracy is to favor rules and procedures over human judgment. And if human judgment is not kept in the loop, AI will bring a terrifying form of new bureaucracy — I call it ‘algocracy,’ where AI will take more and more critical decisions by the rules outside of any human control,” Duranton stated. 

He favors a view of AI as “augmented intelligence,” with the humans, not the AI, running the show. A result of bureaucratic algocracy could be, for example, a new plane from a world-class aircraft manufacturer crashing and killing everyone on board. Hopefully that is the absolute worst-case scenario of AI run amok. 

In a survey of 305 executives conducted by Forbes Insights in 2018, only 16% indicated they had full trust in AI making low-level decisions, such as flagging errors, sending notifications, accepting payments and managing system performance. Only 6% had full trust in mid-level decisions, such as helping customers with problems and serving as intelligent agents to employees. However, a separate survey taken at the same time found that only 37% had a process in place to augment or override results if their AI systems did not perform well. 

Duranton suggested a decision-making process of “Human plus AI.” The mix of time commitment should be 10% for coding algorithms and 20% for building the technology around the algorithm: collecting data, building user interfaces and integrating with legacy systems. 

“But 70%, the bulk of the effort, is about weaving together AI with people and processes to maximize real outcome,” he stated. “The first step is to make sure that algos are coded by data scientists and domain experts together. Solve the most difficult problems together.” 

In keeping with this idea of humans in the loop, a recent report from Brookings suggests a dose of healthy skepticism about exaggerated claims for AI, especially around the COVID-19 pandemic. It offers some suggestions: 

Look to the subject matter experts  

“AI is only helpful when applied judiciously by subject-matter experts—people with long-standing experience with the problem that they are trying to solve,” stated author Alex Engler, a Rubenstein Fellow of Governance Studies at Brookings, who also teaches classes on large-scale data science and visualization at Georgetown’s McCourt School of Public Policy.  

For predicting the spread of COVID-19, look to epidemiologists, who have been using statistical models to examine pandemics for a long time. Mathematical models of smallpox mortality date back to 1766; modern mathematical epidemiology started in the early 1900s. “The field has developed extensive knowledge of its particular problems, such as how to consider community factors in the rate of disease transmission, that most computer scientists, statisticians, and machine learning engineers will not have,” Engler stated, adding, “There is no value in AI without subject-matter expertise.”  
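To make the reference concrete, here is a minimal sketch of the classic SIR (Susceptible-Infected-Recovered) compartmental model that underlies much of modern mathematical epidemiology. The function name and parameter values (a transmission rate of 0.3 and a recovery rate of 0.1) are illustrative assumptions for this sketch, not estimates drawn from the article or from COVID-19 data.

```python
# Minimal SIR (Susceptible-Infected-Recovered) compartmental model.
# Parameter values are illustrative assumptions only.

def simulate_sir(beta=0.3, gamma=0.1, i0=0.001, days=160, dt=1.0):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I with a simple Euler step, tracking population fractions."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(0.0, s, i, r)]
    for step in range(1, int(days / dt) + 1):
        new_infections = beta * s * i * dt   # transmission term
        new_recoveries = gamma * i * dt      # recovery term
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((step * dt, s, i, r))
    return history

if __name__ == "__main__":
    trajectory = simulate_sir()
    peak = max(trajectory, key=lambda row: row[2])  # row[2] is infected fraction
    print(f"Infection peaks on day {peak[0]:.0f} at {peak[2]:.1%} of the population")
```

Under these assumptions, the ratio beta/gamma gives a basic reproduction number of 3. The community factors Engler mentions enter through the transmission rate beta, which is exactly where subject-matter expertise, rather than generic machine learning skill, is needed.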

Plan for unintended consequences  

Efforts to use AI to track the spread of COVID-19 have led to conflicts between surveillance technology and the right to privacy. In South Korea, neighbors of confirmed COVID-19 patients were given details of that person’s travel and commute history. Taiwan used cell phone data to monitor individuals assigned to stay in their homes; Italy and Israel are moving in that direction.  

Also of “exceptional concern” is the social control technology deployed in China. 

“Government action that curtails civil liberties during an emergency (and likely afterwards) is only part of the problem,” Engler states. “The incentives that markets create can also lead to long-term undermining of privacy.” Among companies trying to sell mass-scale surveillance tools to the federal government are Palantir and Clearview AI, which scraped the web to make an enormous database of faces, without permission of the subjects.   

“If governments and companies continue to signal that they would use invasive systems, ambitious and unscrupulous start-ups will find inventive new ways to collect more data than ever before to meet that demand,” Engler suggests. 

He is somewhat optimistic about AI, impressed by its impact in medical imaging, where it is used to evaluate the malignancy of tissue abnormalities and reduce the need for invasive biopsies. Also, AI-designed drugs are now starting human trials, and the use of AI to summarize thousands of research papers may quicken medical discoveries relevant to COVID-19. 

“AI is a widely applicable technology, but its advantages need to be hedged in a realistic understanding of its limitations,” Engler states.  

Maybe AI is Not So Smart  

Another thinker suggests AI may not be so smart.  

Jonathan Tennenbaum, researcher and consultant on economics, science and technology, based in Berlin

Jonathan Tennenbaum is a researcher and consultant on economics, science and technology, based in Berlin. He is an International Collaborator at the Center for the Philosophy of Sciences at Lisbon University. He suggests in a series of recent articles in Asia Times that investigations into the weaknesses of current AI lead to the “stupidity problem.” 

The current trend of using the field of neurobiology to chart a path for AI might be misguided, he suggests. “However successful, and even indispensable in many practical spheres today, the dominant approaches to artificial intelligence remain rooted in false conceptions about the nature of the mind and of the brain as a biological organ,” Tennenbaum states. 

He adds, “On the level of biology and physics, the brain has virtually nothing in common with digital processing systems.” 

And, “It is remarkable that in their writings about the human brain, the pioneers of artificial intelligence, such as John von Neumann, Alan Turing, Marvin Minsky and John McCarthy, all failed to recognize the implications of the fact that neurons in the brain are living cells.” 

Mathieu Moneyron, a student at Polytech Sorbonne Paris, and an intern at Smile Open Source Solutions

Food for thought, certainly. Similar sentiments were expressed by a student writing recently in Medium about trying to understand why AI is stupid. 

“I’m a French engineering student and I’m currently attending a course on Artificial Intelligence, deep learning, neural nets and other machine learning techniques. I’m not particularly a huge fan of AI, but I think it can still be useful,” stated Mathieu Moneyron, a student at Polytech Sorbonne in France and an intern at Smile Open Source Solutions outside Paris. 

He is not sure the term “artificial intelligence” is appropriate. “Non-specialists may be mistaken by this term. Technology enthusiasts think this is magic, AI will radically remodel our world, AI will solve all the problems in the world, AI will remove poverty and inequalities, AI will remove hunger, AI is the future. On the other side, some people think AI will take their jobs, AI will spy on me,” he stated, adding, “I think everyone is wrong.” 

He refers to Luc Julia, a French engineer currently working at Samsung, who was involved in the development of Apple’s Siri. “He claims that when this research field was created, scientists made a big mistake by calling it Artificial Intelligence. He suggests to use the term Augmented Intelligence instead. Our human intelligence can be augmented thanks to the machine and algorithms running on it.” 

Read the source articles in Forbes, at Brookings, in Asia Times and in Medium.