Practice Ethical AI, Dr. Shirley Jackson, President of Rensselaer Polytechnic Institute, Challenges Business Leaders

Dr. Shirley Jackson, President, Rensselaer Polytechnic Institute

At an event on the Ethics of AI presented by the New England Executive Council of Rensselaer Polytechnic Institute, Dr. Shirley A. Jackson, president of the Institute, challenged today’s executive leadership to do the right thing.

“AI is on a knife’s edge. It’s up to us to decide how it will be used,” Dr. Jackson said.

AI is advancing rapidly and is expected to deliver a substantial economic benefit. Dr. Jackson cited a McKinsey Global Institute study estimating that AI will add $13 trillion to global economic output by 2030. That study compared the impact of AI to the introduction of steam engines in the 1800s and of robots in the 1990s.

“In addition to economic benefit, AI will benefit humanity,” Dr. Jackson said, such as through its widespread impact in medicine and healthcare, where it can detect and monitor disorders and even help mental health patients. “And we’re taking stock of the darker possibilities,” she cautioned.

She mentioned the news feeds of Facebook and other social media outlets spreading disinformation leading up to the 2016 US elections; reports that China is building a social ranking system to apply to its citizens; corporate surveillance; erosion of data privacy; and human bias in data used to train AI systems.

She also mentioned questions surrounding the two recent crashes of the Boeing 737 MAX, in which the human pilots appeared to have lost control of the planes to a faulty flight guidance system with AI at its core; neither crash left survivors. “Those crashes are still being investigated today, but it causes us to question: what are the limits of AI software systems today?” Dr. Jackson asked.

RPI Contributions to AI

RPI graduates have made contributions in AI. Curtis Priem is a co-founder of NVIDIA, maker of the graphics processing unit (GPU) chips that have helped power the rise of AI. Those chips were originally targeted at the gaming industry, in which RPI has played a role as well; RPI currently offers a Games and Simulation Arts and Sciences program, with faculty including Dr. Maurice Suckling, who has over 20 years of games industry experience as a writer, designer and producer.

The panel addressing the ethics of AI, held recently in Boston, included Dr. Jackson as moderator and the following panelists, all RPI graduates:

– Paul Bleicher, CEO of Optum Labs, concentrating on healthcare, who has an extensive background with the biotech and pharmaceutical industries, as well as teaching experience at Mass General Hospital and Harvard Medical School;

– Dawn Fitzgerald, head of digital transformation for Schneider Electric Data Center Operations, where she helps lead the integration of AI technology into critical facilities environments; she has worked at IBM and Motorola, and holds graduate degrees from MIT: one in electrical engineering and an MBA from the MIT Sloan School;

– John E. Kelly III, senior VP, Cognitive Solutions and Research with IBM, where he focuses on IBM’s investments in IBM Watson and Cloud Platform, IBM Watson Health and IBM Security; he previously served as a director of IBM Research for seven years;

– Kathryn I. Murtagh, managing director and chief compliance officer of Harvard Management Co., Inc., which manages the Harvard University endowment and financial assets; she is responsible for regulatory and legal matters relating to HMC’s investment activity, and also for implementing HMC’s sustainable investing program; she was previously a law partner at Goodwin Procter LLP of Boston.

– Brian Stevens, vice president and CTO at Google Cloud.

Humans Need to Maintain Control

Dr. Jackson asked Dawn Fitzgerald how AI can help to secure, for example, a petrochemical plant. Referring to the recent Boeing 737 MAX crashes, Fitzgerald said, “We don’t know if it was a complex AI system that failed, but we know the humans could not take control.”

In her design work at Schneider, Fitzgerald wants to ensure humans can maintain control. “We need to have humans controlling AI to have ethical AI systems,” she said. “And we need to actually design ethical AI.”

Dr. Jackson asked if AI systems need to be designed to allow humans to take back control if necessary. “Absolutely,” said Fitzgerald. She also emphasized the importance of education and training of the workforce using the AI systems.

“A data scientist might understand it, and the technician on the floor [using it] also needs to understand it,” she said.
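Fitzgerald’s design goal of keeping humans in control maps to a common engineering pattern: gate autonomous action on the model’s own confidence and escalate to a human operator whenever the model is unsure. The Python sketch below is a minimal illustration of that pattern; the threshold, function names and console prompt are illustrative assumptions, not Schneider’s actual design.

    # Minimal human-in-the-loop gate: act autonomously only when the model
    # is confident; otherwise escalate to a human operator.
    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.90  # assumed value; tune per application

    @dataclass
    class Decision:
        action: str
        confidence: float
        source: str  # "model" or "human"

    def decide(model_action: str, model_confidence: float, ask_human) -> Decision:
        """Defer to a human whenever the model is unsure."""
        if model_confidence >= CONFIDENCE_THRESHOLD:
            return Decision(model_action, model_confidence, source="model")
        # Escalate: the human's choice overrides the model's suggestion.
        human_action = ask_human(suggestion=model_action)
        return Decision(human_action, 1.0, source="human")

    if __name__ == "__main__":
        # A console prompt stands in for a real operator interface.
        prompt = lambda suggestion: input(f"Model suggests '{suggestion}'. Your call: ")
        print(decide("reduce_cooling", 0.62, prompt))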

Dr. Jackson asked if AI should not be used in certain fields because of the “black box problem,” the inability of the AI system to explain how it came to its prediction, recommendation or conclusion.

John Kelly III of IBM

John Kelly of IBM acknowledged the black box problem. Referring to neural networks, he said, “When these things become hundreds or thousands of layers deep, it becomes absolutely black box.” He added, “Our view is, we need to be able to explain it.”

The ability for the system to explain how it reached its conclusion must be built in from the beginning, he said, adding, “You can’t bolt on ethics at the end.” He said IBM is working on it.
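One family of techniques for prying open such black boxes is model-agnostic, post-hoc explanation, for example permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The scikit-learn sketch below, on synthetic data, illustrates the general idea; it is not a description of IBM’s approach.

    # Permutation importance: a model-agnostic, post-hoc explanation method.
    # Synthetic data stands in for a real task.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=8,
                               n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # Shuffle each feature in turn and record the drop in test accuracy.
    result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                    random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature {i}: importance {result.importances_mean[i]:.3f}")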

Many advocate keeping humans in the loop as AI systems do their work. Kelly of IBM commented, “What do we do when the machine is right and the human is wrong?”

Dr. Jackson asked Kathryn Murtagh whether unexplainable AI could be used against a defendant in a criminal case. Murtagh said, “It would be a violation of civil rights and due process.” The use of AI in sentencing, by looking back at history to predict a rate of recidivism, “is being looked at closely,” Murtagh said. The fear is that these systems could disproportionately target low-income and minority communities by perpetuating biases embedded in the historical data.

Kathryn Murtagh of Harvard Management Co.
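One simple way to quantify the concern about embedded bias is a disparate-impact check: compare the rate of adverse predictions across groups. The toy sketch below uses invented data; the 0.8 cutoff borrows the “four-fifths rule” from US employment guidelines and is only an illustrative convention here.

    # Disparate impact ratio: rate of adverse predictions for one group
    # divided by the rate for another. Data is synthetic.
    import pandas as pd

    preds = pd.DataFrame({
        "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
        "high_risk": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    rates = preds.groupby("group")["high_risk"].mean()
    ratio = rates.min() / rates.max()
    print(rates.to_dict())                       # {'A': 0.75, 'B': 0.25}
    print(f"disparate impact ratio: {ratio:.2f}")

    # The 'four-fifths rule' flags ratios below 0.8 as potential bias.
    if ratio < 0.8:
        print("Warning: predictions may disproportionately burden one group.")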

Reidentification From Anonymous Data Can Outrun HIPAA

Dr. Jackson noted to Paul Bleicher that Optum has data on some 200 million people, and asked if “data-driven medicine” is helping people. (Note: the estimated 2018 US population is 327 million.) She also asked if people could prevent the machines from learning things about them they would rather not have found out. (Relevant quote from Supreme Court Justice Brandeis: “One of our most cherished of all rights is the right to be left alone.”)

“The data is very powerful,” Bleicher said, noting that new sources of data are entering the system. “There is a lot of nuance in doctors’ notes,” for instance, he said. “We can now embed deep learning to bring together all this information and combine it with text analysis, to create a multi-dimensional source of information that you can build into models. That gives you insight into many things.”

“It might detect things the patient is not aware of,” he said. “We need to be careful about who we share the data with and how it is used and not used.”

Paul Bleicher, CEO, OptumLabs
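The pattern Bleicher describes, combining free-text notes with structured records, is often implemented by turning the text into a numeric vector and concatenating it with the tabular features. In the minimal scikit-learn sketch below, TF-IDF stands in for the deep-learning text encoders he mentions, and all records are invented.

    # Combine free-text clinical notes with structured features in one model.
    # TF-IDF is a simple stand-in for a deep-learning text encoder.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    records = pd.DataFrame({
        "note": ["patient reports chest pain on exertion",
                 "routine checkup, no complaints",
                 "shortness of breath, family history of cardiac disease",
                 "mild seasonal allergies"],
        "age":  [64, 35, 58, 29],
        "bp":   [150, 118, 142, 115],
    })
    labels = [1, 0, 1, 0]  # toy outcome: elevated cardiac risk

    features = ColumnTransformer([
        ("text", TfidfVectorizer(), "note"),     # free text -> sparse vector
        ("nums", "passthrough", ["age", "bp"]),  # structured columns as-is
    ])
    model = Pipeline([("features", features),
                      ("clf", LogisticRegression(max_iter=1000))])
    model.fit(records, labels)
    print(model.predict(records))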

Dr. Jackson asked if the HIPAA privacy law is holding back innovation in healthcare. Bleicher mentioned the term “reidentification,” which refers to ways to identify people from sources of data thought to be anonymous. He said, “HIPAA does not make it impossible to reidentify people but it makes it more difficult.”
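Reidentification risk is commonly assessed with k-anonymity: every combination of quasi-identifiers (such as ZIP code, birth year and sex) should be shared by at least k records, or those records can be singled out. A quick sketch of the check on made-up data:

    # k-anonymity check: records whose combination of quasi-identifiers
    # appears fewer than k times are candidates for reidentification.
    import pandas as pd

    k = 2
    data = pd.DataFrame({
        "zip":   ["02139", "02139", "02139", "12180"],
        "birth": [1954,    1954,    1988,    1971],
        "sex":   ["F",     "F",     "M",     "F"],
    })

    group_sizes = data.groupby(["zip", "birth", "sex"])["zip"].transform("size")
    risky = data[group_sizes < k]
    print(f"{len(risky)} of {len(data)} records violate {k}-anonymity:")
    print(risky)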

Data is at the core of AI work in healthcare, and Optum has the data. Bleicher advised, “If people are frustrated by lack of access to data, they need to partner with people who do have access to the data.”

John Kelly of IBM chimed in, “This is at the heart of the ethical questions. Every time we look at data, we learn something. We must maintain the privacy of that data. This problem will get more complex very rapidly.”

Technology advances will produce more data. For example, a small sensor in a person’s fingernail can be used to track the progress of neurological disease, he said.

Dr. Jackson asked Murtagh of the Harvard fund what an ethical investment in AI looks like. As an example, Murtagh said the semiconductor industry uses a lot of water; she would want to look at how the water is managed, and whether the wastewater is disposed of properly. “The industrial use of water might conflict with the public need for water. We look at that,” she said.

Dr. Jackson asked Brian Stevens of Google whether the image recognition aspects of AI are troubling to him. Google started as a company before AI began to be widely used, he said. About five years ago, Google became more committed to deep learning and machine learning. (In January 2014, Google acquired DeepMind for $500 million.) Language translation was an early application; machine learning helped to improve the quality of the results.

Brian Stevens of Google Cloud

“It’s very much academic-oriented research,” he said of AI. As Google developed tools, including TensorFlow, it decided to make them freely available. Value-added services began to be offered to clients through Google Cloud. Financial services and the health industry were early adopters.
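TensorFlow, the tool Stevens mentions, remains freely available; as a point of reference, the short Keras example below defines and trains a tiny model on synthetic data. It is only a minimal illustration of the open-source API, unrelated to any Google Cloud service.

    # Minimal TensorFlow/Keras model on synthetic data (pip install tensorflow).
    import numpy as np
    import tensorflow as tf

    X = np.random.rand(500, 4).astype("float32")
    y = (X.sum(axis=1) > 2.0).astype("float32")  # toy rule for the model to learn

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=5, verbose=0)
    print("accuracy:", model.evaluate(X, y, verbose=0)[1])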

Quality of data is a challenge; confidence in a result can range from 20 percent to 95 percent depending on the data quality. AI developers are also working on learning from sparse data, for example from fewer than 1,000 data points.
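The dependence of result quality on data volume shows up directly in a learning curve: train the same model on progressively larger subsets and watch accuracy climb. A sketch on synthetic data (the subset sizes are arbitrary):

    # Learning curve: accuracy as a function of training-set size, illustrating
    # why sparse data (well under 1,000 points) is a challenge.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1000, random_state=0)

    for n in [50, 200, 1000, 4000]:
        model = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
        print(f"trained on {n:>4} points: test accuracy {model.score(X_te, y_te):.2f}")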

AI Experts Need to Tune Into Ordinary People

Dr. Jackson is concerned that the discussion around AI and ethics be inclusive. “When the experts are just talking to each other, they might not be in touch with how everyone else lives,” she said.

Stevens of Google said a small leadership team at Google has three offsite meetings a year where ethics, culture and diversity are high on the agenda. He noted that Google last year released a set of guiding principles around the company’s use of AI. Still, extremist groups constantly try to defeat Google’s protection algorithms for YouTube in their efforts to post videos. A protection algorithm might be effective on day one, then things change. “It’s a war out there,” Stevens said. “They learn, they modify, they come back.”

AI systems do not fit well into established software development life cycles, because the systems adapt and change on their own to some degree, Kelly of IBM commented. For example, “It’s very hard to test the limits of an AI system and predict when it will fail,” he said.
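One reason those limits are hard to test is that a trained model will answer on inputs far outside anything it saw in training, often with high confidence, and a conventional test suite has no oracle to flag it. A small demonstration with scikit-learn on synthetic data:

    # A trained classifier happily scores inputs far outside its training
    # distribution, one reason conventional test suites miss such failures.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    in_dist = X[:1]                    # a point from the training data
    out_dist = np.full((1, 5), 100.0)  # absurdly far from any training point

    for name, x in [("in-distribution", in_dist), ("out-of-distribution", out_dist)]:
        conf = model.predict_proba(x).max()
        print(f"{name}: class {model.predict(x)[0]}, confidence {conf:.3f}")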

Confirmation bias, the tendency to interpret new information as confirmation of one’s existing beliefs, is becoming more of an issue as recommendation engines feed back to users the information they prefer to see. When applied to news feeds, “It’s very dangerous,” said Bleicher of Optum. “It’s a major societal ethical issue.”
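The feedback loop behind this effect is easy to reproduce in miniature: a recommender that always serves the category a user clicked most, while the user tends to click whatever is served, quickly collapses to a single topic. A toy simulation (the 90 percent click probability is an assumption for illustration):

    # Toy filter-bubble simulation: recommend the historically most-clicked
    # topic; the user usually clicks what is recommended.
    import random
    from collections import Counter

    random.seed(0)
    topics = ["politics", "sports", "science", "arts"]
    clicks = Counter({t: 1 for t in topics})  # start from a uniform history

    for step in range(200):
        recommendation = clicks.most_common(1)[0][0]
        # Assumed user model: 90% chance of clicking the recommendation.
        clicked = recommendation if random.random() < 0.9 else random.choice(topics)
        clicks[clicked] += 1

    print(clicks)  # one topic dominates: a self-reinforcing loop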

As AI becomes more capable of performing routine cognitive tasks, Dr. Jackson asked about the risk of AI replacing human workers. Fitzgerald of Schneider said, “I don’t think AI will replace all humans. But the humans who know how to use AI may replace the humans who do not.”

In medicine, Bleicher said, “Doctors will not go away. But AI will create a major upheaval in the medical industry.” He said medical care cannot continue to consume 18% of the gross domestic product annually; the pressure to reduce costs is substantial.

“We will have doctors and nurses assisted by AI; it will realign the design of medicine,” he said.

Studying the Ethics of AI on the Agenda for Engineering Students

Dr. Jackson asked the group what universities should be teaching students about the ethics of AI. Stevens of Google said he sees it as very personal. The release of the guiding principles for AI development “helped people establish their own compass” at Google, he said.

Fitzgerald of Schneider, whose son is currently enrolled at Rensselaer, said he is taking an ethics class and talks about it. “It’s having an impact. Ethics needs to be part of the design process. To design for ethical AI is our responsibility,” she said.

Dawn Fitzgerald, head of digital transformation for Schneider Electric Data Center Operations

Dr. Jackson asked the group if they anticipate that the government will begin to regulate AI in some way. Kelly of IBM said, “I think they will. They have started down that path. There’s a lot of work to be done. They mean well. But you cannot really regulate an industry that is advancing exponentially. Regulation to me is not the answer – policy should be set. Regulation will be too little, too late.”

He later posed a rhetorical question: how do we sleep at night building these systems? “The answer is, these systems can be used for good or bad things. But I feel that if I can help a physician or help to solve climate change, then I need to build the system.”

Dr. Jackson noted that the brain is not fully developed until age 25. So the question of what role AI has in behavioral development is real. “De facto we are doing the experiment as we use these technologies,” she said.

The special panel discussion was sponsored by the Artificial Intelligence Center of Excellence in Troy, NY. Bob Bedard, founder of the center, CEO of deFacto Global and a graduate of Rensselaer, said, “This is an important examination of the effect of AI and Machine Learning from an ethical perspective on technology, business, even everyday life. The establishment of the AI Center of Excellence is a major initiative focused on creating an AI ecosystem designed to develop literacy, competency, job creation and economic growth throughout the Capital District and the State of New York.”

The center will promote the commercialization of AI R&D, generate opportunities for startups and provide skill development programs for students. Supporting policy makers with research on the ethics of creating and adopting new AI technologies is also a driving force of the endeavor, Bedard said.

For more information, go to AI Center of Excellence.

— By John P. Desmond, AI Trends Editor