Executive Interview: John C. Havens, Executive Director, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems


IEEE Publishes First Edition of Ethically Aligned Design

John C. Havens is Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS). The group recently released Ethically Aligned Design (EAD), a landmark resource on A/IS issues created by more than five hundred people over the last three years. He recently took some time to speak with AI Trends Editor John P. Desmond.

AI Trends: IEEE is a leading consensus-building organization that nurtures, develops, and advances global technologies for the benefit of humanity. Why is the time now right for the organization to issue the first edition of Ethically Aligned Design?


John C. Havens: When I started to get involved in 2015, IEEE had been researching applied ethical considerations. Applied ethics is a design methodology focused on asking a lot more questions from different perspectives than more traditional engineering paradigms, like the waterfall methodology. The Managing Director of The IEEE Standards Association, Konstantinos Karachalios, was helping lead this effort within the organization when I approached him with an idea that was an early version of Ethically Aligned Design. At the time, nothing like it existed.

Autonomous and intelligent systems affect human identity, data and agency very differently than other technologies. Our goal is to prioritize the applied-ethics questions, or what are called values-driven design methodologies, needed at the beginning of any manufacturing process, so that end-user values are examined more thoroughly than by just assessing risk or harm. The short answer is the time was right to address how these amazing technologies will affect humans so that we can make sure that these technologies serve human values.

Ethically Aligned Design is a guide for policymakers, engineers, designers, developers and corporations. How do you go about ensuring that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems aligns with human values?

The IEEE Global Initiative, that’s the shorthand for the longer name, is the program within the IEEE Standards Association where the volunteers—some 400 or 500 of them—created Ethically Aligned Design, First Edition. Because of its length, about 280 pages, we called it a book. It was developed over the last three years using a deliberately iterative process. We published earlier versions in 2016 and 2017, and released both as requests for input to get as much feedback as possible.

How different people align with human values is a big question. Among relevant chapters in the book is one on embedding values into autonomous and intelligent systems. It’s written by global experts focusing on the actual aspects of how to instantiate values into a system, whether it’s algorithmically focused (where you can’t see it) or a robot (where the shell may look like a human or a cute toy).

The chapter provides an example of a robotic device with machine learning inside it, used within a hospital. One critical point to note is that robots or devices themselves don’t have values. But when the device is put into a human situation, in the context of the country and location where it’s deployed, the values of all the people around it are affected. For example, in a hospital setting, the set of stakeholders includes doctors, nurses, patients, and patients’ families. Even those four sets of stakeholders have very different values in the midst of someone being in the hospital.

For the doctor, the focus of working with a machine is sorting through its data to deliver a diagnosis. The nurse is typically focused on palliative care and how they should provide the kind of bedside manner that patients may need. For the patients, the family is a huge consideration, and patient privacy needs to be respected. For example, if a doctor is talking to the robotic device in the presence of the patients, they don’t want to say things that might upset the family.

Anyone creating A/IS needs to be thinking about how those four sets of stakeholders interact.
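
[Ed. Note: As an illustration of how a design team might make those stakeholder values explicit, here is a minimal Python sketch. The four stakeholder groups come from the interview; the value labels, weights, and function names are hypothetical assumptions for illustration, not taken from EAD.]

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderProfile:
    """One stakeholder group and its value priorities (1-5, higher = more important)."""
    name: str
    values: dict = field(default_factory=dict)

# Illustrative profiles for the hospital example; labels and weights are assumptions.
hospital_stakeholders = [
    StakeholderProfile("doctor",  {"diagnostic_accuracy": 5, "data_access": 4, "privacy": 3}),
    StakeholderProfile("nurse",   {"palliative_care": 5, "bedside_manner": 4, "privacy": 4}),
    StakeholderProfile("patient", {"privacy": 5, "family_contact": 4}),
    StakeholderProfile("family",  {"being_informed": 5, "patient_comfort": 4}),
]

def tension_points(stakeholders, threshold=4):
    """List values that rank highly for some stakeholders but are absent for
    others, flagging where the design team owes an explicit trade-off decision."""
    all_values = {v for s in stakeholders for v in s.values}
    return sorted(
        v for v in all_values
        if any(s.values.get(v, 0) >= threshold for s in stakeholders)
        and any(v not in s.values for s in stakeholders)
    )

print(tension_points(hospital_stakeholders))
```

The point of such a table is not the numbers themselves but that conflicts, like the patient’s privacy against the family’s desire to be informed, surface at design time rather than after deployment.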

Are all the major technology-producing countries of the world participating in the drafting of the EAD?

After we released the first version of Ethically Aligned Design in December of 2016, we got about 200 pages of feedback, much of it from people in China, Japan and South Korea saying, “Congratulations. We think the document has a lot of merit, but it feels very Western in nature.” So we reached out to a great number of those people. We have about 50 members now from those areas providing feedback. The main representation among the 400 or 500 contributors, though, is still from the US, Canada, and the European Union.

We also have members from India, South America, and Africa, but in very small numbers. A big goal for us moving forward is to get more voices from those countries. We have quite a few contributors from Australia and New Zealand. We’re trying to get some participation from Iceland, because there is some great work going on up there.

I am proud of the volunteers who wrote the chapter on classical ethics in autonomous and intelligent systems. That is an excellent chapter to read. It’s a great way to get an immediate perspective on Western and Eastern ethical traditions and how those traditions affect all of design. For the second edition of EAD, we want to go deeper and get even more diverse.

What is the schedule for the second edition?

A goal would be to launch EAD2e in the first quarter of 2020. Right now, we’re trying to think about how to expand the EAD1e with smaller reports that point back to the large document.

What does the EAD suggest for the pragmatic treatment of data management and data privacy?

We have a chapter on personal data and individual agency that talks about the EU General Data Protection Regulation (GDPR), what California is doing with its data privacy bill [the California Consumer Privacy Act], which is extremely strong, and many other references. We talk about a lot of other global data regulations, including work in Estonia and Finland and India’s Aadhaar program.

[Editor’s Note: Aadhaar is a 12-digit unique identity number that can be obtained by residents of India, based on their biometric and demographic data. The data is collected by the Unique Identification Authority of India (UIDAI), a statutory authority established in 2009 by the government of India. See more at Wikipedia.]

The EU’s GDPR, which protects users’ data, is the global standard for data privacy. We also need an approach of privacy by design (PbD), putting privacy at the core of our work: even before you have a blueprint, you need to ask the questions both the GDPR and PbD focus on.

Data agency, however, is about recognizing that we need a “yes, and” approach to data privacy, one that focuses on agency. This means putting people at the center of their data, with the ability to access and control it.

To help do this you can use what are called smart contracts, which is sort of a blockchain methodology. There are a number of other tools around this, and an entire ecosystem exists to provide the methodologies and technologies to make this a reality. In short, users exchange only a certain amount of data, peer to peer, through encrypted and safe means at all times. This needs to become the new norm for all data exchange.

A “consent” model [such as current standard Google or Facebook user permissions] can be extremely problematic because people don’t know why they’re clicking or what they’re clicking.

In contrast, data agency means people are being trained [in providing permission for how their data is to be used]. You’ll still be tracked. You’ll have all the benefits of being tracked for advertising or what have you, but you’ll also be at the center of your data and trained [in how to grant permission for use of that data]. A provider only needs a certain amount of data for a certain amount of time, in the context of what you’re doing: medical data for your doctor, insurance data for your agent, and so on.
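
[Ed. Note: A minimal Python sketch of what such a scoped, time-limited permission could look like. The class and field names here are hypothetical assumptions; a real data-agency system would add cryptographic signing and encrypted peer-to-peer transport, whether smart-contract-based or not.]

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DataGrant:
    """A user-held, revocable grant: scoped to a purpose and limited in time."""
    grantee: str          # who may read the data
    categories: set       # which data categories are in scope
    purpose: str          # the context the grant is tied to
    expires_at: datetime  # hard time limit on access
    revoked: bool = False

    def permits(self, requester, category, now=None):
        """Allow access only within scope and before expiry."""
        now = now or datetime.now(timezone.utc)
        return (not self.revoked
                and requester == self.grantee
                and category in self.categories
                and now < self.expires_at)

# Medical data for the doctor, for one week, for diagnosis only.
grant = DataGrant(
    grantee="dr_smith",
    categories={"medical_history", "lab_results"},
    purpose="diagnosis",
    expires_at=datetime.now(timezone.utc) + timedelta(days=7),
)

print(grant.permits("dr_smith", "lab_results"))   # True: in scope and in time
print(grant.permits("insurer_x", "lab_results"))  # False: not the grantee
```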

[Ed. Note: For more on data agency, see IEEE.]

GDPR is helping, but we have this wonderful opportunity to give people parity: using the same tools that have been used for advertising for years to put them in control at the center of their data.

What does the report suggest for public policy around AI and autonomous systems such as self-driving cars?

We talk a great deal in EAD1e about human rights, which are recognized in international law. We have precedents for making ethical decisions that build on recognizing human rights law first.

For example, the Ruggie Principles, [the state duty to protect against human rights abuses by third parties, including businesses], adapt human rights law to the business context.

Another thing is that the technology needs to be human-centric. That often means a human-in-the-loop (HITL) mentality is used in the technology, meaning there can always be some form of intervention so that humans maintain control. Is there a system doing something where no one who created it even knows what it’s doing anymore? That output has a black-box nature to it.

HITL is not about denying the amazing opportunities that machine learning and other technologies provide, such as crunching thousands of numbers in ways humans cannot. But if humans are not in the loop and lose authority over the outputs of the machines, this becomes problematic with regard to transparency, accountability, and explainability for all A/IS.
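
[Ed. Note: One common way to keep a human in the loop is a confidence gate, sketched below in Python. The threshold and function names are illustrative assumptions, not an IEEE-specified design.]

```python
CONFIDENCE_THRESHOLD = 0.90  # illustrative cut-off, not an IEEE value

def human_review(prediction, confidence, explanation):
    """Stand-in for a real review queue or operator console."""
    print(f"Escalated to human: {prediction!r} (confidence={confidence:.2f})")
    return prediction

def decide(prediction, confidence, explanation):
    """Act autonomously only when the output is confident and explainable;
    otherwise the human reviewer keeps authority over the result."""
    if confidence >= CONFIDENCE_THRESHOLD and explanation is not None:
        return prediction, "automated"
    return human_review(prediction, confidence, explanation), "human"

# A low-confidence, unexplained output is routed to the human reviewer.
result, path = decide("approve_claim", 0.62, None)
print(result, path)
```

The design choice the gate encodes is exactly the one described above: the machine does the heavy numerical lifting, but authority over the output stays with a person whenever the system cannot account for itself.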

Among the general principles in the report is competence, suggesting creators of A/IS need to specify the skills and knowledge required for safe operation of a system. What guidance do you have for keeping humans in the loop so that an A/IS does not go out of control?

With an autonomous vehicle, it might be easy for someone creating the technology to forget that the users in their context (drivers, passengers, operators) will need a lot of explanation about the nature of A/IS they may not be familiar with.

Research shows people start to trust an autonomous vehicle doing the driving for them even when the AV manufacturer has instructed them to keep their hands on the wheel. It’s natural, driving on long stretches of highway for example, that people might put their hands by their sides rather than keeping them on the AV’s steering wheel. After two hours, they might even fall asleep, trusting that the AV is now safe. But precedent has shown that even trained drivers get distracted, which has led to human harm, and it’s simply poor design not to assume that people will trust an AV and decide it’s safe enough for them to ignore driving altogether.
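
[Ed. Note: That design assumption, that drivers will over-trust the system, can be built in directly. Below is a hypothetical Python sketch of escalating driver-engagement responses; the timings and actions are illustrative, not drawn from any manufacturer or standard.]

```python
def escalation_for(seconds_hands_off):
    """Map continuous hands-off-wheel time to an escalating response;
    the timings and actions here are illustrative assumptions."""
    if seconds_hands_off < 10:
        return "none"
    if seconds_hands_off < 30:
        return "visual_warning"   # dashboard prompt to retake the wheel
    if seconds_hands_off < 60:
        return "audible_alarm"    # chime plus seat vibration
    return "safe_stop"            # slow down and pull over safely

for t in (5, 15, 45, 120):
    print(t, "->", escalation_for(t))
```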

Keeping humans in the loop is not just the technical sense of pushing a button to reset. It includes making someone comfortable with how they will actually use the system.

Human well-being and mental and physical health also need to be prioritized along with safety. It’s not that people would ever skip these overtly or with intent, but especially with autonomous and intelligent systems, much more testing has to be done so that HITL actually means something.

Some have suggested that for big technology companies, engaging in discussions of ethics is good public relations, and hopefully it also comes with good intentions. As an engineering organization, is IEEE able to specify how ethical AI systems get built, or do we need to trust that the developers of AI systems have improving human well-being as a primary success criterion?

In terms of all my answers in this article, I should clarify that these opinions are my own and don’t necessarily reflect the views or formal positions of IEEE or IEEE-SA. What I do know is that the mandate of The IEEE Standards Association is to create standards. Standards are a tool with which you can specify very clearly how things should get built. And that’s the wonderful nature of a standard: the goal is to make sure that each step of building something is beyond explicit, in the sense of knowing exactly how to build A/IS and all technology, so there won’t be as many unintended consequences that designers didn’t plan for before launching their products.

That said, with ethical considerations, it really depends on how things are applied. It’s not really a question of whether we trust that developers of AI systems have avoiding risk or harm as a primary success criterion. Engineers, more than any other people I have met in my life except for doctors, have protecting human life and keeping people safe as the ultimate goal of what they do.

Most people know what it means for engineers to have that level of rigor, which is why they are not nervous when riding in an elevator, for instance. However, when there are new things to measure, like human emotion, engineers may not have the training. They would need the skill sets of an anthropologist or a therapist on their team, so that they have a broader perspective and are not held responsible for things outside their expertise.

The IEEE P7000 series of Standards Working Groups was inspired by the efforts of the volunteers who created Ethically Aligned Design. One of these Standards Projects is focused on well-being metrics (“projects” because until a Working Group produces an approved standard, it’s in development and not actually a standard yet). I am vice chair of that work. It is designed to teach not just engineers and data scientists, but anybody who doesn’t understand how well-being metrics are measured, to prioritize them in design.

Many people think well-being is just about mood, but it actually refers to objective and subjective metrics that provide a broad basis for understanding mental and physical human health, or what’s called long-term flourishing.
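
[Ed. Note: A toy Python sketch of how subjective and objective indicators might be combined into one well-being score. The indicator names, 0-1 scales, and equal weighting are assumptions for illustration, not the working group’s actual metrics.]

```python
def wellbeing_index(subjective, objective, weight_subjective=0.5):
    """Average normalized 0-1 indicators within each family of metrics,
    then blend the two; the weighting is an illustrative assumption."""
    s = sum(subjective.values()) / len(subjective)
    o = sum(objective.values()) / len(objective)
    return weight_subjective * s + (1 - weight_subjective) * o

score = wellbeing_index(
    subjective={"life_satisfaction": 0.7, "reported_mood": 0.6},
    objective={"sleep_norm": 0.8, "activity_norm": 0.5},
)
print(f"{score:.2f}")  # 0.65
```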

What first steps should organizations take to build EAD principles into their A/IS design efforts?

We created a short chapter at the front of EAD called From Principles to Practice. It’s got a lot of visuals; read that first. It’s designed to give you the fundamental principles of what we have created and why.

Secondly, for anyone who wants to get involved with the work, we’d love to have people join The Initiative. (People can sign up for our newsletter and express general interest in getting involved here.) They can stay up to date with what we’re doing; there’s a lot of work to be done. We have fourteen standards working groups; anyone can join. They don’t have to pay; they don’t have to be an IEEE member. As we expand committees for the second edition, we will look to increase members from areas around the world. So I would say subscribe to the newsletter as a call to action. And then read the chapter From Principles to Practice as a really good first step.

For more information or to get involved, visit ethicsinaction.ieee.org.