Executive Interview: Mike Dukakis, Pursuing a Framework for the Ethical Development of AI Technology


The AI World Society Initiative under the auspices of the Michael Dukakis Institute for Leadership and Innovation is a group of academics, scientists, researchers and standard-setters who come together at Harvard University to bring new urgency to a discussion of AI and ethics. Dukakis, the former governor of Massachusetts, who was a US presidential candidate in 1988, was recently interviewed about the activities of the Institute by AI Trends.

Q. Under the auspices of your Institute for Leadership, the suggestion is that in this era of advances in Artificial Intelligence, the time is now for a discussion about the ethics of AI. So, let me ask you first, what is the current situation that creates the need for this effort to talk about AI and ethics?

Michael Dukakis, former governor of Massachusetts, now principal of the Michael Dukakis Institute for Leadership and Innovation

Michael: Well, this comes from an even broader interest that I have in seeing if we can make technology work for us and for a peaceful world and not for the opposite. Our concern at the Boston Global Forum and the Institute is that every time we get some exciting, interesting, and potentially very valuable and constructive technology, somebody is at the same time trying to use it to feed conflict. We have seen repeatedly what could be wonderful advances in both technology and the quality of life get turned into weapons of war and weapons of conflict.

The point of both the Institute and the Forum is to see if we can’t encourage the development of norms, of ethical principles, that utilize this technology in ways that make this a better world, not a more violent one. That’s my principal concern, frankly. I’m not an engineer. I got a charitable D in physics at Swarthmore College, which had a lot to do with my going into politics rather than science. I have no regrets about that. I’m not a scientist. But I have a very strong interest in what these discoveries can do both in a positive way and a negative way. And I want to see more effort made internationally to create a framework within which we can develop these technologies in a very positive, constructive way. I am very concerned about what seems to me to be an emerging new Cold War, which I think is absolutely frightening and makes no sense at all.

I’m a huge believer in the UN as the international organization that can develop some rules democratically and try to make sure that this kind of technology works for us, not against us.

Q. What is the desired outcome or the goal? And are there milestones that could be tracked before you reach that outcome?

Michael: I’m not sure we can lay out a road map here. But I think we all ought to have enough collective intelligence to be able to determine whether or not, in fact, this kind of technology is working for the right reasons and for the right values. Our institute and other organizations are deeply into this and have a role to play. I want to see the international community assert itself in ways which I don’t think are happening today, to create this framework within which positive technology can, in fact, work to make this a better world.

As an international community, we have a lot to do. Technology has a way of running away with things when we ought to be trying to make sure that it’s being used for peaceful purposes, and the best purposes, and not feeding more and more conflict.

Q. Is there anything that led to an action being necessary at this time? What developments caused you to think the time to act is now?

Michael: I’m not an expert on this. But I think I’m being reasonably accurate in saying that the development of AI and similar kinds of technology is moving very rapidly. And we still haven’t got control of it. We don’t have that kind of set of rules and norms that govern what companies do with it or, for that matter, private actors, as well. Technology is racing ahead of the apparent collective ability of all of us to create the kind of ethical and constructive and peaceful framework that we’re looking for. We need to get a move on here. I know the UN is involved to some extent. But I don’t see the kind of strong and effective effort that I’d like to see in the international community. The United Nations or some appropriate agency of the United Nations needs to work to create this framework — and soon.

This isn’t the first time that technology is running ahead of our ability to make it work in the best and the most peaceful ways. And it’s not that we haven’t had some experience in this area. It’s important that we catch up and then get ahead of this. We’ve got some rules, we’ve got to have some standards, and we’ve got to have some kind of international machinery to be able to police this, which I think is essential.

Q. What do you see as the path toward achieving the goal? Is it through the United Nations or is it some other path?

Michael: It has to be through the United Nations or one of its constituent agencies. Maybe we’ve got to create some kind of constituent body within the framework of the UN that focuses on this, as we have in the case of nuclear technology. When we collectively put our minds to doing something about it, it isn’t perfect. But we’ve done a pretty good job of trying to deal with nuclear proliferation, for example, with a set of norms and international machinery, which has a lot of credibility.

It’s the kind of job that we have a right to expect of ourselves and of the UN and its various regulatory agencies. But it’s quite obvious that you can’t relax and step back from this because before you know it, there will be another ten technologies out there that have the potential for real good and the potential for real harm. And in this particular case, I don’t think the international community is catching up. The technology is still way ahead of our efforts to do something about it.

Q. How would you describe the current sentiment of the U.S. government on this topic and the urgency of it related to other initiatives of the U.S. government?

Michael: I may be missing something, but I don’t see the United States out there and in front on this set of issues. We have organizations, nonprofits, for example, based in the United States that are attempting to do something about it. But I haven’t heard any serious pronouncements from the federal government calling attention to the fact that this is a serious problem. It’s an urgent problem. We’ve got to do something about it. I don’t see that. And I am not just being critical of the executive branch. I’m not hearing much about this from the Congress. And they’re still appropriating a lot of money to develop military uses for this kind of technology.

In a very bipartisan way, we have a lot to do, both here at home and in terms of what’s going on internationally. The atmosphere at a recent Asian meeting I attended was a very troubling scene to me. Here you’ve got both Asia and the United States with some very smart people who have a lot to do with developing this AI technology. And there was no discussion about that in any way. The conference itself was all about power, politics, who was stronger and who was weaker, who was ahead and who was behind, a good deal of which is really quite irrelevant, it seems to me.

I’m not interested in who’s ahead and who’s behind. I’m interested in an international community that can get control of this kind of technology and make it work for all of us, including, by the way, the less well-developed part of the world, which actually needs a lot more work than those of us in the so-called “more highly-developed places”. And what a great place this would have been to discuss this — given the technological prominence of both Asia and the Western Hemisphere, and the United States in particular. And I didn’t hear much. I didn’t hear much at all.

Q. What is the role of universities and academic research in achieving the outcome that you’re looking for?

Michael: Being deeply involved in technological development and at the same time being deeply involved in developing these ethical norms and standards that we want people to observe. And I think the academic community has responsibility for doing both. It’s just as simple as that. And they ought to be devoting themselves to both. After all, they’re developing the technology. It’s not enough just to develop the technology. Then you’ve got a responsibility to develop this framework that we’re talking about, within which people will use it wisely, peacefully, and for the best possible goals.

Q. And as you said, you see the United Nations as the most effective forum for representing all of the interests around AI and ethics?

Michael: It’s the United Nations or some specialized agency of the United Nations, which does in this field what a number of other UN agencies are doing really quite successfully in other fields. But it’s going to be a constant problem because these technologies keep coming. And there is all kinds of great brain power out there developing them. And we need an equal amount of brain power to develop the kind of framework within which they can be used ethically, wisely, and for peaceful purposes.

Q. Thank you governor. Is there anything you would like to add?

Michael: As you know, we’ve got major plans coming up now for a number of meetings, conferences and gatherings on the subject, including a special program on AI and ethics that we are producing in conjunction with AI World and AI Trends entitled AI in Government, June 23-25, 2019 in Washington D.C. We hope we can bring lots of people together and seriously begin to deal with some of the issues we’ve talked about today.

For more information, go to the Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation.