Executive Interview: David Bray and Bob Gourley, Technology Entrepreneurs and Thought Leaders

2018
Risk Management of AI and Algorithms for Organizations, Societies and Our Digital Future

Two technology entrepreneurs and thought leaders, Bob Gourley and Dr. David Bray, recently spoke with AI Trends Editor John Desmond about managing the risk of AI rollouts, addressing security of the organization and realizing the benefits of new AI technologies.

Gourley is an experienced CTO and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity and data analytics. He is the creator and publisher of the widely read CTOvision site and co-founder of OODA LLC, a unique team of international experts providing advanced intelligence and analysis, strategy and planning support, investment and due diligence, and risk and threat management. Among past positions, he has served as the CTO for the Defense Intelligence Agency.

Bray also is a C-Suite leader with experience in bioterrorism response, thinking differently about humanitarian efforts, and crafting national security strategies, as well as leading a national commission focused on the U.S. Intelligence Community's research and development and leading large-scale digital transformations. He has advised six different startups and is Executive Director of the People-Centered Internet coalition, which provides support, expertise, and funding for demonstration projects that measurably improve people's lives.

Both are co-chairing and speaking at the AI World Government Conference & Expo, being produced by Cambridge Innovation Institute. The event will be held on June 24-26, 2019 at the Ronald Reagan Building and International Trade Center in Washington, DC.

AI Trends: What opportunities can AI assist with now to improve the risk management of organizations?

Bob Gourley: AI can contribute to mitigating risks in organizations of all sizes. For smaller businesses that will not have their own data experts to field AI solutions, the most likely contribution of AI to risk mitigation will come from selecting security products that have AI built in. For example, the old-fashioned anti-virus of years ago that people would put on their desktops has now been modernized into next-generation anti-virus and anti-malware solutions that leverage machine learning techniques to detect malicious code. Solutions like these are being used by businesses of all sizes today. The traditional vendors, like Symantec and McAfee, have all improved their products to leverage smarter algorithms, as have many newer firms like Cylance.


Larger organizations can make use of their own data in unique ways by doing things like fielding their own enterprise data hub. That’s where you put all of your data together using a machine-learning platform capability like Cloudera’s foundational data hub, and then run machine learning on top of that yourself. Now, that requires resources, which is why I say that’s for the larger businesses. But once that’s done, you can find evidence of fraud or indications of hacking and malware much faster using machine learning and AI techniques. Many cloud-based risk mitigation capabilities also leverage AI. For example, the threat intelligence provider Recorded Future uses advanced algorithms to surface the most critical information to bring to a firm’s attention. Overall I want to make the point that organizations of all sizes can now benefit from the defensive protections of artificial intelligence.
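[Ed. Note: As a purely illustrative sketch of the kind of fraud-spotting Gourley describes, the hypothetical Python example below runs an off-the-shelf unsupervised anomaly detector over centralized transaction records. The feature names, data, and contamination rate are invented for illustration and are not drawn from any specific product.]

```python
# Hypothetical sketch: flagging possible fraud in centralized transaction data
# with an unsupervised anomaly detector. All features and data are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend these rows were pulled from an enterprise data hub:
# columns = [amount_usd, hour_of_day, transactions_last_24h]
normal = rng.normal(loc=[50, 13, 5], scale=[20, 3, 2], size=(1000, 3))
suspicious = np.array([[4800, 3, 40], [9500, 2, 55]])  # injected outliers
events = np.vstack([normal, suspicious])

# Fit on the bulk of the data; assume roughly 0.5% of events are anomalous.
model = IsolationForest(contamination=0.005, random_state=0)
model.fit(events)

flags = model.predict(events)  # -1 = anomaly, 1 = normal
for idx in np.where(flags == -1)[0]:
    print(f"Review event {idx}: {events[idx].round(1)}")
```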

Dr. David Bray: Bob is spot-on that what is happening is a “democratization of AI techniques”: capabilities that previously were out of reach unless an organization had sufficient resources are now available even to small companies and startups. He also is right about the scaling question. The additional lens I would like to add is thinking about how AI can be used both for what an organization presents externally to the world and for what it does internally. For example, how can you use AI to assess, on an ongoing basis, whether there are things on your website or in your mobile applications that present risk vulnerabilities?

Threats are always changing. That’s why having the ability to use continuous services to scan what you’re presenting externally with regards to a potential attack surface will be an advantage, for large and small companies.


The other lens is to look for abnormal patterns that may be happening internal to your organization. Risk arises from the combination of humans and technologies. Smaller companies can obtain these tools through software as a service, while bigger companies can use boutique tools to look for patterns of life. These tools try to establish what the normal patterns of life in your organization should be, so that if something shows up that doesn't match that pattern, it raises a flag. The overarching goal is to use AI to improve the security and resilience of the organization, both in how it presents itself externally and in how it works internally.
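[Ed. Note: To make the “patterns of life” idea concrete, here is a minimal, hypothetical sketch, not any specific vendor tool: build a per-user baseline of typical login hours and flag activity that falls far outside it.]

```python
# Hypothetical patterns-of-life baseline: learn each user's typical login hour
# and flag logins that deviate strongly from that norm. The data is made up.
from statistics import mean, stdev

login_history = {
    "alice": [9, 9, 10, 8, 9, 10, 9, 8],    # typical office hours
    "bob":   [22, 23, 22, 21, 23, 22, 23],  # typical night shift
}

def is_anomalous(user: str, login_hour: int, threshold: float = 3.0) -> bool:
    """Flag a login whose hour is more than `threshold` standard deviations from the user's norm."""
    hours = login_history[user]
    mu, sigma = mean(hours), stdev(hours)
    return abs(login_hour - mu) > threshold * max(sigma, 0.5)  # floor keeps very tight baselines from over-flagging

print(is_anomalous("alice", 9))   # False: matches the established pattern
print(is_anomalous("alice", 3))   # True: a 3 a.m. login falls outside the pattern
```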

Where will AI introduce new challenges to the security of organizations?

David: You can think of artificial intelligence as being like a five-year-old that, exposed to enough language data, learns to say, “I'm going to run to school today.” And when you ask the five-year-old, “Well, why did you say it that way as opposed to, ‘To school today I'm going to run,’” which sounds kind of awkward, the five-year-old is going to say, “Well, it's just because I never heard it said that way.”

The same thing is true for the current third wave of AI, which includes artificial neural network techniques used to provide security and resilience for an organization. It is looking for things that fit patterns or that fall outside of patterns. It is not discerning whether the patterns, or the things outside of them, are ethically correct.

[Ed. Note: Learn more about the third wave of AI.]

Bob: The two primary new challenges AI introduces for organizations that use it are, number one, that your algorithms must be protected against manipulation by adversaries. If an adversary manipulates your AI algorithms, your results will be manipulated, and that's a problem. Number two, the data used for AI must be protected. If an adversary manipulates your data, then, of course, your results are going to be wrong. Both of those require protection. Now, you can protect them the old-fashioned way, by building up the security of your enterprise, but you also have to monitor them while they're being used.

Additionally, in this category of new risks due to AI, there are problems with ethics around AI. We have seen example after example of AI that is fielded and then produces results that are unintentionally biased. That includes a famous example from 2017, when a resume-screening system Amazon used to evaluate job applicants taught itself to be misogynist. Over time, the algorithm penalized women's resumes and had to be shut down. That kind of bias problem in machine learning algorithms has to be monitored in real time to prevent it from happening. It's a very serious security concern that increases risk. The same goes for ethics around AI: how do you know that your AI is performing ethically over time if it's a machine learning algorithm that changes over time? Both are serious new risks.
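[Ed. Note: As one hedged illustration of what monitoring an algorithm for bias over time might look like (the groups, data, and 80% threshold below are hypothetical, not Amazon's system), a team could periodically compare selection rates across applicant groups and raise an alert when one rate falls well below another, as in the commonly cited four-fifths rule.]

```python
# Hypothetical bias monitor: compare selection rates across two applicant groups
# in each batch of model decisions and alert on a large disparity.
from typing import List, Tuple

def selection_rate(decisions: List[Tuple[str, bool]], group: str) -> float:
    members = [selected for g, selected in decisions if g == group]
    return sum(members) / len(members) if members else 0.0

def disparity_alert(decisions: List[Tuple[str, bool]],
                    group_a: str = "A", group_b: str = "B",
                    min_ratio: float = 0.8) -> bool:
    """Return True if the lower selection rate is below min_ratio of the higher one."""
    low, high = sorted([selection_rate(decisions, group_a),
                        selection_rate(decisions, group_b)])
    return high > 0 and (low / high) < min_ratio

batch = [("A", True), ("A", True), ("A", False),
         ("B", False), ("B", False), ("B", True)]
print(disparity_alert(batch))  # True: group B's rate is well under 80% of group A's
```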

David: To build on Bob's example, a machine learning algorithm might be compromised if it is exposed to enough bad data to train it to say things that are hateful or mean. In this instance the software and the hardware are working correctly; they haven't been compromised. Yet the algorithm is now doing things an organization probably doesn't want it to do, as a result of exposure to bad data.

What steps can private and public sector organizations start to take now to ensure this third wave of AI benefits organizations?

David: For societies that are open and pluralistic in nature, I think we need to have a conversation across both private and public sector interests about where we want to go on AI security and resilience. We have a military to protect against nation-state threats, yet an open society places the security responsibility on the individual small business or startup.

And it creates an interesting challenge. We talked a little bit about the cybersecurity threats, but we also have the challenge of dealing with misinformation; we are finding more cases where bad actors are using AI to create the appearance of uniquely scripted, uniquely edited videos read by a computer narrator. They make it appear as though lots of people are having conversations or watching videos of a certain type. As a result, the cognitive thought space of open society is being challenged.

In open societies, with freedom of the press, people should be able to say whatever they want. With AI, we now have the added challenge of having to go beyond simple tests of whether an entity is a human or not. Now we need to think about who might be mass-producing or mass-uploading videos to try to spread misinformation, overwhelm systems, or make it look as though lots of people are having video conversations about an issue. Closed autocratic societies that don't separate their private and public sectors can deal with misinformation simply by removing the sources or censoring them. That's not the path you want to take in open societies.

Bob: Organizations of all sizes can take advantage of AI in multiple ways. One is you can tap into what somebody else is doing. For example, every one of us with a smartphone now has access to either Amazon or Apple or Google’s AI capabilities through voice. And so as individuals, we’re starting to use that more and more frequently. As businesses, we can use AI capabilities like that to improve our cybersecurity or improve our market understanding or shape what we need to do with our products to better serve our customers. AI is being used a lot to help with these customer 360-degree views. So I can understand everything I need to about my potential customer to better serve them and create tailored products for them. And those are solutions that are out there right now.

And so as companies use those, they have choices to make. Do you outsource to a provider who's doing it all for you, or are you big enough to in-source it yourself? And if you in-source it, do you have a data scientist who's managing it, or do you have a vendor providing technologies that give your average users access to the AI? A lot of planning needs to be put into what you want to do, and that's the first step. So build your AI strategy and objectives first, and then proceed from there; that's the way to get involved and keep moving.

You mentioned the public sector also. For government use, AI also has many use cases. Governments invest in AI for counter-fraud, for law enforcement and intelligence community uses, and for the Department of Defense.

In the public sector, some of the uses of AI getting a lot of traction may sound a little bit boring, but they're making huge differences. For example, in logistics and supply in the Department of Defense, using artificial intelligence to predict where supplies are needed is extremely helpful in getting the right material to the right place. And when it comes to maintenance, predicting when an engine or a part on an aircraft is likely to fail is extremely important; you may be able to do some preventive maintenance to keep that engine running. The application of artificial intelligence to those kinds of use cases is already paying off. So there's a lot of public sector investment in AI, and we expect that will continue.
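[Ed. Note: As a rough, hypothetical sketch of the predictive-maintenance idea Gourley mentions (synthetic data and features, not an actual Department of Defense system), a simple model can be trained on past sensor readings labeled with whether the part failed soon afterward, then used to rank which engines to inspect first.]

```python
# Hypothetical predictive-maintenance sketch: score engines by failure risk
# from simple sensor features. All data, features, and thresholds are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Features per engine: [vibration_level, oil_temp_c, hours_since_overhaul]
X = rng.normal(loc=[2.0, 90, 400], scale=[0.5, 10, 150], size=(500, 3))
# Synthetic labels: failures are more likely with high vibration and many hours.
y = ((X[:, 0] > 2.5) & (X[:, 2] > 500)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

fleet = np.array([[1.8, 85, 200],    # healthy-looking engine
                  [3.1, 95, 700]])   # high vibration, long since overhaul
risk = model.predict_proba(fleet)[:, 1]
for i, p in enumerate(risk):
    print(f"Engine {i}: failure risk {p:.2f} -> {'inspect' if p > 0.5 else 'ok'}")
```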

Would you use any of the security software from Kaspersky? [Ed. Note: Kaspersky is a Moscow-based security software firm banned for use by the US government by order of the President in December 2017, amid concerns it was vulnerable to Kremlin influence.]

Bob: No, I would not use Kaspersky software, because it's based in a country that can be influenced by bad actors that do not have U.S. national interests in mind. That's just me. There are a lot of global companies that might say, “Oh okay, I wouldn't use any U.S. software either.” Well, I hate to say it, but people are going to have to start making decisions like that, and for me, Kaspersky is on the no-buy list.

We know for a fact that the company operates inside a country where the rule of law is only respected when the rulers of that country want it to be. So if they want to twist the company's arm and say, “You've got to do something for me,” they'll do it. That also goes for software companies in China. The rule of law exists in China, and it's very important there, but when the Communist Party wants to do something, the rule of law is secondary. So I don't believe we should be depending on software that we buy from China either.

What trends do you see for societies and AI for the decade ahead?

David: The overarching question that open, pluralistic societies need to ask is how they can use AI as a “force for good” in the world. Currently, I would submit that the world we're going into over the next 10 years is better positioned for closed autocratic societies, which don't separate their private from their public sector, to capitalize on what AI can do, compared to open pluralistic societies that do maintain that separation. This is a significant concern: pluralistic societies might become either more fragmented or fall behind when it comes to keeping up with the social uses of AI and related technologies because of how they are structured.

Please note, I'm not saying we shouldn't keep our private and public sectors separate. That separation is a strength of what we do here in the U.S. and in Europe. Yet I raise the concern that open, pluralistic societies might be at a structural disadvantage in the digital futures ahead, because we've got to figure out how we improve our resilience as a society to these new challenges. No one answer will come from any one sector. To thrive in the future ahead will require collaborations across sectors to collectively up our game.

Learn more about Bob Gourley’s OODA.

Learn more about Dr. David Bray’s People-Centered Internet.