McAfee CTO Steve Grobman is Wary of AI Models for Cybersecurity


Artificial intelligence continues to permeate the information security industry, but Steve Grobman has reservations about the technology’s limitations and effectiveness.

Grobman, senior vice president and CTO of McAfee, spoke about the evolution of artificial intelligence at the 2018 AI World Conference in Boston. McAfee has extolled the benefits of using AI models to enhance threat intelligence, which the company said is enormously valuable for detecting threats and eliminating false positives. But Grobman said he also believes AI and machine learning have limitations for cybersecurity, and he warned that the technology can be designed in a way that provides illusory results.

In a Q&A, Grobman spoke with TechTarget following AI World about the ease with which machine learning and AI models can be manipulated and misrepresented to enterprises, as well as how the barrier to entry for the technology has lowered considerably for threat actors. Here is part one of the conversation with Grobman.

Editor’s note: This interview has been edited for length and clarity.

What are you seeing with artificial intelligence in the cybersecurity field? How does McAfee view it?

Steve Grobman: Both McAfee and really the whole industry have embraced AI as a key tool to help develop a new set of cyberdefense technologies. I do think one of the things that McAfee is doing that is a little bit unique is we’re looking at the limitations of AI, as well as the benefits. One of the things that I think a lot about is how different AI is for cybersecurity defense technology compared to other industries where there’s not an adversary.


Down the street at AI World, I used the analogy that you’re in meteorology and you’re building a model to track hurricanes. As you get really good at tracking hurricanes, it’s not like the laws of physics decide to change on you, and water evaporates differently. But, in cybersecurity, that’s exactly the pattern that we always see. As more and more defense technologies are AI-based, bad actors are going to focus on techniques that are effective at evading AI or poisoning the training data sets. There are a lot of countermeasures that can be used to disrupt AI.

And one of the things that we found in some of our research is a lot of the AI and machine learning models are actually quite fragile and can be evaded. Part of what we’re very focused on is not only building technology that works well today, but looking at what can we do to build more resilient AI models.

One of the more effective things we've done is investigate the field of adversarial machine learning. It's essentially the field where you're studying the techniques that would cause machine learning to fail or break down. We can then take adversarially impacted samples and reintroduce them into our training set. And that actually makes our AI models more resilient.
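
To make the technique concrete, here is a minimal, hypothetical sketch of adversarial training, not McAfee's actual pipeline: a simple scikit-learn classifier is attacked with fast-gradient-sign-style perturbations, and the perturbed samples are then folded back into the training set. The model, features and epsilon value are all illustrative assumptions.

```python
# Hypothetical sketch of adversarial training (illustrative only, not McAfee's code).
# Train a classifier, craft FGSM-style evasive samples against it, then fold
# adversarial versions of the training data back in to make the model more resilient.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def fgsm_perturb(clf, X, y, eps=0.5):
    """Fast-gradient-sign-style perturbation for a linear model: nudge each
    sample in the direction that increases its log-loss."""
    w = clf.coef_.ravel()
    p = clf.predict_proba(X)[:, 1]            # predicted P(class 1)
    grad = (p - y)[:, None] * w[None, :]      # d(log-loss)/dx for logistic regression
    return X + eps * np.sign(grad)

# Evasive samples crafted against the trained model degrade its accuracy...
X_adv_test = fgsm_perturb(model, X_test, y_test)
print("clean accuracy:      ", model.score(X_test, y_test))
print("adversarial accuracy:", model.score(X_adv_test, y_test))

# ...so reintroduce adversarially impacted training samples and refit.
X_adv_train = fgsm_perturb(model, X_train, y_train)
hardened = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_adv_train]),
    np.concatenate([y_train, y_train]),
)
print("hardened adversarial accuracy:", hardened.score(X_adv_test, y_test))
```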

Thinking about the long-term approach instead of just the near term is important. And I do think one of the things I’m very concerned about for the industry is the lack of nuanced understanding of how to look at solutions built on AI and understand whether or not they’re adding real value. And part of my concern is it’s very easy to build an AI solution that looks amazing. But unless you understand exactly how to evaluate it in detail, it actually can be complete garbage.

Speaking of understanding, there seems to be a lot of confusion about AI and machine learning and the differences between the two and what these algorithms actually do for, say, threat detection. For an area that’s received so much buzz and attention, why do you think there’s so much confusion?

Grobman: Actually, artificial intelligence is an awful name, because it’s really not intelligent, and it’s actually quite misleading. And I think what you’re observing is one of the big problems for AI — that people assume the technology is more capable than it actually is. And it is also susceptible to being presented in a very positive fashion.

I wrote a blog post a while ago; I wanted to demonstrate this concept of how a really awful model could be made to look valuable. And I didn't want to do it with cybersecurity, because I wanted to make the point with something everybody understands, and cybersecurity is a nuanced and complex field. Instead, I built a machine learning model to predict the Super Bowl. It took as inputs things like regular-season record, offensive strength, defensive strength and a couple of other key inputs.

The model performed phenomenally. It correctly predicted 9 of the 10 games that were sent into it. And the one game it got wrong, it actually predicted both teams would win. It's actually funny: when I coded this thing up, that wasn't one of the scenarios I contemplated. It's a good example of a model [that] sometimes doesn't actually understand the nuance of the reality of the world, because you can't have both teams win.

But, other than that, it accurately predicted the games. The reason I'm not in Vegas making tons of money on sports betting is that I intentionally built the model in violation of all of the sound principles of data science. I did what we call overtraining of the model: I did not hold back the test set of data from the training set. And because I trained the model on data that was actually used within these 10 games that I sent it, it actually learned who the winners of those games were, as opposed to being able to predict the Super Bowl.

If you just send it data from games that it was not trained on, you get a totally different answer. It got about 50% of the games correct, which is clearly no better than flipping a coin. The more critical point that I really wanted to make was if I was a technology vendor selling Super Bowl prediction software, I could walk in and say, ‘This is amazing technology. Let me show you how accurate it is. You know, here’s my neural network; you send in this data and, because of my amazing algorithm, it’s able to predict the outcome of the winners.’
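
To see the pitfall Grobman describes in code, here is a minimal, hypothetical sketch (synthetic data, not his Super Bowl model) contrasting a leaky evaluation, where the model is scored on the same games it was trained on, with an honest evaluation on held-out games.

```python
# Hypothetical sketch of train/test leakage (illustrative only, synthetic data).
# The features carry no real signal, so any apparent accuracy on training data
# is memorization, not prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # stand-ins for record, offense, defense, etc.
y = rng.integers(0, 2, size=200)     # game outcomes with no relationship to X

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Leaky evaluation: train on everything, then score on the games it memorized.
leaky = RandomForestClassifier(random_state=0).fit(X, y)
print("accuracy on training games:", leaky.score(X, y))              # near-perfect

# Honest evaluation: hold games back from training and score only on those.
honest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on unseen games:  ", honest.score(X_test, y_test))   # ~ coin flip
```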

And going back to cybersecurity, that's a big part of the problem. It's very easy to build something that tests well if the builder of the technology is able to create the test. And that's why, as we move more and more into this field, having a technical evaluation of complex technology that can determine whether it is biased, and whether it is being tested in a way that genuinely shows whether or not it's effective, is going to be really, really important.

Read the source post at TechTarget.