Cyber attacks have been in the news a lot lately. From cases of ransomware holding hospital records hostage to the hack that crippled Sony to the security breach that left VTech toys vulnerable, a lot of damage can be done if companies don't adequately protect their data. But oftentimes, signs that a system has been compromised are not clear until it's too late. Human analysts may miss the evidence, while automated detection systems tend to generate a lot of false alarms.
What’s the solution? Cue the rise of artificial intelligence, or at least AI that can work in tandem with human analysts to spot digital clues that could be signs of trouble.
A research team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and machine-learning startup PatternEx has developed an artificial intelligence platform called AI2 — or AI "squared" — that can predict cyber attacks 85 percent of the time, working together with input from human analysts. This is about three times better than benchmarks set by past systems, and it reduces the number of false-positive results by a factor of five, the group said in a press release.
The system was tested on 3.6 billion pieces of data, or "log lines," produced by millions of users over a three-month period. AI2 sifts through all the data and clusters it into patterns using unsupervised machine learning. Suspicious patterns of activity are sent over to human analysts, who confirm whether they are actual attacks or false positives. The AI system then folds this feedback into its models to produce more accurate results on the next data set — so it gets better and better as time goes on.
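The loop described above can be sketched in a few lines of Python. This is a minimal illustration of the general human-in-the-loop pattern, not AI2's actual implementation: an unsupervised step (here, a simple z-score outlier detector standing in for AI2's clustering) flags suspicious items, a simulated analyst labels them, and the confirmed labels are used to learn a decision threshold for the next batch. All data values and the threshold rule are hypothetical.

```python
from statistics import mean, stdev

def outlier_scores(values):
    """Unsupervised step: z-score each log-line feature value."""
    mu, sigma = mean(values), stdev(values)
    return [abs(v - mu) / sigma for v in values]

def top_outliers(values, k=3):
    """Surface the k most anomalous items for analyst review."""
    scores = outlier_scores(values)
    ranked = sorted(range(len(values)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

# Hypothetical per-user event counts extracted from log lines.
counts = [12, 9, 11, 10, 250, 13, 8, 300, 11, 10]

# 1. Unsupervised: flag the most suspicious patterns.
flagged = top_outliers(counts, k=3)

# 2. Analyst feedback (simulated): confirm which flags are real attacks.
#    Note that an extreme *low* outlier gets flagged too, and the analyst
#    marks it as a false positive.
analyst_labels = {i: counts[i] > 100 for i in flagged}

# 3. Supervised: learn a decision threshold from the confirmed labels,
#    to score the next batch of log lines more accurately.
attacks = [counts[i] for i, is_attack in analyst_labels.items() if is_attack]
benign = [counts[i] for i, is_attack in analyst_labels.items() if not is_attack]
threshold = (min(attacks) + max(benign)) / 2 if attacks and benign else None
```

On this toy data, the unsupervised step flags both true spikes and one harmless low outlier; the analyst's labels let the supervised step place a threshold that would not re-flag that false positive on future batches, which is the "gets better over time" behavior the researchers describe.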
Development of the system began two years ago, when PatternEx was founded. CSAIL research scientist Kalyan Veeramachaneni developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and a former CSAIL postdoc.
The goal was to figure out how to bring artificial intelligence technology to the infotech space, Veeramachaneni told CBS News.
"We looked at a couple of machine-learning solutions, and basically we would go to the data and try to identify some structure in that data. You are trying to find outliers, and the problem was there were a number of outliers that we were trying to show the analysts; there were just too many of them," Veeramachaneni said. "Even if they are outliers, you know, they aren't necessarily attacks. We realized that finding the actual attacks involved a mix of supervised and unsupervised machine learning. We saw that's what worked, and that's what was missing in the industry. We decided that we should start building such a system — machine learning that also involved human input."
If this collaboration between man and machine is so much more effective at defending against cyber attacks, why was it missing from the industry? Veeramachaneni said that until very recently, artificial intelligence systems were just not advanced enough for this kind of prediction accuracy.