Here Are 2019 Cybersecurity Predictions: Artificial Intelligence

WatchGuard Threat Lab Research Team

AI-driven chatbots go rogue

In 2019, cybercriminals and black hat hackers will create malicious chatbots on legitimate sites to socially engineer unknowing victims into clicking malicious links, downloading files containing malware, or sharing private information.

Candace Worley, Chief Technical Strategist, McAfee

There are myriad decisions that must be made when a company extends its use of AI. There are implications for privacy regulation, but also legal, ethical, and cultural implications that warrant the creation of a specialized role in 2019 with executive oversight of AI usage. In some cases, AI has demonstrated unfavorable behavior such as racial profiling, unfairly denying individuals loans, and incorrectly identifying basic information about users. Chief Analytics Officers and Chief Data Officers (CAOs and CDOs) will need to supervise AI training to ensure AI decisions avoid harm. Further, AI must be trained to deal with real human dilemmas and to prioritize justice, accountability, responsibility, transparency, and well-being, while also detecting hacking, exploitation, and misuse of data.

Jason Rebholz, Senior Director, Gigamon

Offloading decision-making to AI software

Current security solutions largely rely on signature-based detections (“I have seen this before and I know it is bad”) and analytics-based detections (“this pattern of activity leads me to believe this activity is suspicious”). An analyst then reviews the flagged activity and performs basic triage to determine whether it is truly malicious or simply a false positive. With the emergence of AI, that basic decision making will be offloaded to software. While this isn’t a replacement for the analyst, it will give them more time to perform the advanced decision making and analysis that is not easily replaced with AI.
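
As a rough illustration of that division of labor, here is a minimal sketch in Python. Everything in it is hypothetical: the Alert record, the SIGNATURE_DB hash set, and the score thresholds are invented for the example, not taken from any particular product.

```python
# Minimal triage sketch: auto-close the obvious cases, escalate the rest.
from dataclasses import dataclass

@dataclass
class Alert:
    file_hash: str        # hash of the suspect file
    anomaly_score: float  # 0.0 (normal) .. 1.0 (highly unusual)

# Signature-based detection: "I have seen this before and I know it is bad."
SIGNATURE_DB = {"44d88612fea8a8f36de82e1278abb02f"}  # illustrative known-bad hashes

def triage(alert: Alert) -> str:
    """Return an automated disposition, escalating ambiguous cases."""
    if alert.file_hash in SIGNATURE_DB:
        return "malicious"            # high confidence: block automatically
    if alert.anomaly_score < 0.2:
        return "false_positive"       # high confidence: close automatically
    return "escalate_to_analyst"      # ambiguous: a human digs deeper

print(triage(Alert("deadbeef", 0.65)))  # -> escalate_to_analyst
```

The point is not the thresholds but the shape of the workflow: software absorbs the repetitive dispositions, and only the ambiguous middle band reaches a human.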

Morey Haber, CTO, and Brian Chappell, Senior Director, Enterprise & Solutions Architecture, BeyondTrust

AI on the attack: Skynet is becoming self-aware!

2019 will see an increasing number of attacks coordinated with the use of AI/machine learning. AI will analyze the available exploit options and develop strategies that lead to an increase in successful attacks. AI will also be able to take information gathered from successful hacks and incorporate it into new attacks, potentially learning how to identify defense strategies from the pattern of available exploits. This evolution may lead to attacks that are significantly harder to defend against.
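
The feedback loop described here resembles a classic explore/exploit algorithm. The sketch below is illustrative only: an epsilon-greedy bandit choosing among made-up strategy labels against a purely synthetic environment, with no real attack logic involved.

```python
# Epsilon-greedy bandit: estimated success rates improve with feedback.
import random

estimates = {"phishing": 0.0, "credential_stuffing": 0.0, "drive_by": 0.0}
counts = {name: 0 for name in estimates}

def simulate_outcome(name: str) -> float:
    # Stand-in environment with hidden, synthetic success rates.
    hidden = {"phishing": 0.3, "credential_stuffing": 0.5, "drive_by": 0.1}
    return 1.0 if random.random() < hidden[name] else 0.0

for _ in range(1000):
    if random.random() < 0.1:                      # explore a random option
        choice = random.choice(list(estimates))
    else:                                          # exploit the best estimate
        choice = max(estimates, key=estimates.get)
    reward = simulate_outcome(choice)
    counts[choice] += 1
    # Incremental mean: pull the estimate toward the observed outcome
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(max(estimates, key=estimates.get))  # converges on the best option
```

The same loop that makes this trivial example converge is what makes the prediction worrying: every observed success or failure sharpens the attacker's next choice.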

Malwarebytes Labs Team

Artificial intelligence will be used in the creation of malicious executables

While the idea of malicious artificial intelligence running on a victim’s system is pure science fiction, at least for the next 10 years, malware that is modified by, created by, and communicating with an AI is a very dangerous reality. An AI that communicates with compromised computers and monitors what and how certain malware is detected can quickly deploy countermeasures to create a new generation of malware. AI controllers will enable malware built to modify its own code to avoid being detected on the system, regardless of the security tool deployed. Imagine a malware infection that acts almost like the Borg from Star Trek, adjusting and acclimating its attack and defense methods on the fly based on what it is up against.
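
Why does even a trivial modification defeat signature matching? The sketch below shows the underlying mechanics with an inert byte string standing in for a payload; everything in it is illustrative.

```python
# A one-byte "mutation" invalidates a stored hash signature.
import hashlib

payload = b"inert example payload -- not actual malware"
signature = hashlib.sha256(payload).hexdigest()  # what a scanner would store

# The kind of tiny change an AI controller might apply between deployments
mutated = payload.replace(b"example", b"exampl3")

print(hashlib.sha256(mutated).hexdigest() == signature)  # False: signature miss
```

An AI controller that automates such mutations at scale, guided by feedback on which variants get caught, is exactly the countermeasure loop described above.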

Mark Zurich, senior director of technology, Synopsys

There is definitely excitement and hope around what ML/AI could do for software security and cybersecurity in particular. A significant aspect of cybersecurity is data correlation and analytics. The ability to find individual threats and threat campaigns, and to perform threat-actor attribution, based on multiple disparate sources of data (i.e., finding needles in haystacks) is a large part of the game. ML/AI can increase the speed, scale, and accuracy of this process through data modeling and pattern recognition.

However, many of the articles I’ve been reading on this topic express skepticism and concern that the application of ML/AI will lull companies into a false sense of security about their detection efficacy when that may not actually be the case. The reality appears to be that more time and investment will be required to hone the data models and patterns before ML/AI becomes a highly effective technology in software security and cybersecurity. We should expect large companies to continue investing in this technology, and startups touting ML/AI capabilities to continue cropping up, in 2019. However, it may still be a few more years until the real promise of ML/AI can be fully realized.
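
To make the "needles in haystacks" idea concrete, here is a hedged sketch of unsupervised anomaly detection over synthetic per-host features. The feature columns and the data are invented, and it assumes scikit-learn is available; real detection pipelines need far richer features and tuning, which is exactly the honing effort described above.

```python
# Flag hosts whose traffic profile is unlike the rest of the fleet.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: [logins_per_hour, bytes_out_MB] for 1,000 ordinary hosts...
normal = rng.normal(loc=[5.0, 50.0], scale=[2.0, 15.0], size=(1000, 2))
# ...plus a handful of hosts quietly exfiltrating data (the "needles")
needles = rng.normal(loc=[5.0, 900.0], scale=[2.0, 50.0], size=(5, 2))
X = np.vstack([normal, needles])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)             # -1 marks suspected anomalies
print(np.where(flags == -1)[0])      # indices of the flagged hosts
```

On this toy data the planted needles stand out easily; the skepticism quoted above is about how much harder that gets when the haystack is real.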

To read the source article, go to SCMagazine.