Barry Shteiman is an information security and cybersecurity expert at Exabeam, a provider of user behavior intelligence. He recently gave an interview on User and Entity Behaviour Analytics and on how data science and machine learning are applied to cybersecurity.
What is UEBA?
BS: UEBA stands for User and Entity Behaviour Analytics and it’s an analytics-led threat detection technology.
UEBA uses machine learning and data science to gain an understanding of how Users (humans) and Entities (machines) within an environment typically behave.
As every IT environment is an interconnected web of humans and machines, UEBA helps to identify normal and abnormal behaviour for both groups to provide complete visibility. Then, by looking for risky, anomalous activity that deviates from normal behaviour, UEBA helps identify cyber threats.
What would a business need UEBA for?
BS: All of the biggest data breaches, judged either by the number of records breached or the importance of the data stolen, have involved attackers leveraging stolen user credentials to gain access. Businesses need UEBA because their existing threat detection tools are unable to detect hackers who use stolen, but valid, user credentials. This is because an attacker with valid credentials looks just like a regular user; the only difference is their behaviour. UEBA helps enterprises find and root out attackers who impersonate employees, and it does this by comparing behaviour under the stolen credentials with the legitimate user's normal behaviour.
How does it work in practice?
BS: UEBA aims to understand what 'normal behaviour' is for all users and entities in an environment. It does this by using data science to build a behavioural model for each attribute of a user or machine interacting with an IT environment. Very simply, the model is built by recording a user or machine's activities to form a profile over time. Once there is enough data, data science can be used to identify trends and form a baseline. With this in place, each time the user or entity does something anomalous, the model adds risk points to the profile. If the risk score reaches a certain threshold, let's say 90 risk points or more, the business's security team is notified and can investigate. This approach greatly reduces false positives because several abnormalities must occur before an analyst is alerted.
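The scoring loop described above — build a baseline, add risk points for anomalies, alert past a threshold — can be sketched in a few lines of Python. This is a minimal illustration only: the class name, the z-score anomaly rule, and the 25-points-per-anomaly value are assumptions for the sketch, not Exabeam's actual model; only the 90-point alert threshold comes from the interview.

```python
from statistics import mean, stdev

ALERT_THRESHOLD = 90  # risk points, per the interview's example


class UserProfile:
    """Baseline for one numeric behavioural attribute of a user or entity
    (e.g. logins per hour). Names and parameters here are illustrative."""

    def __init__(self, min_samples=30):
        self.samples = []            # observed values forming the baseline
        self.min_samples = min_samples  # wait for "enough data" before scoring
        self.risk_score = 0

    def observe(self, value, risk_points=25):
        """Record an activity; add risk points if it deviates from the
        baseline. Returns True once the profile crosses the alert threshold."""
        if len(self.samples) >= self.min_samples and self._is_anomalous(value):
            self.risk_score += risk_points
        self.samples.append(value)
        return self.risk_score >= ALERT_THRESHOLD

    def _is_anomalous(self, value, z_cutoff=3.0):
        # Simple z-score test: flag values far from the baseline mean.
        mu, sigma = mean(self.samples), stdev(self.samples)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > z_cutoff
```

Because a single anomaly adds only 25 of the 90 required points, one odd event never alerts on its own; several abnormalities must accumulate first, which is the false-positive reduction the interview describes.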
Read the source article at Computer Business Review.