Opinion: AI and ML choices can dramatically impact data security

By Dustin Rigg Hillard

January 15, 2019 | 4 MINS READ

Attacks/Breaches

Cybersecurity Strategy

AI/ML

As originally posted on Information Management on December 24, 2018

As networks have advanced in complexity, so have the tools and tactics of cybercriminals. Organizations increase their cybersecurity budgets and teams, yet breaches keep occurring. In the fight for stronger security, vendors are offering up AI and machine learning as a Holy Grail. But do these technologies actually deliver?

Frequent headlines make it clear that cybercriminals are regularly winning battles. A successful intrusion attempt needs to find only a single flaw in an enterprise's defenses, while security teams must contend with the growing complexity of more instrumentation, tools, data and alerts, all of which add to the attack surface.

This expanded attack surface only adds alert fatigue and distracting noise, leaving organizations looking for a better solution. Vendors tout AI and machine learning as that solution, but in reality these technologies can exacerbate existing problems and perpetuate the disadvantaged posture of today's security teams.

There are three common AI issues that can degrade defenses:

Issue #1: No Explanations

As AI systems scan the network, they find possible problems and assign them a score, but they don't explain why. This erodes trust and understanding with the humans who need to consume and act on the results.

When AI can't justify “sophisticated” detections with explanations that security analysts can understand, it adds to the analysts' cognitive load rather than making them more efficient and effective.
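As a rough illustration of the gap (not any specific vendor's output), compare an opaque score with one that carries the evidence behind it. The field names and weights below are hypothetical:

```python
# Hypothetical alert payloads, for illustration only.
opaque_alert = {
    "host": "ws-1042",
    "risk_score": 87,          # why 87? the analyst has no way to know
}

explained_alert = {
    "host": "ws-1042",
    "risk_score": 87,
    "evidence": [              # top factors behind the score
        {"factor": "first-seen outbound destination", "weight": 0.41},
        {"factor": "process rarely runs on this host", "weight": 0.33},
        {"factor": "transfer volume 10x host baseline", "weight": 0.26},
    ],
}

def triage_summary(alert):
    """Render the alert the way an analyst would want to read it."""
    lines = [f"{alert['host']}: risk {alert['risk_score']}"]
    for item in alert.get("evidence", []):
        lines.append(f"  - {item['factor']} (weight {item['weight']:.2f})")
    return "\n".join(lines)

print(triage_summary(opaque_alert))     # score only, nothing to act on
print(triage_summary(explained_alert))  # score plus the reasons behind it
```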

Issue #2: Too Much Information

IT security teams are already dealing with work overload; zealous implementation of AI to help detect problems only makes things worse by increasing the number of alerts. It is easy to build models that detect new potential threats, indicators of compromise or anomalous behaviors. On the surface, these appear to provide additional security, but in reality they just generate more false positives that distract overburdened security operations teams from seeing real threats.
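A quick back-of-envelope calculation shows why simply stacking detectors makes this worse. The volumes below are illustrative assumptions, not measurements:

```python
# Illustrative numbers only: what happens when detectors are stacked naively.
detectors = 20                    # anomaly/IOC/behavior models deployed
alerts_per_detector_per_day = 50
true_positive_rate = 0.01         # assume 1 in 100 alerts reflects a real threat

total_alerts = detectors * alerts_per_detector_per_day   # 1,000 alerts/day
real_threats = total_alerts * true_positive_rate          # ~10/day
false_positives = total_alerts - real_threats              # ~990/day

print(f"{total_alerts} alerts/day, ~{false_positives:.0f} of them are noise")
# Each new model raises total_alerts; the signal buried inside it barely moves.
```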

Issue #3: Generic Data

The idea behind AI is that it is intelligent – it has the ability to spot new patterns that point to potential security events. However, most AI systems actually only provide a moderate extension beyond previous rule and signature-based approaches. AI is only as powerful as the data it receives, and most implementations of AI distribute generic models that don’t understand the networks they are deployed to and are easy for adversaries to evade. When pattern detection is static across time and networks, adversaries can profile the detections and easily update tools and tactics to avoid the defenses in place.
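To make the evasion point concrete, consider a toy static detection that ships identically to every network: once an adversary learns the rule, a trivial change slips past it. The rule and traffic samples below are invented for illustration:

```python
# Toy static rule shipped identically to every customer:
# flag command-and-control beaconing that calls home at a fixed 60-second interval.
def static_beacon_rule(intervals_seconds):
    return all(abs(i - 60) < 1 for i in intervals_seconds)

naive_beacon = [60, 60, 60, 60]            # caught by the static rule
jittered_beacon = [47.2, 71.5, 55.0, 64.3] # trivially evades it

print(static_beacon_rule(naive_beacon))     # True  -> detected
print(static_beacon_rule(jittered_beacon))  # False -> missed
# Because the rule never changes across time or networks, profiling it once
# is enough to evade it everywhere.
```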

Pivoting to Stronger Security: Three Approaches

These issues paint a potentially depressing picture of AI and ML that many teams are experiencing today, but it’s not the whole picture. AI and machine learning can be powerful tools in improving enterprise defenses, but success requires a strategic approach that avoids the weaknesses of most of today’s implementations. 

There are three key approaches that will amplify the ability of security teams to work with AI, rather than adding to their problems.

Approach #1: Pick the Right Objective

An effective AI system requires an ambitious goal that reduces the workload of the security team and automates investigation with a focus on the full adversary objective. AI systems that uncover the core behaviors that an adversary must use will give security teams a small number of true risks to investigate. Effective solutions should have very low false positive rates, generating fewer than 10 high-priority investigations per week (not the hundreds and thousands of events produced by current approaches).
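The "fewer than 10 high-priority investigations per week" bar implies an extremely low false-positive rate. A rough calculation with an assumed telemetry volume makes the scale clear:

```python
# Rough arithmetic with an assumed event volume, to show the precision required.
events_scored_per_week = 50_000_000    # assumed telemetry for a midsize network
target_investigations_per_week = 10    # the bar suggested above

max_false_positive_rate = target_investigations_per_week / events_scored_per_week
print(f"max tolerable false-positive rate: {max_false_positive_rate:.2e}")
# ~2e-07: per-event detectors cannot get there on their own, which is why the
# system has to aggregate evidence up to the level of a full adversary objective
# before anything reaches an analyst.
```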

Approach #2: Understand the Environment

Attackers are forced to change tactics when IT teams focus on the adversary's core objectives. Criminals traditionally have the advantage because they can profile an environment and avoid the detections in place. AI systems can regain that advantage by understanding the environment better than the adversary can. A system that understands the specifics of an environment can flag unusual behaviors with context an adversary could only match by having complete access to the full (and constantly updating) internal data feeds the AI system learns from.
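One minimal way to picture "understanding the environment" is a per-entity baseline learned from the network's own history, so what counts as unusual differs by host and shifts as the data updates. The sketch below is illustrative, not a production detector:

```python
from collections import defaultdict
from statistics import mean, stdev

# Learn what "normal" looks like per host from the environment's own history.
history = defaultdict(list)   # host -> observed daily outbound MB

def observe(host, outbound_mb):
    history[host].append(outbound_mb)

def is_unusual(host, outbound_mb, min_days=7, sigma=3.0):
    """Flag volume far outside this host's own baseline (not a global threshold)."""
    baseline = history[host]
    if len(baseline) < min_days:
        return False                      # not enough local context yet
    mu, sd = mean(baseline), stdev(baseline)
    return sd > 0 and (outbound_mb - mu) > sigma * sd

for day in range(14):
    observe("db-server-03", 200 + day)          # quiet internal database server
    observe("build-box-17", 5_000 + day * 50)   # noisy CI machine

print(is_unusual("db-server-03", 4_000))   # True: wildly abnormal for this host
print(is_unusual("build-box-17", 4_000))   # False: routine for this host
```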

Approach #3: Maximize Human Partnership

AI and ML systems can be designed to provide maximum benefit to their human partners. They should automate typical analyst workloads and explain their results in a way that builds trust and, over time, accelerates the skill and experience development of the humans who use them. This also creates a virtuous cycle in which the algorithms learn from analysts' actions.
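The virtuous cycle can be pictured as a simple feedback loop in which analyst verdicts become signals the system learns from. This is a hypothetical sketch of the idea, not a description of any particular product:

```python
from collections import defaultdict

# Hypothetical feedback loop: analyst verdicts adjust how alert types are ranked.
verdicts = defaultdict(lambda: {"true": 0, "false": 0})   # alert_type -> counts

def record_verdict(alert_type, was_real_threat):
    """Called when an analyst closes an alert."""
    verdicts[alert_type]["true" if was_real_threat else "false"] += 1

def priority(alert_type):
    """Rank alert types by how often analysts confirmed them (with a weak prior)."""
    counts = verdicts[alert_type]
    return (counts["true"] + 1) / (counts["true"] + counts["false"] + 2)

record_verdict("impossible-travel login", True)
record_verdict("rare-process execution", False)
record_verdict("rare-process execution", False)

print(priority("impossible-travel login"))   # rises as analysts confirm it
print(priority("rare-process execution"))    # falls as analysts dismiss it
# Analyst work sharpens the next round of alerts, while the explanations returned
# with each alert help newer analysts build skill faster.
```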

The talent shortage security teams face today means that AI tools must help fill skills gaps with automation. The tools must then provide interpretability and situational awareness to help grow the skills of security teams while also making daily operations more efficient and impactful.

Many IT security teams are drowning in undifferentiated alerts, making them not more but less effective at their critical role. AI and ML technology do, in fact, hold great promise against sophisticated attackers if the above three approaches are incorporated into the organization’s overall security strategy. Thoughtful AI deployments will help teams separate real from false alarms and focus on what matters.

Dustin Rigg Hillard, Chief Technology Officer
Dustin’s vision is founded on simplifying and accelerating the adoption of machine learning for new use cases. He is focused on automating security expertise and understanding normal network behavior through machine learning. He has deep ML experience in speech recognition, translation, natural language processing, and advertising, and has published over 30 papers in these areas.
