AI’s Role in Cybersecurity

BY Dustin Rigg Hillard

November 24, 2022 | 3 MINS READ

Cybersecurity Strategy

AI/ML

Artificial intelligence (AI) has been seen as holding great potential since the field was founded in 1956. Built on computing algorithms that learn from real-world data, AI and machine learning have been developed to help automate tasks that are predictable and repeatable.

AI has been deployed to improve activities like customer service and sales, by helping people carry out their roles more effectively and by recommending actions to take based on previous experiences.

AI has a rapidly growing role in improving security for business processes and IT infrastructure. According to research conducted by KPMG in 2021, 93% of financial services business leaders are confident in the ability of AI to help them detect and defeat fraud.

According to IBM research in association with APQC, 64% of companies today are using AI in some shape or form for their security capabilities, while 29% are planning their implementation. IBM's Global AI Adoption survey for 2022 also found that security was one of the most common areas of AI use, cited by 26% of respondents. At the same time, concerns around data security held back AI adoption for around 20% of companies.

However, all this emphasis on AI for security can be misleading. While AI and machine learning techniques are materially improving fraud detection and threat detection, caution is warranted about all the hype and expectations that come with AI.

Keeping a realistic view in mind

AI is best positioned for success when large volumes of consistent data are available. By learning from large collections of malicious and benign files, AI can detect and flag new samples that share the same characteristics. These automated detections exceed the capabilities of previous approaches that relied on human action or rules-based systems, because they can identify statistical patterns across billions of examples that humans are unable to analyse at scale.
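
To make this concrete, here is a minimal sketch of that kind of supervised file detection, using scikit-learn with invented stand-in features and labels; the feature set, model choice, and score threshold are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: supervised detection trained on labelled malicious/benign files.
# Features (e.g. size, entropy, import counts) and the model are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 16))       # stand-in feature vectors, one row per file
y = rng.integers(0, 2, size=10_000)     # labels: 1 = malicious, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)             # learn from the labelled examples

# New, unseen files are flagged when the model assigns a high malicious score.
scores = model.predict_proba(X_test)[:, 1]
flagged = scores > 0.9
print(f"Files flagged for blocking: {int(flagged.sum())}")
```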

Beyond identifying malicious files, AI models can now replicate human intelligence in detecting sophisticated attacks that utilise obfuscated scripts and existing IT tooling. This has been achieved by learning from large volumes of human investigations into security events and incidents, identifying the specific usage traits leveraged by novel attacks that would otherwise go unnoticed in the noise of normal IT activity.

These AI-based approaches can identify rare anomalies that indicate the actions of a sophisticated attack. However, the emphasis here is ‘can’. These models can also generate too many false positives and be confused by normal variations in activity across the organisation’s IT infrastructure and applications. This rash of alerts can then limit the ability of the human team to act because they have insufficient time to investigate all the anomalous behaviours.
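
As a rough illustration of both the anomaly detection and the alert-volume problem, the sketch below uses a generic isolation forest over invented activity features; the event volume and flag rate are assumptions, chosen only to show how even a small false-positive rate becomes a heavy investigation load.

```python
# Minimal sketch: unsupervised anomaly detection over activity features, and why
# even a tiny flag rate can swamp analysts. Volumes and features are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_activity = rng.normal(size=(50_000, 8))   # stand-in for routine IT telemetry

detector = IsolationForest(contamination=0.001, random_state=0)
detector.fit(normal_activity)

# Score one day of new events: predict() returns -1 for anomalies, 1 for inliers.
todays_events = rng.normal(size=(200_000, 8))
alerts = int((detector.predict(todays_events) == -1).sum())

# A ~0.1% flag rate over 200,000 ordinary events still yields hundreds of alerts,
# each of which needs a human analyst's time to confirm or dismiss.
print(f"Anomalies flagged today: {alerts}")
```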

Best of both worlds

Using AI effectively within your IT security processes requires balancing the accuracy of predictions against how much human effort can be devoted to investigating potential threats. When AI has enough data and context to achieve near-perfect accuracy, as with malicious file detections, the predictions can be incorporated into automated processes that stop threats without any human intervention. When AI can detect unusual and malicious behaviours but still requires human investigation to confirm true threats, the best approach is to ensure that analysts' investigative effort is focused where it provides the most value to your security program.
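
One way to picture that balance is a simple triage rule: automatically contain only near-certain detections and queue the rest for analysts. The sketch below is purely illustrative; the Detection shape and both thresholds are invented assumptions, not a prescribed configuration.

```python
# Minimal sketch of score-based triage: automate only near-certain detections,
# route uncertain ones to analysts. Thresholds and fields are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    entity: str        # host, user, or file the detection relates to
    score: float       # model confidence that the activity is malicious

AUTO_BLOCK_THRESHOLD = 0.99    # e.g. high-precision malicious-file detections
INVESTIGATE_THRESHOLD = 0.70   # unusual behaviour worth an analyst's time

def triage(detection: Detection) -> str:
    if detection.score >= AUTO_BLOCK_THRESHOLD:
        return "auto_block"       # stop the threat without human intervention
    if detection.score >= INVESTIGATE_THRESHOLD:
        return "analyst_queue"    # needs human judgement to confirm a true threat
    return "log_only"             # keep for context rather than spend analyst time

print(triage(Detection(entity="host-42", score=0.995)))    # auto_block
print(triage(Detection(entity="user-jdoe", score=0.81)))   # analyst_queue
```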

Implementing behavioural detection is a necessary step to keep up with the rapid innovation of attackers who are constantly working to evade detection. Putting AI-powered solutions in place can help security teams to process large volumes of data and prioritise investigations of potential threats.

To achieve this, teams have to develop a level of maturity in their processes around automation and investigation, and around how work is handed off between AI-based systems and human analysts. The feedback cycle between automated detections and human analysis is critical, and AI systems become more impactful when they are able to continuously learn from that feedback.
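
A minimal sketch of such a feedback loop, assuming a generic scikit-learn model and invented data shapes, might simply fold analyst verdicts back into the training set and retrain:

```python
# Minimal sketch: analyst verdicts on investigated alerts are folded back into
# the training data so the detector keeps learning. Shapes and model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ContinuousLearner:
    """Wraps a detector and retrains it as analyst feedback arrives."""

    def __init__(self, X: np.ndarray, y: np.ndarray):
        self.X, self.y = X, y
        self.model = LogisticRegression().fit(X, y)

    def incorporate_feedback(self, features: np.ndarray, verdicts: np.ndarray) -> None:
        # Verdicts come from human investigations: 1 = confirmed threat, 0 = false positive.
        self.X = np.vstack([self.X, features])
        self.y = np.concatenate([self.y, verdicts])
        self.model = LogisticRegression().fit(self.X, self.y)   # periodic retraining

rng = np.random.default_rng(0)
learner = ContinuousLearner(rng.normal(size=(1_000, 8)), rng.integers(0, 2, size=1_000))
learner.incorporate_feedback(rng.normal(size=(50, 8)), rng.integers(0, 2, size=50))
```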

The reality today is that humans are still at the heart of any complicated cyberattack – humans will set up the attack, and humans will carry out the defensive actions and prevent any breach. The impact of AI in security will depend on how well systems incorporate new context and examples provided by expert human analysts.

Attackers are certainly becoming more creative in their approaches and tactics, finding new vulnerabilities and using AI-driven automation to amplify their capabilities. However, they can only carry out their attacks based on what they are able to discover about a target environment.

For defenders, making sense of the sheer volume of data in their own environments can provide a better picture of what good looks like, helping them spot and stop attackers who deviate from expected behaviour. The true value of artificial intelligence in security will be based on how well it amplifies the ability of security teams to detect and defeat attackers.

Originally posted on datacentrereview.com

Dustin Rigg Hillard, Chief Technology Officer

Dustin Rigg Hillard is responsible for leading product development and technology innovation, systems teams and corporate IT at eSentire. His vision is rooted in simplifying and accelerating the adoption of machine learning for new use cases.

Prior to eSentire’s acquisition of Versive, he was CTO at Versive, where he focused on automating security expertise and understanding normal network behavior through machine learning. Dustin was also an early data scientist for Microsoft’s Cortana and worked in ad-revenue and relevance at Yahoo! He has deep ML experience in speech recognition, translation, natural language processing, and advertising, and has published over 30 papers in these areas.

Dustin holds a Bachelor of Science, Master of Science, and Ph.D. in Electrical Engineering from the University of Washington.
