Blog — Nov 24, 2022

AI’s Role in Cybersecurity

3 minute read
Artificial intelligence (AI) has been seen as a technology of great potential since the field was founded in 1956. Built on algorithms that learn from real-world data, AI and machine learning have been developed to help automate tasks that are predictable and repeatable.

AI has been deployed to improve activities like customer service and sales, by helping people carry out their roles more effectively and by recommending actions to take based on previous experiences.

AI has a rapidly growing role in improving security for business processes and IT infrastructure. According to research conducted by KPMG in 2021, 93% of financial services business leaders are confident in the ability of AI to help them detect and defeat fraud.

According to IBM research in association with APQC, 64% of companies today are using AI in some form for their security capabilities, while 29% are planning their implementation. IBM also found that security users were one of the most common groups using AI in its Global AI Adoption survey for 2022, at 26%. At the same time, concerns around data security held back AI adoption for around 20% of companies.

However, all this emphasis on AI for security can be misleading. While AI and machine learning techniques are materially improving fraud detection and threat detection, caution is warranted about all the hype and expectations that come with AI.

Keeping a realistic view in mind

AI is best positioned for success when large volumes of consistent data are available. Trained on large collections of malicious and benign files, AI can detect and flag new examples that share the same characteristics. These automated detections exceed the capabilities of previous approaches that relied on human actions or rules-based systems, because they can identify statistical patterns across billions of examples that humans are unable to analyse at scale.
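As a minimal sketch of this idea, the toy classifier below (with made-up features and numbers, not any production model) learns the average characteristics of labelled malicious and benign files and flags new files by whichever class they most resemble:

```python
# Toy sketch (hypothetical data and features): classifying files as
# malicious or benign from statistical characteristics learned from examples.
from statistics import mean

# Invented training data: (byte entropy, imported API count) per file.
malicious = [(7.8, 120), (7.5, 95), (7.9, 140)]   # packed, API-heavy samples
benign    = [(5.1, 30), (4.8, 25), (5.5, 40)]     # typical executables

def centroid(samples):
    """Average feature vector for one class."""
    return tuple(mean(col) for col in zip(*samples))

mal_c, ben_c = centroid(malicious), centroid(benign)

def classify(features):
    """Label a new file by its nearest class centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return "malicious" if dist(features, mal_c) < dist(features, ben_c) else "benign"

print(classify((7.7, 110)))  # high entropy, many imports
print(classify((5.0, 28)))   # ordinary-looking file
```

Real detection models learn from millions of files and far richer features, but the principle is the same: statistical patterns learned from labelled examples generalise to files no human has ever reviewed.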

Beyond identifying malicious files, AI models can now replicate human intelligence in detecting sophisticated attacks that utilise obfuscated scripts and existing IT tooling. This has been achieved by learning from large volumes of human investigations into security events and incidents, identifying the specific usage traits leveraged by novel attacks that would otherwise go unnoticed in the noise of normal IT activity.

These AI-based approaches can identify rare anomalies that indicate the actions of a sophisticated attack. However, the emphasis here is ‘can’. These models can also generate too many false positives and be confused by normal variations in activity across the organisation’s IT infrastructure and applications. This rash of alerts can then limit the ability of the human team to act because they have insufficient time to investigate all the anomalous behaviours.
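To illustrate the false-positive problem, the sketch below (with invented activity counts) uses a simple statistical baseline: a genuine attack burst crosses the alert threshold, but so does a benign month-end spike, producing an alert an analyst must still triage:

```python
# Illustrative sketch (made-up numbers): a z-score anomaly detector flags
# any activity count far from the learned baseline -- including benign
# spikes, which become false positives for the human team to investigate.
from statistics import mean, stdev

baseline = [100, 105, 98, 102, 110, 97, 103]   # daily logins in a normal week
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the mean."""
    return abs(count - mu) / sigma > threshold

print(is_anomalous(480))  # credential-stuffing burst: flagged
print(is_anomalous(250))  # month-end reporting spike: also flagged (false positive)
print(is_anomalous(104))  # ordinary day: not flagged
```

Both spikes look identical to the detector; only human investigation (or richer context fed back into the model) can tell the attack from the business-as-usual variation.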

Best of both worlds

Using AI effectively within your IT security processes requires balancing the accuracy of predictions against how much human effort can be devoted to investigating potential threats. When AI has enough data and context to achieve near-perfect accuracy, as with malicious file detections, the predictions can be incorporated into automated processes that stop threats without any human intervention. When AI can detect unusual and malicious behaviours but still requires human investigation to confirm true threats, the best approach is to ensure the investigative efforts are providing the desired value to your security program.
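One way to picture this balance is a confidence-based triage policy; the thresholds below are purely illustrative assumptions, not recommendations:

```python
# Hedged sketch: routing detections by model confidence score.
def triage(score):
    """Map a model confidence score in [0, 1] to a handling decision."""
    if score >= 0.99:       # near-certain (e.g. known-bad file): automate
        return "block automatically"
    if score >= 0.60:       # suspicious behaviour: needs human judgement
        return "queue for analyst investigation"
    return "log only"       # low confidence: record for later correlation

for s in (0.999, 0.75, 0.20):
    print(s, "->", triage(s))
```

Where the middle band starts and ends is exactly the trade-off in the text: set it too wide and analysts drown in alerts; too narrow and real threats slip into the "log only" bucket.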

Implementing behavioural detection is a necessary step to keep up with the rapid innovation of attackers who are constantly working to evade detection. Putting AI-powered solutions in place can help security teams to process large volumes of data and prioritise investigations of potential threats.

To achieve this, teams have to develop a level of maturity in their processes around automation and investigation, and how items are handed off between AI-based systems and human analysts. The feedback cycle between automated detections and human analysis is critical, and AI systems become more impactful if they are able to continuously learn.
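A toy sketch of such a feedback cycle (a hypothetical workflow, not any real product's API): analyst verdicts adjust the detector's alerting threshold, so repeated false positives gradually raise the bar for future alerts:

```python
# Hypothetical sketch: analyst feedback continuously tunes the detector.
class AdaptiveDetector:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.verdicts = []           # True = confirmed threat, False = false positive

    def alerts(self, scores):
        """Return the scores that currently cross the alert threshold."""
        return [s for s in scores if s >= self.threshold]

    def record_verdict(self, confirmed):
        """Analyst feedback: raise the bar on false positives, lower it on hits."""
        self.verdicts.append(confirmed)
        self.threshold += -0.02 if confirmed else 0.01
        self.threshold = min(max(self.threshold, 0.1), 0.9)  # keep within sane bounds

d = AdaptiveDetector()
print(len(d.alerts([0.3, 0.55, 0.8])))   # alerts at the initial threshold
d.record_verdict(False)                  # analyst: false positive
d.record_verdict(False)                  # analyst: false positive again
print(round(d.threshold, 2))             # threshold has drifted upward
```

Production systems close this loop with retraining rather than a single scalar threshold, but the principle holds: without the human verdicts flowing back, the model never improves.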

The reality today is that humans are still at the heart of any complicated cyberattack: humans set up the attack, and humans carry out the defensive actions that prevent a breach. The impact of AI in security will depend on how well systems incorporate new context and examples provided by expert human analysts.

Attackers are certainly becoming more creative in their approaches and tactics, finding new vulnerabilities and using AI-driven automation to amplify their capabilities. However, they can only carry out their attacks based on what they discover.

For defenders, understanding the sheer volume of data in their own environments can provide them with a better picture of what good looks like, helping them spot and stop attackers that deviate from expected behaviour. The true value of artificial intelligence in security will be based on how well it amplifies the ability of security teams to detect and defeat attackers.

Originally posted on datacentrereview.com

Dustin Rigg Hillard, Chief Technology Officer
Dustin’s vision is founded on simplifying and accelerating the adoption of machine learning for new use cases. He is focused on automating security expertise and understanding normal network behavior through machine learning. He has deep ML experience in speech recognition, translation, natural language processing, and advertising, and has published over 30 papers in these areas.