How ChatGPT and Other Generative AI Tools Impact Risk for Cyber Insurance Providers and Security Leaders

BY eSentire

July 5, 2023 | 9 MINS READ

Cybersecurity Strategy

AI/ML

Enterprise organizations and cyber insurance providers can't ignore generative AI technologies like ChatGPT. The chatbot, which uses a probabilistic language model to produce human-sounding answers, attracted 100 million monthly active users within two months of its launch and has gained widespread adoption across many industries.

The transformative potential of this technology is exciting – ChatGPT and other Large Language Models (LLMs) can augment your existing in-house resources, accelerate processes, and provide access to easy-to-understand knowledge. Specifically for the insurance industry and for many enterprise organizations, this new technology comes with its own set of opportunities and innovative applications that can drive efficiencies.

However, generative AI tools have become a new threat vector for the insurance industry and security leaders across the board. Threat actors are already exploiting ChatGPT and similar LLMs to write dynamic malware code that can bypass security tools and create content for phishing attacks.

From a policyholder's perspective, it's also important to know that use of the public version of ChatGPT can compromise your cyber insurance coverage if it leads to a data breach.

Since employees are already using generative AI tools, it's essential that you accurately understand the cyber risks associated with this technology while maximizing the value of the latest large language models.

This blog will discuss how cyber insurers and policyholders need to think about mitigating risks related to ChatGPT.

Cyber risks associated with ChatGPT and Large Language Models (LLMs)

ChatGPT is evolving at lightning-fast speed, and its cyber risks are growing just as quickly. New features such as the mobile app and the ability to browse the Internet introduce new and unknown risks for users. ChatGPT's clones pose additional risks: the underlying techniques are publicly documented, and as companies race to create their own versions of ChatGPT, some alternatives may ship with fewer restrictions for threat actors to exploit.

Here are the most common ways threat actors exploit generative AI:

Creating content for phishing attacks and business email compromises

The predictive algorithm ChatGPT uses to construct sentences opens new possibilities for advanced social engineering attacks. Threat actors can easily draft sophisticated phishing emails that read as if a human wrote them, removing one of the telltale signs of phishing: spelling and grammar errors.

For sophisticated threat actors, ChatGPT opens new possibilities for conducting large-scale targeted phishing attacks. Threat actors with access to compromised email accounts can feed a model samples of a victim's writing and train it to closely imitate that style. With scripting and automation, ChatGPT can then create customized communications and optimize them in real time.

Writing malware code and creating dynamic malware

ChatGPT has proven to be a potent tool for developers. The model's code-generation capabilities allow programmers of all levels to automate tasks and focus on more critical aspects of their work. But the line between software and malware is thin. Although OpenAI introduced restrictions to prevent ChatGPT from being used for malicious purposes, threat actors are finding ways to get around them.

Threat actors who are able to bypass the safeguards can exploit ChatGPT to write or improve malware code. By facilitating malware creation, ChatGPT enables threat actors without technical skills to conduct cyberattacks. In fact, ChatGPT has already proven capable of writing dynamic malware that can change to bypass security tools such as endpoint detection and response (EDR) solutions. Although OpenAI continuously cracks down on the illicit use of the tool, you need to be aware of the heightened risks of malware attacks.

Sensitive data breaches

Data privacy is one of the biggest concerns with ChatGPT. It is not clear how OpenAI manages, uses, stores, or shares the data of users of the free version of ChatGPT. Since ChatGPT is trained on large data sets, there are no guarantees that some of its answers won't contain your Personally Identifiable Information (PII).

Another data privacy risk involves the data provided to ChatGPT through user prompts. The public-facing version of the tool is subject to a click-through agreement. By clicking "I agree" and using the platform, users provide legal consent to share their data with OpenAI. These legally enforceable agreements are notorious for being one-sided in favor of the company collecting the data. So if a user inputs sensitive client or corporate data they didn't intend to share, the remedies available to fix it are few and far between.

“In cyber [insurance] what we’re starting to see are the limitations around wrongful information collection. We’re seeing a lot of wrongful collections around BIPA [Biometric Information Privacy Act] and pixel tracking software, especially in the healthcare space. I would say that with the introduction of ChatGPT and the heightened awareness in the plaintiffs over wrongful collection, you’re probably going to see it collide in a lot of ways.”
- Peter Hedberg, VP of Cyber Underwriting at Corvus Insurance

Using ChatGPT to Increase Efficiency and Do More with Less

For insurance providers, LLMs can transform your underwriting practices by automating the research process, summarizing client information, and providing more accurate risk assessments. ChatGPT can also be used to draft outlines of security policies and communications. While its output still requires human expertise and revision, it can help streamline some of the daily tasks for underwriters.
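
As a concrete illustration, here is a minimal sketch of that summarization step, assuming the official openai Python SDK (v1+) and an API key in the environment; the questionnaire contents, model choice, and prompt wording are illustrative, not a production underwriting workflow:

```python
# Minimal sketch: summarizing an applicant's security questionnaire for an
# underwriter. Assumes the official `openai` Python SDK (v1+) with an
# OPENAI_API_KEY set in the environment; questionnaire text is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questionnaire = """
MFA enforced for all staff: yes
EDR deployed: partial (servers only)
Offline backups tested quarterly: no
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model works for this sketch
    messages=[
        {"role": "system",
         "content": "You assist a cyber insurance underwriter. Summarize "
                    "the applicant's security posture and flag the top "
                    "risks in plain language."},
        {"role": "user", "content": questionnaire},
    ],
)

print(response.choices[0].message.content)
```

The human review mentioned above still matters: the summary guides the underwriter's attention, it doesn't replace their judgment.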

For enterprise organizations, the applications of ChatGPT across departments include handling customer service inquiries, streamlining external communications, writing emails and copy, creating presentations, and writing code.

Generative AI has shown significant promise for security teams too. It can help defend insurers and policyholders against cyberattacks by addressing one of the most pressing issues in cybersecurity – a lack of resources and expertise.

According to research from Cybersecurity Ventures, there will be 3.5 million unfilled cybersecurity jobs in 2023. As companies fight to recruit new talent, existing security analysts are tackling increasing workloads that are difficult to manage. In recent years, some security practitioners have reported experiencing a 3x increase in the number of alerts per day.

"Cybersecurity has become the number one business risk. Every company needs to make sure that they're doing the due diligence and due care, protecting their company's assets and customers' data. To get cyber insurance, you have to show that you have all these things in place. But how do you do that when the demand for cybersecurity professionals is so strong? You have to look at ways to automate, streamline, and be more efficient. And I think this is where ChatGPT is really going to help out."
- Greg Crowley, CISO at eSentire

Here are examples of how ChatGPT can support enterprise cyber defense efforts and create efficiencies, helping you do more with less:

Collecting threat intelligence

ChatGPT can be an important tool for gathering threat intelligence and predicting where the next threat is coming from. Generative AI can enhance your threat-hunting capabilities by surfacing known vulnerabilities and exploits you may be susceptible to. Paired with intelligence collected from sources such as the dark web, it can help you anticipate potential cyber threats and get ahead of them.
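
For instance, here is a minimal sketch of using an LLM to condense raw advisory text into actionable intelligence, assuming the official openai Python SDK (v1+); the advisory snippet (including the placeholder CVE number) and the prompt are illustrative:

```python
# Minimal sketch: condensing a raw advisory into actionable threat intel.
# Assumes the official `openai` Python SDK (v1+); the advisory text and the
# CVE placeholder are illustrative only.
from openai import OpenAI

client = OpenAI()

advisory = """
CVE-2023-XXXXX: A remote code execution flaw in ExampleServer 2.4 is being
actively exploited. Attackers deliver a loader via crafted HTTP headers.
"""

prompt = (
    "From this advisory, extract: affected software, attack vector, and one "
    "recommended mitigation. Answer as a short bulleted list.\n\n" + advisory
)

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```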

Automating time-consuming tasks and simplifying complex work

AI algorithms can be trained to detect malicious activity and enable faster response. By helping you analyze large volumes of data and identify potential threats, ChatGPT can streamline time-consuming tasks such as log analysis. AI-powered tools like GitHub Copilot can also help boost the productivity of developers and security professionals. These tools use advanced natural language processing capabilities to identify potential vulnerabilities and provide code suggestions to prevent them from being exploited.
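
As one illustration of streamlining log analysis, the sketch below pre-filters authentication logs locally and forwards only the suspicious subset to the model, which also limits what data leaves your environment. It assumes the official openai Python SDK (v1+); the log lines and failure pattern are illustrative:

```python
# Minimal sketch: local pre-filtering of auth logs, then LLM-assisted triage
# of only the suspicious lines. Log lines and the regex are illustrative.
import re
from openai import OpenAI

logs = [
    "Jan 10 03:12:01 sshd: Accepted publickey for deploy from 10.0.0.5",
    "Jan 10 03:12:44 sshd: Failed password for root from 198.51.100.7",
    "Jan 10 03:12:45 sshd: Failed password for root from 198.51.100.7",
]

# Keep routine traffic local; forward only authentication failures.
suspicious = [line for line in logs if re.search(r"Failed password", line)]

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Triage these auth failures. Do they suggest a brute-"
                   "force attempt? Suggest one next investigative step.\n"
                   + "\n".join(suspicious),
    }],
)
print(reply.choices[0].message.content)
```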

Bridging the gap in cybersecurity expertise

Just as generative AI helps threat actors write malware, it can also help security professionals close their skills gaps. Thanks to its natural language processing capabilities, ChatGPT lets you search for threats in your environment even without knowing the exact query syntax, as sketched below. Security analysts can also use generative AI to synthesize data from multiple sources into clear, actionable insights.
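
Here is a minimal sketch of that natural-language search idea: translating a plain-English hunt question into a SIEM query. It assumes the official openai Python SDK (v1+); the target query language (KQL) and the DeviceEvents table are illustrative, and any generated query should be reviewed before it is run:

```python
# Minimal sketch: plain-English question -> SIEM query, so analysts don't
# need to memorize exact syntax. KQL and the table name are illustrative;
# always review generated queries before executing them.
from openai import OpenAI

client = OpenAI()

question = ("Show me hosts where a new scheduled task was created in the "
            "last 24 hours by a non-admin account")

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Translate the analyst's question into a single KQL "
                    "query against the DeviceEvents table. Return only "
                    "the query."},
        {"role": "user", "content": question},
    ],
)
print(reply.choices[0].message.content)  # review before running in the SIEM
```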

How to Mitigate Cyber Risks Associated with ChatGPT Adoption

ChatGPT may revolutionize work and empower both policyholders and cyber insurance companies to be more efficient. However, you're also likely to see more cyber insurance claims associated with cyber risks stemming from malicious ChatGPT use.

AI considerations aren't new for cyber insurance. But as the technology continues to evolve, insurance policies will need to adapt to evolving AI regulation and the liability issues associated with LLMs. Given the expanding cyber risks of generative AI, it's important to set comprehensive controls that allow for the safe use of LLMs without compromising your cyber insurance coverage or the privacy of your company's and your clients' sensitive data.

First, paid enterprise versions of ChatGPT are an essential stepping stone to addressing data privacy concerns. These business licenses give you more control over how your data is used, stored, and deleted. Paid plans also provide access to OpenAI's API, which offers greater transparency and programmatic control over how prompts are submitted.

To further ensure the safety of your sensitive data, consider using secure gateways to access these paid versions of ChatGPT. A gateway would allow users to access the LLM by using verifiable tokens. This additional step can help mitigate the data privacy risks by allowing you to control what data leaves your company's servers and ensure that any sensitive information is encrypted.
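
To make the gateway idea concrete, here is a minimal sketch of an internal proxy that checks a token and redacts obvious PII before a prompt leaves the network. Flask, the token set, and the redaction regexes are illustrative assumptions; a production gateway would add TLS, audit logging, and far richer redaction and encryption:

```python
# Minimal sketch of an LLM gateway: verify a token, redact obvious PII,
# then forward the prompt. Flask, the token, and the regexes are
# illustrative; this is not a production-ready control.
import re
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()
VALID_TOKENS = {"example-team-token"}  # hypothetical; issue per team or user

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Mask e-mail addresses and SSN-shaped strings before forwarding."""
    return SSN.sub("[REDACTED-SSN]", EMAIL.sub("[REDACTED-EMAIL]", text))

@app.post("/chat")
def chat():
    if request.headers.get("X-Gateway-Token") not in VALID_TOKENS:
        return jsonify(error="invalid token"), 401
    body = request.get_json(silent=True) or {}
    prompt = redact(body.get("prompt", ""))
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify(answer=reply.choices[0].message.content)

if __name__ == "__main__":
    app.run(port=8080)  # internal-only in practice
```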

Leveraging your cybersecurity partners can help you safely integrate and operationalize the technology. Security professionals will also be an invaluable resource for guiding your team to incorporate the security capabilities of LLMs into your workflows. While ChatGPT may be effective at augmenting your in-house resources, the knowledge of seasoned cybersecurity professionals is required to rapidly identify the attackers, contain threats, and prevent operational disruption.

"There is a lot of fear out there around ChatGPT being able to create malware or certain attack types that are evasive to traditional antiviruses or endpoint detection methods. But one thing I have yet to see it do is to be able to come up with new attack tactics, techniques, and processes. It's not going to be able to come up with a brand new way that professionals aren't able to detect. "
- Greg Crowley, CISO at eSentire

It's important to note that ChatGPT's malware-writing capabilities are still limited to replicating existing code and known attack techniques. With a strong cybersecurity posture focused on cyber resilience, you will be able to anticipate and withstand cyberattacks created with ChatGPT. Insurers also need to emphasize to policyholders the importance of staying up to date with the latest developments in AI. As the technology progresses, insurers and policyholders alike will need to regularly reassess their security measures and policies to ensure they remain effective.

If your in-house team is not able to provide 24/7 threat detection, investigation, and response capabilities, consider outsourcing your security operations to a trusted vendor. A multi-signal Managed Detection and Response (MDR) provider will act as an expansion of your team to conduct 24/7 threat detection and containment and provide a complete response.

To learn more about how eSentire MDR can help you build a more resilient security operation, get in touch with an eSentire cybersecurity specialist.

eSentire

eSentire, Inc., the Authority in Managed Detection and Response (MDR), protects the critical data and applications of 2000+ organizations in 80+ countries, across 35 industries from known and unknown cyber threats by providing Exposure Management, Managed Detection and Response and Incident Response services designed to build an organization’s cyber resilience & prevent business disruption. Founded in 2001, eSentire protects the world’s most targeted organizations with 65% of its global base recognized as critical infrastructure, vital to economic health and stability. By combining open XDR platform technology, 24/7 threat hunting, and proven security operations leadership, eSentire's award-winning MDR services and team of experts help organizations anticipate, withstand and recover from cyberattacks. For more information, visit: www.esentire.com and follow @eSentire.
