Burger menu
-


Generative AI in Cybersecurity: Opportunities and Threats

In modern cybersecurity, Generative AI is viewed as a potent tool for spoofing, impersonation, and deception

Joseph Weizenbaum demonstrating the first chatbot Eliza in history
Joseph Weizenbaum demonstrating the first chatbot Eliza in history

Generative AI (GenAI) traces its origins to 1966 with the introduction of the first chatbot. The second-oldest chatbot in history named PARRY saw the light only in 1972 — it was designed to emulate thinking patterns of a patient with paranoid schizophrenia.

But it wasn’t until 2014 when the full-blown version of GenAI was introduced in a study Generative Adversarial Nets (GANs) by Ian Goodfellow and his colleagues. The work paved the way for the GAN models that were capable of synthesizing content across all media: pictures, videos, and audio.

GenAI entered a new stage in 2022 when ChatGPT was launched by OpenAI. While enjoying viral success, the Generative Pre-Trained Transformer model demonstrated that believable content or specialized computer code can be created by anyone within minutes. According to a report there was a 135% spike in spam emails of improved English writing quality in April 2023.

GenAI and ChatGPT roadmap in cybersecurity
GenAI and ChatGPT roadmap in cybersecurity

Generative AI in Cyber Offense

Experts outline several attack scenarios featuring textual GenAI:

  • Illegal data extraction. ChatGPT and other Large Language Models (LLMs) like Google Bard can be requested to gather illegal, unethical, or private info with a jailbreak maneuver — command, such as 'Do Anything Now' (DAN), that instructs a bot to bypass its rules.
  • Social engineering. An attack can be artfully disguised as a message coming from a legitimate entity, prompting a victim to perform a certain action. 
  • Phishing. A highly personalized attack aimed at a specific person or entity. Like social engineering, it employs mimicking a certain tone and language that are familiar to a victim. However, phishing considers subtle details evoking a sense of trust.
  • Automated hacking. LLMs are also capable of generating code, which can be used for automating hacking attacks and scanning software’s code for vulnerabilities. 
  • Payload generation. Payload is a command that initiates an unauthorized task: harvesting information, deleting data, or opening doors for malware or other cyberattacks.
  • Malware generation. Finally, it’s possible to create various types of malware without any knowledge of coding: ransomware, virus, adware, ‘CPU killers’, and polymorphic viruses.

Additionally, generative models can be used for scripting fabricated media. Even though it doesn’t challenge cybersecurity directly, it still involves anti-spoofing as the only effective countermeasure.

Sample of a ransomware NotPetya code delivered by ChatGPT
Sample of a ransomware NotPetya code delivered by ChatGPT

Generative AI in Cyber Defense

At the same time, GenAI can be successfully used for averting cybersecurity threats, including some of those mentioned above:

  • Defense automation. It is suggested that ChatGPT can be used to relieve workload that a Security Operations Center (SOC) would typically receive, especially if it lacks qualified employees. It can also efficiently scan server logs to detect anomalies, which signalize that an attack is in progress.
  • Security reports. When granted access to an entity’s data, a GenAI can effectively scrutinize cybersecurity-related data and provide detailed, automatically generated reports.
  • Threat detection. An LLM model is able to formulate another comprehensive report, based on both internal and external data — news, social media, online forums, and so on. It will help detect the latest attack scenarios and figure out their patterns.    
  • Providing secure code. A generative tool can also be used for attesting secure code and even writing it.
  • Raising awareness. Using the collected data, GenAI can also formulate accurate descriptions of cyber-attacks, their algorithms, typology, and other nuances. Besides, this can help train entry-level specialists.
  • Alert/response regime. A ChatGPT-like solution can complement a security system and deliver urgent notifications as soon as an attack has been registered by a security system. Additionally, it can deliver an adequate security response immediately, as well as providing incident response guidelines to the human staff.
  • Detecting malware. By training on a malware dataset — the one containing signatures and snippets of the malicious code — a GenAI model can detect malware’s typical modus operandi and differentiate its types: Trojans, viruses, ransomware, etc.

It's noteworthy that AI text-generators can formulate ethical guidelines based on regulatory documents, such as the IEEE’s Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (see here).

Schematic representation of a chatbot’s work
Schematic representation of a chatbot’s work

Solutions and Frameworks

A constellation of Gen AI-based solutions has been proposed, among which are Google Cloud Security AI Workbench, CrowdStrike Charlotte AI, Synthesis Humans, Microsoft Security Copilot, and others. Here is a brief review of some promising frameworks:

ThreatGPT

Developed by Airgap Networks, ThreatGPT is based on a Zero Trust paradigm, in which any actors — neither outside nor inside — can be trusted, unless their identity is cautiously checked. 

ThreatGPT merges traditional perimeter firewall infrastructure with agentless microsegmentation — an approach, which separates a single data center into a group of smaller security segments, while all traffic is controlled by a third party. 

For security purposes, it employs a GPT-3 model to detect potential threats through natural language queries and graph databases to analyze traffic relationships and anomalies.

ChatIDS

Information flow in ChatIDS
Information flow in ChatIDS

IDS stands for Intrusion Detection System. The purpose of ChatIDS is to provide accurate real-time alerts coupled with explanations and instructions that a regular end-user with no special training or education can understand. For this an LLM is used, which translates technical signals into easy-to-understand messages. They are secured with a three-way anonymization, which, in turn, protects the user’s identity.

SecureFalcon

SecureFalcon is proposed as an LLM-based solution, which detects vulnerable C code elements, differentiates them from non-vulnerable, and also serves as a repair tool. It’s based on the FalconLLM40B model, training of which included 3D parallelism strategy.

Defense architecture used in SecureFalcon
Defense architecture used in SecureFalcon

Cyber Kill Chain

The Cyber Kill Chain framework
The Cyber Kill Chain framework

Cyber Kill Chain (CKC) is a framework developed at Lockheed Martin. Inspired by a quote from Sun Tzu’s “The Art of War”, it focuses on understanding the attacker’s motivation and dissecting the attack algorithms. According to CKC, an attack typically includes the following steps:

  • Reconnaissance. Researching targets or victims.
  • Weaponization. Selecting attack vectors and tools.
  • Delivery. ‘Transporting’ weapons to the assault area.
  • Exploitation. Stage, at which vulnerabilities are exploited.
  • Installation. Installing malware.
  • Command & control. Installed malware allows to puppeteer the targeted system.
  • Action on objectives. Attackers get to achieving their goals.

As a countermeasure, CKC suggests Detection, Deception, Adversarial Training, Red & Blue Teaming, Explainable AI (XAI), User Training & Awareness, and Streamlining Existing Security Strategies

Defense tools recommended for accompanying Cyber Kill Chain
Defense tools recommended for accompanying Cyber Kill Chain

Large Language Models in Cybersecurity

A number of solutions based on LLMs offer unconventional tactics to enhance cybersecurity. One of the examples is the SecurityScorecard platform based on GPT-4. Basically, it provides a consulting service necessary for risk management. It evaluates the defense levels of the vendors that an entity is in partnership with and can answer various questions regarding their cybersecurity. 

OpenAI's classifier can automatically detect and watermark AI-generated text, such as in business emails. However, its current reliability is rather low with only 26% of true positives being identified (the false positive rate is 9%.)

Attacks on ChatGPT

Since the introduction of ChatGPT, various techniques have been devised to exploit it for malicious purposes. Known as jailbreaks, they can be characterized as injection attacks to an extent. Perhaps, the most famous example is the Do Anything Now request, which ‘liberates’ ChatGPT from its constraints, allowing it to create unethical, dangerous, or deceitful content.

Jailbreak maneuvers allowing to extract private info from ChatGPT
Jailbreak maneuvers allowing to extract private info from ChatGPT

Another example is the SWITCH method, which prompts the model to assume an ‘opposite persona’, so it can step away from the limitations set by the developers and fulfil a ‘forbidden’ request. Finally, a maneuver known as Character Play implies that ChatGPT will have to follow any orders when playing a certain role that a user commands it to play.

References

  1. Joseph Weizenbaum demonstrating the first chatbot Eliza in history
  2. From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy
  3. ChatIDS: Explainable Cybersecurity Using Generative AI
  4. SecureFalcon: The Next Cyber Reasoning System for Cyber Security
  5. Impacts and Risk of Generative AI Technology on Cyber Defense
  6. Multi-step Jailbreaking Privacy Attacks on ChatGPT
  7. ELIZA: a very basic Rogerian psychotherapist chatbot
  8. PARRY: The AI Chatbot From 1972
  9. Generative Adversarial Nets
  10. Major Upgrade to Darktrace/Email™ Product Defends Organizations Against Evolving Cyber Threat Landscape, Including Generative AI Business Email Compromises and Novel Social Engineering Attacks
  11. Tricking ChatGPT: Do Anything Now Prompt Injection
  12. What are Polymorphic Viruses?
  13. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
  14. Supercharge security with AI
  15. Introducing Charlotte AI, CrowdStrike’s Generative AI Security Analyst: Ushering in the Future of AI-Powered Cybersecurity
  16. Synthesis Humans
  17. Introducing Microsoft Security Copilot
  18. Airgap Networks Launches ThreatGPT to Protect Operational Technology Environments with AI/ML Technologies
  19. Zero trust security model, From Wikipedia
  20. FalconLLM40B model
  21. Model Parallelism
  22. Cyber Kill Chain
  23. SecurityScorecard
  24. New AI classifier for indicating AI-written text
Avatar Antispoofing

1 Followers

Editors at Antispoofing Wiki thoroughly review all featured materials before publishing to ensure accuracy and relevance.

Contents

Hide