AI Attacks & Threats: What are they and how do they work?

Discover more about AI cyber-attacks and how to protect your business against them.

0 mins read

Artificial intelligence (AI) is undoubtedly changing the world as we know it. But while this emerging technology has many benefits, it also brings the possibility of AI cyber-attacks. Bad actors can use AI to increase the effectiveness of existing threats and create new attack vectors. To stay ahead of the curve, businesses must consider how to protect their systems against various AI attacks.

Why it's important for CISOs to be aware of AI attacks

CISOs must consider both threats — the greater volume of offensive attacks driven by AI technology and the possibility of their own systems getting used against them. As businesses leverage AI for more critical infrastructure, it becomes increasingly important to defend against AI attacks.

How AI is being used to enhance existing attack methods

Bad actors can leverage AI systems on their end of an offensive attack, enabling them to exploit organizations on a much larger scale. AI empowers attackers to create more convincing phishing and social engineering schemes, making staff members more likely to fall for these ploys. 

Attackers use AI to enhance the following attack methods:

  • Phishing: Attackers can leverage Generative AI to create targeted phishing emails, making these emails more believable to victims.

  • Malware & vulnerability discovery: Malicious actors can also use AI to find system exploits and create malware. For instance, recent research proved that ChatGPT can generate dynamic, mutating versions of malicious code. This new technology lowers the bar for cyber attackers by enabling novices to carry out sophisticated exploits. 

  • Social engineering: Attackers can use deep fake technology to generate audio and video of a familiar, trustworthy person, convincing targets that they are this person.

Examples of emerging AI-based attacks 

Along with enhancing existing attack vectors, AI allows bad actors to create new methods that pose unprecedented risk to today’s organizations. Some of these emerging threats include:

  • Prompt injection

  • Evasion attacks

  • Training data poisoning (AI poisoning attacks)

  • Weaponized models

  • Data privacy attacks

  • Model denial of service (Sponge attacks)

  • Model theft

Check out our guide to the OWASP LLM Top 10 for a deep dive into AI risks.

Prompt injection 

A malicious actor executes this attack by strategically entering prompts into a large-language model (LLM) that leverages prompt-based learning. They use these strategic prompts to make the model perform malicious actions. 

Evasion attacks 

Evasion attacks trick ML models by altering the system’s input. Instead of tampering with the AI, these attacks tamper with incoming data to purposely cause a system error or evade defensive measures. For example, altering a stop sign's appearance could theoretically convince a self-driving car’s AI algorithm to ignore the sign or read it as something else (a turn sign, etc.) 

Training data poisoning (AI poisoning attacks)

This type of attack manipulates the training set used by an AI model so that it will produce incorrect output, such as biases or inaccurate information. A poisoning attack usually targets AI models that leverage user data as part of their training sets.

Weaponized models 

To create a weaponized model, attackers write files in a data format used for model exchange, such as KERAS. These files often include executable, malicious code set to run at a specific point and work with a target machine or environment.

Data privacy attacks 

In some cases, ML models leverage real-life user interactions as training data. If these users share confidential information with the AI as part of their interactions, they put their organization at risk. Because the ML model stores the exchange for training purposes, attackers can theoretically tap into this sensitive data when they enter the correct series of queries. A few Samsung engineers recently came under fire because they put proprietary information into ChatGPT, increasing the organization’s risk of a data privacy attack.

Model denial of service (sponge attacks)

These attacks are a type of distributed denial-of-service (DDoS) attack. Similarly to RegexDOS, the attacker formulates a prompt for an AI system that asks for an impossible or gigantic query. This prompt then exhausts the system's resources, racking up costs in computer resources for the model owner.

Model theft 

Attackers can also seek to steal proprietary AI models through traditional avenues, such as breaking into private source code repositories via phishing or password attacks. Symbolic AI is the most vulnerable to model theft, as it’s an “expert system” with set queries and responses based on answers. To exploit Symbolic AI, attackers only need to record all the possible answers to each question, then move along the “answer tree.” 

In addition, a 2016 Cornell Tech study demonstrated that it’s possible to reverse engineer models through systemic queries, putting them at risk for model theft.

How to prevent AI attacks

While the world of AI might seem intimidating, there are seven practical tactics that businesses can follow to start defending themselves against AI cyberattacks: 

  1. Leverage AI for defense. Businesses can use AI tools for security best practices such as code analysis, knowledge base maintenance, and security posture modeling.

  2. Use suitable AI models for your use case. If your AI models are precise and tailored to their intended purposes, it is less likely that these models will be exploitable. This AI Glossary may guide you getting familiar with AI-related topics

  3. Implement employee training on cyber risk awareness. Because AI techniques such as deepfake technology can make social engineering much more believable, employees should receive in-depth training on identifying and avoiding cyber risk.

  4. Build out in-depth defense measures. To respond to emerging sophisticated threats—especially AI attacks—businesses must rely on more than one layer of security for their projects and applications.

  5. Leverage “temperature flags.” These indicators define the level of creativity that a model output provides. Higher temperature makes it harder to get repeat outputs from AI models. Businesses should use temperature flags to walk the line between too little creativity and too much freedom and risk.

  6. Secure your AI-based applications. While application security should permeate every area of app development, it’s crucial to focus on securing AI-based apps.

  7. Follow AI AppSec best practices.

How Snyk can help prevent AI attacks

Snyk’s developer-first approach empowers teams to maintain overall application security best practices and protect their models from theft. In addition, we leverage AI to make application security best practices—such as code reviews—more precise and effective. 

AI for cybersecurity is an emerging space, and our team constantly explores new features and capabilities for the Snyk platform. Follow our blog to learn more about our ongoing projects and stay up-to-date on our AI announcements.

Next in the series

Essential AI Tools to Boost Developer Productivity and Security

Explore the top AI coding and security assistants like GitHub Copilot, Snyk Code, and more that are revolutionizing development - code faster, more efficiently, and securely.

Keep reading
Patch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo SegmentPatch Logo Segment

Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.

Start freeBook a live demo

© 2024 Snyk Limited
Registered in England and Wales

logo-devseccon