Want to try it for yourself?
Artificial intelligence (AI) is undoubtedly changing the world as we know it. But while this emerging technology has many benefits, it also brings the possibility of AI cyber-attacks. Bad actors can use AI to increase the effectiveness of existing threats and create new attack vectors. To stay ahead of the curve, businesses must consider how to protect their systems against various AI attacks.
CISOs must consider both threats — the greater volume of offensive attacks driven by AI technology and the possibility of their own systems getting used against them. As businesses leverage AI for more critical infrastructure, it becomes increasingly important to defend against AI attacks.
Bad actors can leverage AI systems on their end of an offensive attack, enabling them to exploit organizations on a much larger scale. AI empowers attackers to create more convincing phishing and social engineering schemes, making staff members more likely to fall for these ploys.
Attackers use AI to enhance the following attack methods:
Phishing: Attackers can leverage Generative AI to create targeted phishing emails, making these emails more believable to victims.
Malware & vulnerability discovery: Malicious actors can also use AI to find system exploits and create malware. For instance, recent research proved that ChatGPT can generate dynamic, mutating versions of malicious code. This new technology lowers the bar for cyber attackers by enabling novices to carry out sophisticated exploits.
Social engineering: Attackers can use deep fake technology to generate audio and video of a familiar, trustworthy person, convincing targets that they are this person.
Along with enhancing existing attack vectors, AI allows bad actors to create new methods that pose unprecedented risk to today’s organizations. Some of these emerging threats include:
Training data poisoning (AI poisoning attacks)
Data privacy attacks
Model denial of service (Sponge attacks)
Check out our guide to the OWASP LLM Top 10 for a deep dive into AI risks.
A malicious actor executes this attack by strategically entering prompts into a large-language model (LLM) that leverages prompt-based learning. They use these strategic prompts to make the model perform malicious actions.
Evasion attacks trick ML models by altering the system’s input. Instead of tampering with the AI, these attacks tamper with incoming data to purposely cause a system error or evade defensive measures. For example, altering a stop sign's appearance could theoretically convince a self-driving car’s AI algorithm to ignore the sign or read it as something else (a turn sign, etc.)
Training data poisoning (AI poisoning attacks)
This type of attack manipulates the training set used by an AI model so that it will produce incorrect output, such as biases or inaccurate information. A poisoning attack usually targets AI models that leverage user data as part of their training sets.
To create a weaponized model, attackers write files in a data format used for model exchange, such as KERAS. These files often include executable, malicious code set to run at a specific point and work with a target machine or environment.
Data privacy attacks
In some cases, ML models leverage real-life user interactions as training data. If these users share confidential information with the AI as part of their interactions, they put their organization at risk. Because the ML model stores the exchange for training purposes, attackers can theoretically tap into this sensitive data when they enter the correct series of queries. A few Samsung engineers recently came under fire because they put proprietary information into ChatGPT, increasing the organization’s risk of a data privacy attack.
Model denial of service (sponge attacks)
These attacks are a type of distributed denial-of-service (DDoS) attack. Similarly to RegexDOS, the attacker formulates a prompt for an AI system that asks for an impossible or gigantic query. This prompt then exhausts the system's resources, racking up costs in computer resources for the model owner.
Attackers can also seek to steal proprietary AI models through traditional avenues, such as breaking into private source code repositories via phishing or password attacks. Symbolic AI is the most vulnerable to model theft, as it’s an “expert system” with set queries and responses based on answers. To exploit Symbolic AI, attackers only need to record all the possible answers to each question, then move along the “answer tree.”
In addition, a 2016 Cornell Tech study demonstrated that it’s possible to reverse engineer models through systemic queries, putting them at risk for model theft.
While the world of AI might seem intimidating, there are seven practical tactics that businesses can follow to start defending themselves against AI cyberattacks:
Use suitable AI models for your use case. If your AI models are precise and tailored to their intended purposes, it is less likely that these models will be exploitable.
Implement employee training on cyber risk awareness. Because AI techniques such as deepfake technology can make social engineering much more believable, employees should receive in-depth training on identifying and avoiding cyber risk.
Build out in-depth defense measures. To respond to emerging sophisticated threats—especially AI attacks—businesses must rely on more than one layer of security for their projects and applications.
Leverage “temperature flags.” These indicators define the level of creativity that a model output provides. Higher temperature makes it harder to get repeat outputs from AI models. Businesses should use temperature flags to walk the line between too little creativity and too much freedom and risk.
Secure your AI-based applications. While application security should permeate every area of app development, it’s crucial to focus on securing AI-based apps.
Snyk’s developer-first approach empowers teams to maintain overall application security best practices and protect their models from theft. In addition, we leverage AI to make application security best practices—such as code reviews—more precise and effective.
AI for cybersecurity is an emerging space, and our team constantly explores new features and capabilities for the Snyk platform. Follow our blog to learn more about our ongoing projects and stay up-to-date on our AI announcements.
Next in the series
Securing the software supply chain with AI
Discover how AI is both a threat and a solution for securing software supply chains. Learn about emerging AI attack vectors, AI-powered defenses, AIBOMs, and how Snyk can help.Keep reading