OWASP AI Exchange: a practical, “one-stop” guide to securing AI (not just GenAI)
If you’re trying to secure AI systems and you’re tired of piecing together guidance from dozens of PDFs, blog posts, and vendor whitepapers, the OWASP AI Exchange exists to solve that problem.
It’s a living, open-source, community-maintained body of work that brings AI security and privacy into one coherent place. Importantly, it’s not limited to generative AI. The Exchange covers analytical, discriminative, generative, and heuristic AI systems—and even data-centric systems without models, such as data warehouses, BI pipelines, and reporting stacks.
The Exchange positions itself as a “go-to” resource because it’s not only educational guidance. It’s also actively aligned with international standardization efforts, including contributions that feed into EU AI regulation discussions and ISO/IEC work streams, giving it weight beyond a typical best-practices document.
What the Exchange actually gives you
At a practical level, the AI Exchange provides:
High-level threat overviews, including matrix-style navigation to explore the AI threat landscape.
Controls, spanning governance, engineering, and runtime considerations.
A structured risk assessment approach, covering threat identification, evaluation, treatment, and ongoing review.
Testing and privacy guidance, acknowledging that AI risk extends beyond classic security failures.
References and links to related OWASP initiatives and external standards.
Taken together, it’s designed to be something teams can work from — not just read once and set aside.
The core model: AI security = AI-specific threats + your existing security program
One of the most important ideas in the Exchange is straightforward: AI systems are still IT systems.
You don’t replace AppSec, infrastructure security, or cloud security when AI comes into play. You extend those programs to account for new AI-specific assets and attack surfaces.
That’s why the Exchange focuses on threats to assets such as:
Training and augmentation data
Model parameters
Prompts and other inputs
Outputs, including cases where output handling introduces downstream risk
This framing is useful because it avoids treating AI as a separate, abstract domain. Instead, it grounds AI security in established security fundamentals while clearly identifying what’s different; the short sketch below shows one way an existing asset inventory might be extended along these lines.
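To make the extension concrete, here is a minimal Python sketch of an asset inventory that records the Exchange’s AI-specific asset types alongside the ownership data any AppSec inventory already carries. The class and field names are hypothetical, invented for this example rather than taken from the Exchange.

```python
from dataclasses import dataclass
from enum import Enum

class AIAssetType(Enum):
    """AI-specific asset categories called out by the Exchange."""
    TRAINING_DATA = "training and augmentation data"
    MODEL_PARAMETERS = "model parameters"
    PROMPT_INPUT = "prompts and other inputs"
    MODEL_OUTPUT = "model outputs"

@dataclass
class InventoryEntry:
    """One row in an asset inventory, extended with an AI asset type."""
    name: str              # what the asset is
    owner: str             # accountable team, as in any AppSec inventory
    ai_asset: AIAssetType  # what makes this asset AI-specific

inventory = [
    InventoryEntry("support-bot system prompt", "platform-team", AIAssetType.PROMPT_INPUT),
    InventoryEntry("fraud-model weights (v7)", "ml-team", AIAssetType.MODEL_PARAMETERS),
]

for entry in inventory:
    print(f"{entry.name}: {entry.ai_asset.value} (owner: {entry.owner})")
```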
How the Exchange organizes AI threats:
Development-time threats: Risks introduced during data collection, model training, integration, or deployment, such as data poisoning, model supply-chain compromise, and environment weaknesses.
Threats through use: Inference-time attacks like prompt injection, evasion, extraction, and misuse of system capabilities.
Other runtime threats: Risks related to input exposure, output handling, and compromise of the surrounding infrastructure.
This structure helps teams reason about when controls need to apply, not just what the threat is; an illustrative phase-to-control mapping follows.
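One way to make the “when” dimension tangible is to key example threats and controls to each phase. The phase names below mirror the Exchange’s categories; the specific threat-to-control pairings are illustrative, not an official mapping.

```python
# Illustrative mapping from the Exchange's threat phases to example
# threats and the controls that must already be active in that phase.
LIFECYCLE = {
    "development-time": {
        "example_threats": ["data poisoning", "model supply-chain compromise"],
        "controls": ["dataset provenance checks", "signed model artifacts"],
    },
    "threats-through-use": {
        "example_threats": ["prompt injection", "evasion", "model extraction"],
        "controls": ["input filtering", "rate limiting", "output handling"],
    },
    "other-runtime": {
        "example_threats": ["input exposure", "infrastructure compromise"],
        "controls": ["encryption in transit", "hardened hosting environment"],
    },
}

for phase, detail in LIFECYCLE.items():
    print(f"{phase}: controls -> {', '.join(detail['controls'])}")
```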
What’s actually “new” in AI security
Compared to classic application security, the Exchange highlights several risk categories that are new, amplified, or fundamentally different in AI systems:
Prompt injection and indirect prompt injection, where models are influenced through natural language inputs.
Evasion and adversarial examples, particularly in classification and detection tasks.
Data and model poisoning, including supply-chain risks affecting models or datasets.
Extraction risks, such as training data leakage, membership inference, and model replication through querying.
Output-related risks, where model output can introduce downstream security issues if not handled safely (see the output-handling sketch after this list).
Over-reliance risk, where humans place undue trust in AI systems that can be manipulated or incorrect.
These risks don’t replace traditional vulnerabilities; they compound them.
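Output handling is a good place to see that compounding in action: a classic injection flaw becomes reachable through a new channel, the model. The sketch below treats model output like any other untrusted input and escapes it before rendering. It is a minimal illustration under that one assumption; a real output-handling strategy would also cover markdown rendering, link rewriting, and any path where output reaches an interpreter.

```python
import html

def render_model_output(raw_output: str) -> str:
    """Treat model output as untrusted: escape it before embedding in HTML."""
    return f"<div class='ai-answer'>{html.escape(raw_output)}</div>"

# A poisoned document or injected prompt can push markup through the model;
# escaping neutralizes it instead of handing it to the browser.
malicious = '<img src=x onerror="alert(document.cookie)">'
print(render_model_output(malicious))
```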
Agentic AI: why the risk curve steepens
The Exchange treats agentic systems as AI-enabled software systems, but acknowledges that additional characteristics increase risk:
Action: Agents can trigger functions or workflows, making least-privilege design critical (see the dispatcher sketch below).
Autonomy: State and memory introduce new attack surfaces.
Multi-system behavior: Logic implemented implicitly (for example, through prompts) can be fragile and easy to manipulate.
Emergent behavior: Increased complexity leads to less predictable interactions and failure modes.
As autonomy increases, small design weaknesses can have larger effects.
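As one concrete illustration of least-privilege action design, the hypothetical dispatcher below executes only tools that were explicitly registered, instead of interpreting arbitrary tool names from model output. The tool names and functions are invented for the example.

```python
from typing import Callable

# Explicit allowlist: the agent can reach only the tools registered here,
# each scoped to the narrowest action it needs.
TOOL_REGISTRY: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that adds a function to the allowlist under a fixed name."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOL_REGISTRY[name] = fn
        return fn
    return register

@tool("lookup_order")
def lookup_order(order_id: str) -> str:
    return f"status for {order_id}: shipped"  # read-only by design

def dispatch(tool_name: str, **kwargs: str) -> str:
    """Reject anything outside the allowlist rather than trusting model text."""
    if tool_name not in TOOL_REGISTRY:
        raise PermissionError(f"tool not allowed: {tool_name}")
    return TOOL_REGISTRY[tool_name](**kwargs)

print(dispatch("lookup_order", order_id="A-1042"))
# dispatch("delete_customer", customer_id="42") would raise PermissionError.
```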
Controls: defense-in-depth and blast-radius thinking
Rather than prescribing a single solution, the Exchange emphasizes layered controls and impact limitation.
Governance controls: AI security is treated as an organizational capability, not a one-time tool decision. Inventory, ownership, oversight, policy, education, and compliance alignment are foundational to preventing unmanaged or “shadow” AI.
Conventional security controls still matter: Secure infrastructure, access control, monitoring, rate limiting, and SDLC controls apply across the full AI system, including AI-specific assets (a rate-limiting sketch follows this list).
AI-engineering controls: Where AI differs, the Exchange highlights data and model engineering defenses against poisoning and robustness issues, along with runtime input and output handling to detect suspicious or unsafe behavior.
Impact limitation and low-trust assumptions: The Exchange encourages minimizing sensitive data exposure, constraining privileges, adding guardrails, and assuming that AI components can behave unexpectedly or be manipulated.
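Rate limiting is a conventional control with an AI-specific payoff: extraction, membership inference, and model replication all depend on high query volumes. Below is a minimal token-bucket sketch for throttling a model endpoint per client; the class and parameter values are illustrative.

```python
import time

class TokenBucket:
    """Per-client token bucket: steady refill rate, bounded burst size."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=2.0, burst=5)
admitted = sum(bucket.allow() for _ in range(20))
print(f"{admitted} of 20 rapid-fire queries admitted")
```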
The G.U.A.R.D. starter plan
To make this concrete, the Exchange proposes a simple five-step organizing framework:
Govern: Assign responsibility, define policy, educate teams, and align with compliance needs.
Understand: Train engineers and security teams on AI-specific threats and controls.
Adapt: Update threat modeling, testing, SDLC practices, supply-chain reviews, and asset inventories.
Reduce: Minimize sensitive data exposure, constrain model behavior, and limit blast radius (see the redaction sketch below).
Demonstrate: Produce evidence, documentation, and transparency for stakeholders and regulators.
It’s intentionally pragmatic and designed to help teams get started without boiling the ocean.
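As one illustration of the Reduce step, the sketch below strips obvious PII from a prompt before it leaves for a hosted model. The two regexes are deliberately simplistic placeholders; a production deployment would lean on a vetted PII-detection library and coverage tests.

```python
import re

# Placeholder patterns for the example; real redaction needs broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with a labeled token before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Customer jane@example.com, SSN 123-45-6789, reports a billing error."))
# -> Customer [EMAIL], SSN [SSN], reports a billing error.
```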
How to use the Exchange on a real project
In practice, teams can apply the Exchange by:
Starting with the risk analysis decision tree to determine which threat categories apply (for example: GenAI vs. non-GenAI, RAG usage, hosted vs. self-managed models, sensitive inputs, action-triggering outputs).
Mapping relevant threats to controls using the AI security matrix or periodic-table navigation.
Deciding on risk treatment—mitigate, transfer, avoid, or accept—and recording ownership in a risk register (sketched in code after this list).
Verifying shared responsibility with vendors (model providers, hosting platforms, plugins, and tools).
Testing and monitoring continuously, recognizing that models, threats, and usage patterns evolve.
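Those treatment decisions are easiest to keep honest when they live in a structured register rather than a slide deck. The sketch below shows one hypothetical shape for a register entry; the field names are invented for this example, not prescribed by the Exchange.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"
    ACCEPT = "accept"

@dataclass
class RiskRegisterEntry:
    threat: str          # threat identified via the decision tree
    controls: list[str]  # controls mapped via the matrix navigation
    treatment: Treatment # the recorded risk-treatment decision
    owner: str           # named owner, so the decision stays accountable
    review_by: str       # re-evaluation date, since threats and usage evolve

register = [
    RiskRegisterEntry(
        threat="indirect prompt injection via RAG documents",
        controls=["input sanitization", "output handling", "least-privilege tools"],
        treatment=Treatment.MITIGATE,
        owner="appsec-team",
        review_by="2026-06-01",
    ),
]

for entry in register:
    print(f"{entry.threat} -> {entry.treatment.value} (owner: {entry.owner})")
```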
Agentic Security Engineering with Snyk
AI security does not replace application security—it builds on it. The OWASP AI Exchange reinforces that AI systems are still software systems, and effective AI security depends on the same fundamentals: visibility, prioritization, prevention, and remediation embedded in the SDLC.
Snyk’s AI Security Platform is built on a mature AppSec foundation, combining AI-powered code analysis, contextual remediation, and an industry-leading vulnerability database. This foundation enables organizations to extend existing AppSec programs into AI-native development, rather than managing AI risk as a separate or downstream concern.
Foundational aspects of AI Security Engineering:
From defined risk to coordinated action (Evo by Snyk): As AI systems become more agentic and autonomous, security workflows themselves must adapt. Evo by Snyk provides agentic security orchestration for AI-native applications, translating AI threat models and high-level security goals—such as discovery, testing, policy enforcement, and response—into coordinated actions executed across tools, pipelines, and environments. This approach aligns directly with the OWASP AI Exchange’s emphasis on defense in depth, blast-radius reduction, and continuous monitoring in systems where human-in-the-loop controls no longer scale.
Prevention at inception in AI-driven development (Snyk Studio): Many AI risks originate at creation time, when developers accept AI-generated code or wire agents into workflows. Snyk Studio embeds real-time guardrails directly into AI-assisted development, intercepting insecure patterns before they enter the codebase. This supports the OWASP AI Exchange’s core principle that AI security extends existing AppSec practices—shifting prevention earlier rather than relying on downstream detection.
Visibility, governance, and testing across the AI surface: The Exchange stresses that teams cannot secure what they cannot see. Snyk extends asset discovery and governance into the AI domain, helping organizations identify and track AI components, including models, servers, and integrations, as part of their broader software inventory. Capabilities emerging from Snyk Labs, such as AI-BOM and MCP-based scanning, support the Exchange’s focus on supply-chain risk, shared responsibility, and evidence-driven governance. These signals feed continuous testing, prioritization, and policy enforcement rather than one-time assessments.
In short, where the OWASP AI Exchange provides the map, Snyk provides the operating system that helps teams follow it: continuously, at scale, and at machine speed.
Ready to turn AI security guidance into actionable risk management? Download When AI Goes Off-Script: Managing Non-Deterministic Risk to see how to operationalize frameworks like the OWASP AI Exchange in real-world AI programs.
Compete in Fetch the Flag 2026!
Test your security skills in our Capture the Flag event, running from 12 PM ET on February 12 to 12 PM ET on February 13, 2026.