- Adversarial AI
The use of malicious inputs or techniques to deceive, manipulate, or exploit AI models. In cybersecurity, adversarial AI can bypass defenses, cause misclassification, or degrade system performance.
AI Glossary
Every AI security professional needs a trusted reference. Snyk’s AI glossary eliminates the uncertainty and equips you with the essential terms you need, from AI threats and defenses to operational frameworks.
AI security moves fast, our glossary keeps up—regularly updated to give you accurate, up-to-date definitions. Whether you’re assessing risks, evaluating safeguards, or designing secure AI systems, your definitional guide to AI security starts here.

No results found
- Agentic AI
An AI system capable of autonomous decision-making and taking actions to achieve defined goals without constant human intervention. In cybersecurity, agentic AI can automate complex defense or attack strategies.
- AI agent
A software entity powered by artificial intelligence that performs tasks on behalf of a user or system. AI agents in security contexts can monitor threats, respond to incidents, or execute automated workflows.
- AI Attacks
Cyberattacks enhanced or automated by AI to increase speed, scale, and sophistication, including adaptive phishing, automated exploitation, and evasion.
- AI Bias
In AI, bias refers to systemic errors that produce unfair outcomes. Fairness is the mitigation of these biases to ensure equitable and ethical AI performance.
- AI Bill of Rights
A policy framework outlining principles to protect individuals from harmful AI use. It addresses privacy, discrimination, transparency, and accountability in AI systems.
- AI Cloud Security
The protection of AI workloads, models, and data hosted in cloud environments. It combines cloud security practices with AI-specific safeguards against data leakage, model theft, and adversarial attacks.
- AI Code Generation
The automated creation of source code using AI models like LLMs. In cybersecurity, it speeds development but also requires security reviews to prevent vulnerabilities.
- AI Compliance
The adherence of AI systems to relevant laws, regulations, and industry standards. In cybersecurity, this includes data protection, auditability, and governance frameworks for responsible AI use.
- AI Explainability
The ability to interpret and understand how an AI model makes decisions. Essential for trust, compliance, and debugging in security systems.
- AI Fairness
The principle of ensuring that AI systems produce unbiased and equitable outcomes across all user groups. In cybersecurity, AI fairness reduces the risk of discriminatory threat assessments or access controls, supporting compliance and ethical AI deployment.
- AI Hallucinations
When an AI model generates false or misleading outputs that appear plausible. In security, hallucinations can lead to misinformation or faulty automation.
- AI Inference
The process of using a trained AI model to generate predictions, classifications, or responses based on new input data. In cybersecurity, inference powers real-time threat detection, anomaly identification, and automated decision-making without retraining the model.
- AI Intrusion
Unauthorized access, manipulation, or disruption of AI systems, models, or data pipelines. Can lead to compromised outputs, service outages, or exposure of sensitive information.
- AI Jailbreaks
A method to bypass an AI system’s safety constraints, enabling it to produce restricted or harmful outputs. A critical concern for AI security testing.
- AI Model
The algorithmic structure trained on data to perform tasks such as detection, classification, or generation. Security experts evaluate models for robustness and resistance to attacks.
- AI Red Teaming
A proactive security exercise that simulates adversarial attacks against AI systems to identify vulnerabilities and improve resilience.
- AI Risk Assessment & Management
The process of identifying, evaluating, and mitigating risks related to AI systems. In cybersecurity, it covers model robustness, supply chain vulnerabilities, and compliance risks.
- AI SecOps
The integration of AI into security operations to automate threat detection, incident response, and SOC workflows. Enhances speed and accuracy in cybersecurity defense.
- AI Security Guardrails
Predefined safety and compliance boundaries that prevent AI from generating unsafe or unauthorized outputs. Essential in enterprise AI deployments.
- AI Threat Hunting
The use of AI tools to proactively search for hidden cyber threats across networks and systems, often detecting patterns missed by human analysts.The use of AI tools to proactively search for hidden cyber threats across networks and systems, often detecting patterns missed by human analysts.
- AI Transparency
The practice of making AI system operations, decision-making processes, and limitations understandable to stakeholders. In cybersecurity, transparency supports trust, auditing, and regulatory compliance.
- AI TRiSM
Short for Trust, Risk, and Security Management in AI. A governance approach to ensure AI systems are safe, reliable, and compliant.
- AI Trust
The confidence stakeholders have in an AI system’s reliability, fairness, and security. Built through transparency, testing, and responsible use.
- AIBOMs (AI Bill of Materials)
AI Bill of Materials. A detailed inventory of AI components, datasets, and dependencies. Helps track provenance and manage AI supply chain risks.
- AISPM (AI Security Posture Management)
The continuous assessment and improvement of an organization’s AI-related security posture, including model governance and threat monitoring.
- AWS AI Security
Amazon Web Services’ tools and frameworks for securing AI workloads, models, and data. Includes threat detection, compliance controls, and AI service protection.
- Chat Memory Manipulation
A technique where an attacker or user alters an AI system’s stored conversation history to influence future responses or behaviors. It can be exploited to bypass safeguards, inject malicious instructions, or distort an AI agent’s decision-making
- Chatbot
An AI-powered conversational interface that interacts with users via text or voice. In cybersecurity, chatbots can automate support and incident triage.
- Compound AI Systems
Integrated AI architectures combining multiple models or agents to perform complex tasks. In cybersecurity, they enable multi-layered defense strategies.
- Context Engineering
The process of designing and structuring the context given to AI systems to optimize their outputs. Crucial for accurate cybersecurity automation.
- Cryptojacking
Unauthorized use of a system’s resources to mine cryptocurrency, often through malware or browser exploits. AI can both detect and be exploited for such attacks.
- Dark AI
Malicious or unethical use of AI for cybercrime, espionage, or disinformation campaigns. Increasingly relevant in threat intelligence.
- Data Exfiltration
Unauthorized transfer of data from a system, often by malicious insiders or external attackers. AI can detect anomalies indicating exfiltration attempts.
- Data Mining
The process of discovering patterns and insights from large datasets. In cybersecurity, data mining can reveal hidden threats or user behavior anomalies.
- Data Poisoning
A cyberattack strategy that undermines AI model performance by deliberately corrupting datasets through injecting malicious data, mislabeling, or other means.
- Data Security in AI
The safeguarding of data used in AI systems across its lifecycle, including collection, storage, training, and inference. Protects against breaches, leaks, and data poisoning attacks.
- Deep Learning
A subset of machine learning using neural networks with many layers to model complex data patterns. Powers many AI-driven cybersecurity tools.
- DSPM (Data Security Posture Management)
The process of continuously monitoring and improving the security posture of sensitive data across systems and AI workloads.
- Edge AI
AI processing that is performed locally on devices rather than in the cloud, reducing latency and enhancing data privacy in cybersecurity applications.
- ETL (Extract, Transform, Load)
A data integration process that collects, cleans, and moves data into storage or analytics systems. Vital for preparing AI security datasets.
- Evals
Short for evaluations, these are systematic tests to measure AI system performance, safety, and reliability under varied scenarios.
- Explainability
The ability to interpret and understand how an AI model makes decisions. Essential for trust, compliance, and debugging in security systems.
- Fine Tuning
The process of adapting a pre-trained AI model to a specific task or domain. In security, fine-tuning improves detection accuracy for targeted threats.
- Generative AI
AI models that create new content, such as text, code, or images. Cybersecurity uses include threat simulation and automated phishing generation.
- GRC (Governance, Risk & Compliance)
A framework for managing an organization’s governance, risk management, and regulatory compliance, now extended to AI systems.
- Human-in-the-loop
This term refers to human involvement in machine learning processes and AI workflows. Human experts provide insights, expertise, and judgment to
- Hybrid AI
Combines multiple AI approaches—such as symbolic reasoning and machine learning—for better accuracy and adaptability in cybersecurity.
- IDP (internal Developer Portal)
A centralized hub for developers to access tools, documentation, and services. In security, it ensures controlled access to AI development resources.
- LangChain
An AI framework for building applications powered by large language models with memory, chaining, and context control capabilities.
- LLM (Large Language Model)
A machine learning model trained on massive datasets to understand and generate human-like text. Used in code generation, analysis, and chatbots.
- LLMJacking
The unauthorized hijacking or exploitation of large language models, often to inject malicious prompts, steal intellectual property, or manipulate outputs for cyberattacks.
- LLMOps
The operational discipline of deploying, monitoring, and maintaining large language models in production environments.
- MCP (Model Context Protocol)
The Model Context Protocol is an open standard that allows developers to create secure, bidirectional connections between AI-powered tools and their data sources.
- MITRE ATLAS
The MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems, a knowledge base of adversary tactics, techniques, and case studies for AI systems. Guides security teams in defending AI assets.
- ML (Machine Learning)
A subset of AI that uses algorithms to learn from data and make predictions or decisions without explicit programming.
- MLOps
Practices for managing the lifecycle of machine learning models, from training to deployment, ensuring reliability and scalability.
- MLSecOps
The integration of machine learning, security, and operations practices to safeguard AI systems throughout their lifecycle.
- Model Context Prompt (MCP)
An open standard introduced by Anthropic in late 2024 that standardizes how AI models connect with external data sources and tools, enabling greater interoperability between LLMs and external systems.
- Model Theft
The unauthorized extraction or replication of a proprietary AI model, often for competitive or malicious purposes.
- Neural Network
A computational model inspired by the human brain, used in deep learning to process complex data relationships.
- NLP (Natural Language Processing)
AI techniques that allow computers to understand, interpret, and generate human language. Used in chatbots, threat detection, and intelligence gathering.
- Non-Deterministic AI
AI systems whose outputs can vary for the same input due to probabilistic processes, requiring special handling in security contexts.
- Package Hallucinations
When an AI system fabricates the existence of software packages or components, leading to potential supply chain risks.
- Pervasive AI
The integration of AI capabilities across all aspects of an organization’s operations, tools, and infrastructure. In cybersecurity, pervasive AI enables continuous, automated defense at scale.
- Predictive AI
AI systems designed to forecast outcomes or behaviors based on historical data patterns. Used in threat prediction and risk assessment.
- Prompt Engineering
The practice of crafting effective prompts to guide AI model outputs, optimizing performance for security use cases.
- Promt Injection
A malicious attempt to manipulate an AI system’s behavior by embedding harmful instructions into input prompts.
- RAG (Retrieval-Augmented Generation)
An AI approach that combines language generation with external knowledge retrieval for more accurate and context-rich answers.
- Responsible AI
The development and deployment of AI systems that are ethical, fair, transparent, and secure. In cybersecurity, responsible AI practices reduce bias, ensure compliance, and minimize misuse risks.
- Shadow AI
The unapproved use of AI tools and models within an organization, creating compliance and security risks.
- Slopsquatting
The malicious registration of packages with names similar to legitimate ones to trick developers into installing them.
- Training Data
The dataset used to teach an AI model how to perform a task. In security, its quality directly impacts detection accuracy.
- Vibe coding
A new approach to software development where developers create code through AI-powered platforms simply by describing desired outcomes in natural language, without requiring manual coding or prior programming knowledge.