The New Threat Landscape: AI-Native Apps and Agentic Workflows

Snyk Team
June 17, 2025
0 mins readBusinesses are moving beyond AI experiments and proofs of concept. As we approach what IDC is predicting will be the “AI pivot years” of 2025-2026, organizations are prioritizing, planning, and building for scale. This shift includes AI agents — self-directed tools that automate tasks — as technology providers strive to simplify development workflows.
Under the surface, AI systems expose an expanded threat landscape that spans the software development lifecycle (SDLC). The evolution leading to more agents is exciting. Still, the lack of transparency and automated workflows (including engaging with other agents) increases the risk of security gaps without clear visibility or policy enforcement.

New threats that exploit unique AI behaviors:
Data poisoning — A type of cyber attack that manipulates the training data used to develop AI models. Outcomes range from bias and hallucinations to backdoor access and compromised models.
Prompt injection — A dangerous tactic that tricks the model into ignoring guardrails or misinterpreting its instructions, often exposing sensitive data. Open source vulnerabilities, already explored in our earlier post, remain a major concern. LLMs and related components are emerging almost daily, each bringing potential new risks. Like traditional open source elements, scanning and validating each before using them in your build is essential to reducing the surface area for possible vulnerabilities.
Federated identity gaps — AI systems often struggle with consistent authentication, authorization, and user identification, which may inadvertently expose sensitive data or risk unauthorized access. While not designed as a federated identity solution, Anthropic’s Model Context Protocol (MCP) helps standardize communication between AI agents and enterprise systems. While still new, it’s quickly becoming an industry standard.
Even organizations without custom-trained models are likely already running — or soon will be running — AI in production environments. According to Gartner, by 2026, more than 80% of independent software vendors will have embedded generative AI capabilities in their enterprise applications. AI agents are quickly following, adding new layers of complexity and risk. By 2028, Gartner predicts that 33% of enterprise software applications will include agentic AI. This vendor-driven race to deliver AI capabilities often leads to rushed releases with immature security controls.
The adoption with intent to scale, plus the inclusion of agentic AI, demands a new approach to AI application risk management and governance. According to Gartner, most enterprises have yet to implement Al TRiSM (trust, risk, and security management) technical controls or policies.
This delay stems from the unpredictable nature of AI systems. Their behavior shifts based on inputs, as data and prompts heavily influence behavior and outputs, meaning there’s no consistent baseline to monitor against. So, while AI systems have more flexibility to solve problems, it is more difficult to detect abnormal behavior that might indicate the presence of a vulnerability, threat, or non-compliance with data privacy regulations. While most application security and governance tools might be sufficient for existing environments, the opaqueness of AI environments may limit their effectiveness.
What remains constant is the value of shifting security left, embedding it early into AI workflows. Snyk’s AI Trust Platform empowers developer-first security by reducing risk from AI code tools and enabling safe, scalable AI-native applications. This includes deep visibility into AI components, including your AI BOM and critical dependencies, and ensures secure integration workflows through the Snyk MCP Server.
Want to dive deeper into securing AI-native applications? Download the white paper Building AI Trust: Securing Code in the Age of Autonomous Development to learn how Snyk helps organizations build fast and stay secure in an AI-powered world.
AI is redefining the developer experience
Build AI Trust and empower your team to develop fast and stay aligned with security goals.