Why AI supply chain risk has outgrown the SBOM model
For years, software bills of materials were built on assumptions that generally held true. Dependencies were static. Release cycles were predictable. By inventorying what went into an application at build time, teams could make informed risk decisions later. That approach worked because traditional software behaved in bounded, predictable ways. AI breaks those assumptions almost immediately.
AI supply chains shift rapidly. New open source models appear overnight. Agent frameworks evolve in public repositories faster than most security teams can keep track of. MCP servers, prompts, datasets, and orchestration tools change constantly as developers experiment. The pace is not just faster. It is fundamentally different.
AI components are also not passive ingredients. They actively shape the system as it runs. Models can pull in new tools at runtime. Agents generate prompts, chain actions, and invoke services never defined in the original code. Components create new relationships and execution paths on the fly, meaning the supply chain is no longer fixed at build time.
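To make that difference concrete, here is a minimal Python sketch of an agent whose tool surface is resolved at runtime. Everything in it is hypothetical and framework-agnostic: the point is that a build-time inventory of this file would list its libraries but say nothing about the tools the agent can actually invoke.

```python
"""Illustrative only: an agent whose tool surface is resolved at runtime.

A build-time inventory of this file would list its imports, but not the
tools the agent can invoke, because those arrive from a server when the
process runs. All names and the discovery step are hypothetical.
"""
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str
    invoke: Callable[[dict], str]


def discover_tools() -> Dict[str, Tool]:
    # In a real agent this would be a network call to an MCP server or a
    # tool registry; nothing about the result is visible in source control.
    return {
        "query_internal_db": Tool(
            name="query_internal_db",
            description="Run read-only SQL against an internal database",
            invoke=lambda args: f"rows matching: {args.get('sql', '')}",
        ),
    }


def run_agent(task: str) -> str:
    tools = discover_tools()  # the tool surface is decided here, at runtime
    # A real agent would let the model choose among the tools; we pick one
    # directly so the sketch stays self-contained and runnable.
    tool = tools["query_internal_db"]
    return tool.invoke({"sql": f"SELECT * FROM tickets WHERE note LIKE '%{task}%'"})


if __name__ == "__main__":
    print(run_agent("billing outage"))
```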
This is where the SBOM model starts to break down. A static list of components cannot capture a system that changes as it operates. Security teams need more than a snapshot of what existed at the time of commit. They need visibility into how AI components interact, what they call, and how those relationships evolve.
That need is what led to the idea of an AI bill of materials. But the term can be misleading. An AI-BOM is not a checklist. It is a living map of components, behaviors, and connections that reflects how AI systems actually function. Static artifacts are no longer enough when the software itself is designed to change.
The rising problem: Half of the AI supply chain lives outside the repo
Once you accept that AI supply chains are dynamic, another problem comes into view. What exists in the repository is only part of the picture. A growing portion of the AI supply chain never reaches source control. It runs on developer machines.
Developers are installing and experimenting with AI tools locally, often faster than security teams can track. At a large media organization, teams set up local MCP servers to speed up development, only to discover that security had no visibility into where those servers were located or what they accessed. At a global investment firm, developers tested local AI agents and automation tools with broad access to internal systems, without review. In another enterprise, local MCP tooling was formally banned, yet developers continued to use it because it simplified their work.
This behavior is not malicious; rather, it’s practical. Developers use the tools that help them solve problems. Risk appears when those tools operate outside the controls that security teams depend on.
Local AI agents and MCP servers often have direct access to pipelines, credentials, and internal services. Data can move without documentation or oversight. Because these components never pass through CI/CD, SAST, or SCA workflows, their behavior remains largely unseen, despite being only a laptop away from production.
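As an illustration of how those components are typically wired up, here is a hypothetical local MCP client configuration, shown as a Python dict for consistency with the other sketches in this post (several MCP clients store an equivalent JSON "mcpServers" block on disk). Every server name, command, URL, and credential below is invented.

```python
# Hypothetical local MCP client configuration, expressed as a Python dict.
# Every server name, command, URL, and credential below is invented.
local_mcp_config = {
    "mcpServers": {
        "internal-deploy": {
            # Launched directly on the laptop; this definition never
            # appears in any repository or pipeline configuration.
            "command": "npx",
            "args": ["-y", "internal-deploy-mcp"],
            "env": {
                # Real CI credentials handed to a local, unreviewed tool.
                "CI_TOKEN": "redacted-token-value",
                "ARTIFACT_REGISTRY_URL": "https://registry.internal.example",
            },
        }
    }
}

# None of this passes through CI/CD, SAST, or SCA, yet the process it starts
# can reach pipelines and internal services using the credentials above.
```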
This shift has turned shadow IT into something more complex. Shadow IT was about unapproved SaaS. Shadow AI includes unapproved models, local tools, MCP servers, agents, and experimental frameworks running on developer endpoints. It is local, fast-changing, and driven by individuals rather than procurement.
The result is a widening visibility gap. Even the most complete repository view misses AI components that never enter it. As AI development accelerates, developer machines have become one of the least understood parts of the AI supply chain.
Why AI-BOM alone doesn’t solve this (but it’s a start)
An AI bill of materials brings much-needed structure to modern AI systems. It provides visibility into models referenced in source code, agent frameworks in repositories, MCP server configurations, prompts, datasets, and declared dependencies. For AI components that are committed, reviewed, and versioned, this insight is valuable.
That visibility creates a baseline. It enables security teams to transition from assumptions to facts and start governing AI usage with confidence. Within source control, AI-BOM provides a clear view of what exists and how components are connected. The limitation is what never reaches the repository.
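As a rough sketch of what repository-side detection involves, the toy scanner below walks a checked-out repo and flags a few common signals: model identifiers in code, MCP configuration files, and agent-framework imports. The patterns and file names are simplified assumptions for illustration, not Snyk's actual detection logic.

```python
"""Toy repository scan for AI supply chain signals.

The patterns below are deliberately simplified assumptions; a production
AI-BOM generator resolves far more component types and their relationships.
"""
import re
from pathlib import Path

MODEL_REF = re.compile(r"""["'](gpt-4[\w.-]*|claude-[\w.-]+|llama[\w.-]*)["']""", re.I)
AGENT_IMPORT = re.compile(r"^\s*(?:from|import)\s+(langchain|crewai|autogen)\b", re.M)
MCP_CONFIG_NAMES = {"mcp.json", ".mcp.json", "mcp_servers.json"}


def scan_repo(root: str) -> dict:
    findings = {"models": set(), "agent_frameworks": set(), "mcp_configs": []}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.name in MCP_CONFIG_NAMES:
            findings["mcp_configs"].append(str(path))
        if path.suffix != ".py":
            continue
        text = path.read_text(errors="ignore")
        findings["models"].update(m.group(1) for m in MODEL_REF.finditer(text))
        findings["agent_frameworks"].update(m.group(1) for m in AGENT_IMPORT.finditer(text))
    return findings


if __name__ == "__main__":
    print(scan_repo("."))
```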
AI-BOM cannot see local MCP toolchains on developer machines, AI agent runtimes used for testing, LLM clients installed on endpoints, or CLI tools and agentic frameworks developers experiment with locally. Many of these tools never make it into GitHub at all.
This creates a blind spot. Repository-based visibility reflects intent, not everything that actually runs during development. Once AI tooling extends beyond CI/CD and traditional scanning coverage onto laptops, it falls outside the reach of AI-BOM, even though that tooling often has access to sensitive data and internal systems.
AI-BOM alone does not solve the problem. It is a necessary starting point, but it only tells part of the story. When AI tooling operates outside the repository, so does a meaningful share of risk.
The solution: Unifying AI-BOM + developer desktop visibility
Closing the gap between repositories and developer machines does not require more friction. It requires a different visibility model. Rather than adding heavy endpoint agents or forcing new workflows, the emerging approach focuses on correlating existing information across environments.
Repository-based AI-BOM detection continues to do its core job well: identifying models, agent frameworks, prompts, datasets, and configurations in source control. Lightweight scanning on developer machines complements this by surfacing local MCP servers, agent runtimes, and AI tooling as they are actually used. These scans are purpose-built and narrowly scoped, not traditional endpoint monitoring. Together, they create a coherent view of the AI supply chain as it exists in practice.
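To give a sense of how lightweight that local discovery can be, here is an illustrative sketch that inventories a developer machine for two kinds of signals: MCP client configuration files and AI CLI binaries on the PATH. The specific paths and tool names are assumptions, and this is not how any particular product implements it; the point is that it is a narrow file-and-config inventory, not endpoint monitoring.

```python
"""Toy local discovery of AI tooling on a developer machine.

The config paths and binary names are illustrative guesses, not a complete
or authoritative list, and not how any particular product implements this.
"""
import json
import shutil
from pathlib import Path

HOME = Path.home()

# Places where MCP clients commonly keep server definitions (illustrative).
CANDIDATE_MCP_CONFIGS = [
    HOME / ".cursor" / "mcp.json",
    HOME / ".codeium" / "windsurf" / "mcp_config.json",
    HOME / "Library" / "Application Support" / "Claude" / "claude_desktop_config.json",
]

# AI-related CLI tools that may be installed outside any repo (illustrative).
CANDIDATE_BINARIES = ["ollama", "aider", "llm"]


def discover() -> dict:
    servers = {}
    for config in CANDIDATE_MCP_CONFIGS:
        if not config.is_file():
            continue
        try:
            data = json.loads(config.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        servers[str(config)] = sorted(data.get("mcpServers", {}))
    binaries = {name: path for name in CANDIDATE_BINARIES if (path := shutil.which(name))}
    return {"mcp_servers": servers, "ai_binaries": binaries}


if __name__ == "__main__":
    print(json.dumps(discover(), indent=2))
```

Correlating an inventory like this with repository-side findings is what produces the unified view described above.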
This unified view reflects what customers are asking for. Some want to track the same MCP frameworks across repositories and local environments. Others want a single place to answer a simple question: which AI components are in use anywhere in the organization? Many want guardrails that guide safe use without banning tools or disrupting developer workflows.
The strength of this approach is that visibility itself becomes the control. When teams can visualize which components exist, how they are connected, and where they run, governance becomes practical. Policies can focus on approved tools, safe configurations, and version consistency rather than restrictions that developers work around.
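To make "visibility becomes the control" concrete, the sketch below evaluates discovered components against a hypothetical approved-use policy that cares about allowed tools and minimum versions rather than blanket bans. The policy schema, component fields, and names are all assumptions.

```python
"""Toy policy check: evaluate discovered AI components against an allowlist.

The policy schema and component records are hypothetical, meant only to show
governance framed as approved tools and versions rather than prohibition.
"""
from dataclasses import dataclass


@dataclass
class Component:
    name: str       # e.g. an MCP server, agent framework, or model
    version: str
    location: str   # "repo" or "desktop"


# Hypothetical policy: which components are approved, and at what versions.
POLICY = {
    "approved": {
        "github-mcp-server": {"min_version": "1.2.0"},
        "crewai": {"min_version": "0.60.0"},
    }
}


def _ver(v: str) -> tuple:
    # Naive numeric version parse, good enough for this sketch.
    return tuple(int(p) for p in v.split(".") if p.isdigit())


def evaluate(component: Component) -> str:
    rule = POLICY["approved"].get(component.name)
    if rule is None:
        return f"REVIEW: {component.name} ({component.location}) is not on the approved list"
    if _ver(component.version) < _ver(rule["min_version"]):
        return f"UPDATE: {component.name} {component.version} is below {rule['min_version']}"
    return f"OK: {component.name} {component.version}"


if __name__ == "__main__":
    inventory = [
        Component("github-mcp-server", "1.1.0", "desktop"),
        Component("internal-deploy-mcp", "0.0.1", "desktop"),
        Component("crewai", "0.80.0", "repo"),
    ]
    for item in inventory:
        print(evaluate(item))
```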
AI Security Posture Management provides the unifying layer. It brings together repository insights, dependency relationships, local MCP and agent discovery, and governance policies into a single view. The result is not tighter control through force, but better control through understanding, allowing organizations to govern AI responsibly as it evolves.
The combined value proposition: The only complete view of your AI supply chain
When visibility spans both repositories and developer machines, the AI supply chain finally comes into focus. Code alone never tells the whole story. Local experimentation alone misses how systems are formalized and shipped. Bringing the two together creates a complete picture of how AI is actually built, tested, and used across the organization.
Most security tools only ever see one side of the picture. Some focus on what is checked into source control. Others look at isolated runtime signals or endpoint activity. AI Security Posture Management connects those views. It understands what exists in code and what runs on developer desktops, and it correlates them into a single, coherent model. That combined perspective is what turns fragmented signals into actionable insight.
This matters because restriction has proven to be an ineffective control. Banning MCP servers does not stop their use. Banning local agents only pushes experimentation further out of sight. Developers will continue to adopt tools that help them move faster. The difference between unmanaged risk and controlled adoption is visibility. When teams can see which AI components are in use, where they run, and how they interact, they can guide behavior through policy and configuration rather than prohibition.
Staying ahead of that change requires constant awareness of emerging trends. The AI ecosystem does not stand still. New agent frameworks appear regularly. MCP servers evolve. Open source models change. Orchestration tools, dataset loaders, and automation libraries expand the surface area every month. Customers rely on Snyk to track that evolution and translate it into detection and context that they can act on.
Rounding out the story, discovery across repositories and development endpoints shows what is exposed, while Snyk Code closes the loop by revealing how those exposures were created. Organizations also need SAST to map runtime and asset-level findings back to the vulnerable code paths that introduced them. This lets teams move from fragmented detection to coordinated remediation, where code fixes automatically reduce downstream risk across the endpoint and repository environments behind AI-powered systems.
That research foundation is what keeps the view complete over time. As the AI supply chain grows and shifts, the ability to recognize new components and understand their role becomes just as important as seeing what already exists. The result is a living understanding of AI risk that keeps pace with how development actually works.
AI-BOM + developer desktop visibility = the new AI security baseline
When visibility spans both repositories and developer machines, the AI supply chain finally becomes understandable. Code shows what teams intend to ship. Developer environments reveal how AI is explored, tested, and used. Together, they provide the context security teams need to move from reactive guesswork to informed governance.
This matters because restriction does not scale. Blocking MCP servers or local agents only drives experimentation further out of sight, while visibility lets teams guide adoption through policy and configuration instead of disruption.
Maintaining an accurate view requires continuous awareness of change. The AI ecosystem evolves quickly, with new agent frameworks, MCP servers, models, and tooling emerging constantly. The ability to recognize those components and understand their role is what turns visibility into a durable security posture.
The AI supply chain is evolving faster than traditional security models can adapt. Discover how continuous visibility helps you stay ahead without adding friction.
Secure your supply chain with Snyk
87% of our respondents were impacted by supply chain security issues. Keep yours secure with Snyk.