
You Patched LiteLLM, But Do You Know Your AI Blast Radius?

April 2, 2026

For a brief window, a widely used open source package in the AI ecosystem was compromised with credential-stealing malware.

LiteLLM, a model gateway used to route requests to more than 100 LLM providers, is downloaded millions of times per day. Even in that short window, the malicious versions were likely pulled tens of thousands of times before being caught.

In theory, this should have been reassuring: the project carried compliance certifications, security tooling was in place, and the issue was discovered and remediated quickly. But that’s exactly the problem.

Because this incident wasn’t just about a compromised dependency. It was a reminder of how modern AI systems actually fail: not at the surface, but across the layers we don’t fully see.

One early signal of that impact is already emerging. AI recruiting startup Mercor confirmed it was “one of thousands” of downstream victims impacted by the LiteLLM supply chain attack. In their case, the compromise didn’t stop at a vulnerable package; it reportedly led to large-scale data exfiltration, including source code, after stolen credentials were used to access internal systems.

That’s the part most teams miss. The risk isn’t the dependency itself; it’s what that dependency has access to at runtime. Once something like LiteLLM sits in the execution path between your application and model providers, it becomes a conduit to everything behind it: APIs, tools, agent workflows, and sensitive data.

Mercor and others weren’t breached simply because they “used LiteLLM.” They were breached because of the connections LiteLLM had.

In the original LiteLLM compromise, the malware itself was relatively unsophisticated. It was noisy, crashed machines, and got caught. A more subtle version, one designed to quietly exfiltrate credentials across model providers, APIs, and agent workflows, could have gone undetected for weeks.

And if it had, most teams wouldn’t have known what was actually at risk. They would have known they were using LiteLLM.

They would not have known:

  • Which models were being routed through it

  • Which providers were involved

  • Which tools and systems those models could access

  • How that risk propagated through their applications

That’s the gap. And it’s why incidents like this aren’t just supply chain problems anymore; they’re ultimately about AI system visibility.

When the LiteLLM compromise hit, most teams did exactly what you’d expect. They checked their dependencies, identified the vulnerable versions, and moved quickly to patch or pin to something safe. From a traditional application security perspective, that’s a job well done.
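For illustration, “pinning to something safe” usually means locking the dependency to an exact, known-good version in the manifest. The version number below is hypothetical; the actual safe releases are whatever the advisory names.

    # requirements.txt
    litellm==1.52.0   # exact pin to a known-good release (version illustrative)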

But it raises a more interesting question, one that most teams don’t stop to ask: What did you actually fix?

Answering it means going beyond where LiteLLM was used, and finding out what it was doing inside your system, what it was connected to, and what risk it introduced beyond the package itself. Because in AI applications, that’s where things get complicated.

Where traditional visibility breaks down

LiteLLM isn’t just another library sitting quietly in a codebase. It sits directly in the execution path between your application and the models it relies on. It routes requests, abstracts providers, and ultimately shapes how your system behaves at runtime.

That means that when LiteLLM is compromised, the impact isn’t limited to the repositories that include the package. It extends to which models are being called through it, which providers are involved, what tools those models can access, and which agent workflows depend on it.

This is where most teams lose visibility. A single line of code that specifies a model might seem insignificant, but it encodes a series of decisions about providers, capabilities, and access that aren’t captured anywhere in a dependency graph.
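For illustration, here’s the kind of one-liner in question, using LiteLLM’s completion() API. The model name and prompt are our own, not taken from any incident:

    # One call, but it encodes a provider choice, a model choice, and an
    # outbound network path, none of which show up in a dependency graph.
    from litellm import completion

    response = completion(
        model="anthropic/claude-3-5-sonnet-20241022",  # provider/model selected here, at runtime
        messages=[{"role": "user", "content": "Summarize this incident report."}],
    )
    print(response.choices[0].message.content)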

Multiply that across repositories, teams, and evolving agent workflows, and what emerges isn’t just a dependency problem. It’s a system that’s difficult to fully see or understand.

The gap Evo AI-SPM is designed to close

Instead of focusing solely on the dependencies that exist, Evo focuses on how AI is being used. In the case of LiteLLM, that means identifying it as a model gateway, mapping which providers and models are routed through it, discovering the tools and APIs those models interact with, and connecting it all back to the agent workflows that define system behavior.

The result is an AI-BOM, a living map of the AI system itself, not just the components it’s built from.
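To make that concrete, here is a rough sketch of what a single AI-BOM entry might capture. This is our conceptual illustration, not Evo’s actual schema or output; every field name and value below is hypothetical.

    # Conceptual AI-BOM entry (illustrative, not Evo's actual format)
    component: litellm
    role: model-gateway
    providers: [openai, anthropic]
    models: [gpt-4o, claude-3-5-sonnet]
    reachable_tools: [internal-crm-api, code-search]
    used_by_agents: [support-triage-agent]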

Why context changes everything

That added context fundamentally changes how teams respond to incidents. If all you know is that LiteLLM is compromised, the next step is straightforward: fix it. But once you understand how it’s used, the response becomes more nuanced.

You can start by asking whether traffic is being routed to unapproved model providers, which agents rely on that pathway, what external systems are exposed through it, and whether any policies govern those interactions in the first place. That’s the difference between reacting to a vulnerability and understanding your actual exposure.
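Even without dedicated tooling, a crude version of the first question is scriptable. Here is a minimal sketch that greps for LiteLLM-style “provider/model” strings and flags providers outside an allow-list; the allow-list, pattern, and file layout are all assumptions, and this is no substitute for real discovery:

    # Rough sketch: flag model calls routed to providers outside an allow-list.
    # The allow-list and the "provider/model" naming convention are assumptions.
    import pathlib
    import re

    APPROVED = {"openai", "anthropic"}  # hypothetical approved providers
    PATTERN = re.compile(r"""model\s*=\s*["']([\w.-]+)/""")

    for path in pathlib.Path(".").rglob("*.py"):
        for n, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for provider in PATTERN.findall(line):
                if provider not in APPROVED:
                    print(f"{path}:{n}: model routed to unapproved provider '{provider}'")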

This is already how AI apps are built

This reflects how AI-powered applications are already being built. A developer might use a framework to orchestrate an agent, rely on LiteLLM to abstract model access, and connect that agent to external tools or APIs to complete tasks. From a traditional perspective, this shows up as a vulnerable dependency. From a system perspective, it represents a chain of decisions, integrations, and behaviors that extend far beyond the package itself.
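Here is a minimal sketch of that chain, assuming LiteLLM’s OpenAI-compatible tool-calling interface. The tool, its data, and the model string are hypothetical:

    # Hypothetical sketch: an agent that routes all model access through LiteLLM
    # and exposes one internal tool. Everything named here is illustrative.
    import json
    from litellm import completion

    def lookup_customer(customer_id: str) -> str:
        # Stand-in for an internal API: exactly the kind of access a
        # compromised gateway would sit in front of.
        return json.dumps({"id": customer_id, "plan": "enterprise"})

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "lookup_customer",
            "description": "Fetch a customer record from an internal API.",
            "parameters": {
                "type": "object",
                "properties": {"customer_id": {"type": "string"}},
                "required": ["customer_id"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What plan is customer 42 on?"}]
    response = completion(model="openai/gpt-4o", messages=messages, tools=TOOLS)

    for call in response.choices[0].message.tool_calls or []:
        if call.function.name == "lookup_customer":
            result = lookup_customer(**json.loads(call.function.arguments))
            messages.append(response.choices[0].message)  # the model's tool request
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
            # Internal data now flows back out through the same gateway.
            final = completion(model="openai/gpt-4o", messages=messages, tools=TOOLS)
            print(final.choices[0].message.content)

One dependency, three trust boundaries: the model provider, the internal API, and the gateway sitting between them.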

The AI you don’t know you have

What often surprises teams is how much AI already exists in their environments. Many believe they’re still early in their AI adoption, only to discover scattered usage of model gateways, orchestration frameworks, and emerging agent patterns across their codebases.

None of it is centralized, much of it isn’t governed, and yet it’s already part of production systems. Incidents like the LiteLLM compromise don’t create this complexity; they expose it.

You still need SCA (just not only SCA)

This isn’t a failure of Software Composition Analysis (SCA). Tools like Snyk Open Source flag compromised versions of LiteLLM, surface where those versions existed across repositories—including transitive dependencies—alert teams quickly, and provide clear remediation guidance.

That signal is foundational, and without it, teams wouldn’t even know there was an issue. But SCA is built to answer a very specific question: “Is this dependency vulnerable?” The challenge is that modern AI systems don’t stop at dependencies.

A common reaction to incidents like this is to say, “SCA already caught it.” And that’s true, but it assumes that the dependency is the system. In reality, it’s just the entry point. The system is everything that dependency enables: model access, tool execution, agent orchestration, and dynamic decision-making. If you can’t see that layer, you don’t fully understand where your risk lives.

The LiteLLM compromise is just one example, but it makes the shift clear: if you’re only looking at dependencies, you’re not seeing the system. SCA tells you that something is wrong. Evo helps you understand what that means in the context of your AI system and gives you the ability to control it.

How to run Evo AI-SPM

The fastest way to see this in your own environment is to run Evo AI-SPM. In minutes, you can:

  • Identify where LiteLLM (and similar model gateways) exist across your repos

  • See which model providers and models are being routed through them

  • Discover connected tools, APIs, agents, and workflows

  • Uncover “shadow AI” that isn’t visible through traditional security tools

  • Apply policies to control what’s allowed moving forward

Most teams are surprised by what they find on the first scan. The reality is simple: if you’re building with AI, you already have an AI supply chain. You just might not be able to see it yet.

Snyk Open Source helps you find vulnerabilities.

Evo AI-SPM shows you the system and helps you secure it.

Start there.

You can’t govern AI you can’t see

Start with Discovery. Start with Evo AI-SPM.

Uncover every AI component hidden in your codebase and apply organization-wide governance.