Building AI Security with Our Customers: 5 Lessons from Evo’s Design Partner Program
April 1, 2026
In 2025, we embarked on a new journey to secure the most important technology transformation of this decade – generative AI.
Our vision is to help companies secure their AI fast, so that they can innovate on the cutting edge and put AI and agentic use cases into production. To do this, we built Evo, the world’s first agentic orchestrator for AI security.
The foundation of any product is customer needs. We turned to our 5,000 customers and selected a panel of design partners to help us understand and focus on the highest-impact challenges they were trying to solve. Here are the top lessons we learned from them over the last 12 months of building Evo.
Key takeaways
You can’t secure what you can’t see: AI sprawl and shadow AI are far bigger than most teams realize; discovery is the unlock.
Custom AI is the future: Detection must go beyond standard libraries to understand how teams build.
Spreadsheets don’t scale: Static tracking creates blind spots and a false sense of control.
Teams need a starting point for governance: Out-of-the-box, continuously enforced policies are critical to moving from chaos to control.
Risk intelligence is the missing layer: Without clear, actionable risk signals, teams can’t prioritize or act.
1. Visibility is the key to uncovering AI sprawl at AI speed
Shadow AI is bigger than CISOs realize, and visibility is the unlock. Most customers have no idea how many AI models, services, or agents are in use until they see it for themselves.
With Evo AI-SPM’s Discovery Agent, the “aha moment” comes in the first five minutes. A large retailer scanned 16k+ repos, uncovered 500+ models, and found a single team running 8 different versions of GPT-3. They “knew it was a problem—but had no visibility until now.”
Before Evo, producing a full AI inventory took 4–5 weeks, 10–12 stakeholders, and coordination across teams. Many tried spreadsheets: simple, familiar, but static. As AI adoption grows, spreadsheets break—missing shadow AI, hidden dependencies, and real-time behavior—giving a false sense of control and leaving gaps in governance, risk, and compliance.
Discovery isn’t table stakes—it is the product for this phase of the market. Evo automatically uncovers LiteLLM, agent skills, and emerging AI capabilities in real time, giving you visibility and control from day one.
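As a rough illustration of what import-based discovery looks like under the hood, a minimal scanner might flag repo files that import well-known AI SDKs. The library names and regexes below are illustrative assumptions, not Evo's actual ruleset:

```python
import re
from pathlib import Path

# Illustrative signatures: import patterns that suggest AI usage in a repo.
# A real discovery engine would cover many more languages and package forms.
AI_IMPORT_PATTERNS = {
    "openai": re.compile(r"^\s*(import|from)\s+openai\b", re.MULTILINE),
    "litellm": re.compile(r"^\s*(import|from)\s+litellm\b", re.MULTILINE),
    "anthropic": re.compile(r"^\s*(import|from)\s+anthropic\b", re.MULTILINE),
}

def discover_ai_usage(repo_root: str) -> dict[str, list[str]]:
    """Return {library: [files that import it]} for one repo."""
    hits: dict[str, list[str]] = {}
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files rather than failing the scan
        for lib, pattern in AI_IMPORT_PATTERNS.items():
            if pattern.search(text):
                hits.setdefault(lib, []).append(str(path))
    return hits
```

Even this toy version makes the "aha moment" concrete: pointing it at a monorepo surfaces AI usage no spreadsheet ever recorded.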
2. Closing the gap requires tailored discovery
As organizations move beyond off-the-shelf AI and begin building custom, first-party agents to differentiate their products, they increasingly rely on internal wrappers around models, tools, and MCP servers to control behavior, integrate proprietary data, and enforce business logic. This shift is foundational; custom AI systems are becoming the future of how companies compete.
But it also exposes a critical gap: standard detection approaches that rely on known libraries and signatures are blind to these bespoke implementations. Multiple design partners surfaced this hard-to-anticipate limitation: they had built their own custom wrappers and libraries, making them invisible to standard detection tools.
This insight led the team to build Custom Discovery, a capability that learns from a customer's codebase to identify patterns specific to them, with a confidence score so users can approve or reject candidate detections.
Our design partners were critical here, as the feature uses customer code (even if isolated per tenant). This learning directly shaped one of the most differentiated AI-SPM capabilities.
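To make the idea of confidence-scored custom detection concrete, here is a deliberately simplified sketch: score a candidate code snippet by how many AI-suggestive tokens it contains, and surface anything above a review threshold for human approval. The token list, threshold, and scoring scheme are assumptions for illustration, not how Custom Discovery actually learns from a codebase:

```python
import re

# Hypothetical vocabulary of AI-suggestive identifiers; a real system would
# learn these patterns from the customer's own code, per tenant.
AI_TOKENS = {"prompt", "completion", "llm", "model", "temperature", "embedding"}

def confidence(snippet: str) -> float:
    """Fraction of AI-suggestive tokens present in the snippet, 0.0-1.0."""
    words = set(re.findall(r"[a-z_]+", snippet.lower()))
    return round(len(AI_TOKENS & words) / len(AI_TOKENS), 2)

def candidate_detections(snippets: dict[str, str], threshold: float = 0.3):
    """Yield (name, confidence) pairs worth surfacing for user approval."""
    for name, body in snippets.items():
        score = confidence(body)
        if score >= threshold:
            yield name, score
```

The key design point survives the simplification: because custom wrappers have no universal signature, the detector emits scored candidates for the user to approve or reject rather than hard verdicts.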
3. CISOs need a scalable starting point for policy
A single enterprise can have dozens of models in use, each with different risk profiles spanning code vulnerabilities, bias, safety, and data exposure, yet it lacks a scalable way to consistently evaluate them. As one CISO put it, “I feel like I’m flying a plane while we’re still building the cockpit instruments.” Teams are left choosing between slow, manual reviews or skipping assessment altogether, creating blind spots in governance.
To address this, we built Snyk Generated Policies: a set of out-of-the-box policies that automatically evaluate every model against the security risks that matter most. Rather than relying on static metadata or vendor claims, these policies are backed by continuous, real-world testing and a standardized Risk Index, allowing organizations to systematically surface which models require attention.
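One plausible shape for such a policy check (the dimension names and thresholds below are illustrative assumptions, not the actual generated policies) is a ceiling on each dimension of a standardized risk index:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    dimension: str   # e.g. "data_exposure", "bias" (hypothetical names)
    max_risk: int    # highest acceptable score on a 0-100 risk index

def evaluate(model_risk_index: dict[str, int], policies: list[Policy]) -> list[str]:
    """Return human-readable violations for one model's risk index."""
    violations = []
    for p in policies:
        score = model_risk_index.get(p.dimension)
        if score is not None and score > p.max_risk:
            violations.append(f"{p.dimension}: {score} exceeds limit {p.max_risk}")
    return violations
```

Because the index is standardized, the same policy set evaluates every model the same way, which is exactly what makes it a starting point rather than another ad hoc review.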
One VP of Security told us they would benefit from out-of-the-box policies just to have a starting point, while a compliance-focused AI firm, fresh out of its first AI governance board meeting, was actively seeking a “crawl, walk, run” approach to establishing controls.
The result is a shift from ad hoc, reactive reviews to consistent, scalable AI risk prioritization, aligned with how modern enterprises are actually deploying AI. This drove the GA investment in default policies and continuous, near-real-time policy evaluation, turning static governance documents into enforced controls.
4. Risk intelligence enables AI governance
Across early adopters, a clear pattern emerged: once teams get past the "wow, I can see all my AI assets" moment, the immediate next question is always "what's actually risky about them?"
The challenge is novel. Unlike SCA or SAST, there's no CVE/CVSS equivalent for AI models. This led to the creation of the Risk Intelligence Agent, built from scratch on original security research into how AI models, agents, MCP servers, and more can be attacked.
Delivering on that required multiple workstreams working in concert. We needed to mine open source vulnerability reports and translate them into reproducible, benchmarkable tests for models.
We needed to survey the landscape of existing benchmarks and classify them into distinct risk categories. On top of that, there's a significant data science effort to turn raw scores into actionable risk indices — the kind you can actually set policy against — and a data engineering pipeline to keep them up to date as the model ecosystem evolves.
Then there's deploying the whole thing as an autonomous agent that can assess risk.
Design partners reinforced the urgency from a different angle. They had AI components in their repos but couldn't find meaningful metadata for them, and without an out-of-the-box risk signal to react to, they couldn't even begin building policies. Risk Intelligence Agent was the answer.
5. Operational security extends to agents, MCPs, and enforcement
As we iterate on our Discovery Agent with design partners, the vision that’s emerging is continuous visibility across agent intent, repos, endpoints, and ML pipelines. One VP of Engineering from an F500 payments firm flagged scale and consistency as the core problem: managing 10,000+ laptops and 80,000+ repos with no centralized AI control plane – a scale problem that Evo knows how to solve very well.
Design partners are looking to the Risk Intelligence Agent to assess an ever-growing list of threats. Every CISO and senior security leader is treating MCP as a significant new attack surface. Shadow AI is more prevalent than most leaders realize: unsanctioned models, MCP servers in tools like Cursor and Claude Desktop, and hidden cloud workloads.
Finally, the Policy Agent needs to govern AI, not just create issues. CI/CD pipeline blocking from the Policy Agent is in production as of AI-SPM GA. Design partners are standing up formal AI governance structures and looking to vendors for help operationalizing them. A VP at a financial services company described legacy tools as "psychotically noisy" — he cares less about total issue counts and more about identifying actionable, critical risks that truly impact the business. This is the bar for the Policy Agent to cross.
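A toy CI gate in that spirit would fail the build only on critical, policy-violating findings and stay quiet about the noise. The finding schema and severity labels below are assumptions for illustration, not the Policy Agent's actual output format:

```python
def gate(findings: list[dict]) -> int:
    """Return a CI exit code: 1 if any critical finding blocks the build."""
    critical = [f for f in findings if f.get("severity") == "critical"]
    for f in critical:
        print(f"BLOCKING: {f['title']}")
    return 1 if critical else 0

# In a pipeline step, this would end with: sys.exit(gate(load_findings())),
# where load_findings() is a hypothetical helper that parses the scanner report.
```

The design choice that matters is the filter: a gate that blocks on everything trains teams to ignore it, while one that blocks only on critical risks is the "less noise, more signal" bar the VP described.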
What we learned from our design partners is clear
AI security is still in its earliest innings, and most organizations are building the plane while flying it. Visibility comes first, but it’s not enough. As teams move toward custom, first-party AI systems, traditional approaches to discovery, risk assessment, and governance break down.
The future of AI security will be defined by continuous discovery, real-time risk intelligence, and policies that are not just written, but enforced. That’s the foundation Evo is built on. AI security is a fundamentally new problem, and the only way to solve it is by building alongside our customers who are defining it in real time.
Excited about shaping the future of AI security? Become a Design Partner today.
Evo Agent Red Teaming – Experimental Preview: adversarially tests AI applications, so you can ship with confidence that your AI meets market security standards.
