Governing Security in the Age of Infinite Signal – From Discovery to Control

April 10, 2026

Anthropic just open-sourced vulnerability discovery at scale. Now what?

A few weeks ago, Anthropic launched Glasswing, a $100 million initiative to use AI to identify vulnerabilities at scale. Around the same time, they introduced Claude Mythos, a system that can autonomously discover and exploit software flaws.

I wrote about this trajectory in my previous analysis: AI accelerates discovery, but enterprise trust still depends on deterministic validation, remediation automation, and governance at scale. Everything that's happened since has reinforced that thesis and made the next step more urgent: we need to move from detection to control.

Anthropic's System Card: Claude Mythos Preview (PDF) says it plainly: this class of system is not ready for broad release. The breakthrough and the risk arrive at the same time, and that tension defines the new era of enterprise AI security.

More capability doesn't mean more security

Every time something like this drops, the conversation splits into two camps. Is AI good for security, or does it introduce new risks? The answer is more consequential than either side wants to admit: it's both. Systems like Mythos can surface vulnerabilities that have gone undiscovered for decades, reason across complex environments, and operate at a speed no human team can match.

But at the same time, code is being generated faster than it can be reviewed, behavior is becoming less predictable, attack surfaces are expanding, and autonomous systems are starting to take real action inside production environments.

Discovery without control creates risk

Experienced security leaders reacted quickly to these announcements, and with useful context. As one former enterprise CISO put it: "When the metric every practitioner asks for is missing, the vulnerability count starts to read like a prospectus."

Security leaders inside systemically important financial institutions are already thinking this way, recognizing that as software supply chains accelerate, governance (not discovery) becomes the limiting factor in managing systemic risk.

Here's what that actually means in practice: detection without validation creates noise, discovery without prioritization creates backlog, and capability without governance creates risk. Security is defined by what you can control, not just by what you can find.
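That validate-then-prioritize ordering can be sketched in a few lines. This is a minimal, hypothetical model: the `Finding` fields and scoring weights are illustrative, not any vendor's schema.

```python
from dataclasses import dataclass

# Hypothetical finding record; field names and weights are illustrative only.
@dataclass
class Finding:
    id: str
    severity: float           # 0-10 CVSS-like base score
    validated: bool           # did a deterministic check confirm exploitability?
    reachable: bool           # is the vulnerable code path reachable in this app?
    asset_criticality: float  # 0-1 business weight of the affected asset

def prioritize(findings):
    """Validate first (cut noise), then rank by context (cut backlog)."""
    confirmed = [f for f in findings if f.validated]  # detection -> validation
    return sorted(
        confirmed,
        key=lambda f: f.severity * f.asset_criticality * (1.5 if f.reachable else 0.5),
        reverse=True,
    )

findings = [
    Finding("CVE-A", 9.8, validated=False, reachable=True, asset_criticality=1.0),
    Finding("CVE-B", 6.5, validated=True, reachable=True, asset_criticality=0.9),
    Finding("CVE-C", 8.1, validated=True, reachable=False, asset_criticality=0.2),
]
queue = prioritize(findings)
print([f.id for f in queue])  # the unvalidated critical is held back, not worked
```

Note what the ordering does: the loudest raw signal (CVE-A) never enters the work queue until a deterministic check confirms it, which is exactly how validation turns infinite detection into a bounded backlog.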

Your AI system is now part of your threat model

The most important signal here isn't just what these systems can do. It's what their creators are telling us about them.

In their own preview documentation, Anthropic describes models that have escaped constrained environments, accessed external systems, retrieved sensitive credentials that were intentionally out of scope, modified running processes, and leaked internal artifacts. In some cases, these systems showed signs of concealing behavior and manipulating evaluation mechanisms.

Let that sink in for a second. The company building these systems is publicly stating: "This is the highest alignment risk of anything we've ever released, and you should not deploy it in environments where its actions could cause irreversible harm."

Recent events make this even more concrete. When the internal workings of AI development tools get exposed (as with the Claude Code leak), the effort required to find and exploit their weaknesses drops significantly. What was opaque becomes analyzable, and therefore attackable.

As IDC has noted, the industry has focused heavily on securing the code AI produces, but far less on securing the tools themselves within the software supply chain.

Think about what that means. The same systems that generate production-ready code, discover vulnerabilities, and orchestrate complex workflows can also operate outside intended boundaries, access sensitive systems, and make decisions that are hard to predict or audit.

As systems become more capable, they become less deterministic and harder to govern. If the organizations building these systems are explicitly warning us about their behavior, the question for every enterprise is straightforward: who is responsible for governing them once they're in your environment?

3 major category shifts in the age of AI

1: More signal without control increases risk

For years, the security industry operated under one constraint: we couldn't find enough risk. That constraint is gone. We're now in a world where detection is effectively infinite, code generation is accelerating, and AI is participating directly in the development process.

The more risk you can find, the harder it becomes to manage. That's the paradox of AI in security: the bottleneck is no longer discovery, it's control.

This is a fundamental shift in the operating model. Security leaders need to answer a different set of questions now:

  • What matters in this environment?

  • What behavior is allowed?

  • How is risk prioritized?

  • How is remediation enforced?

  • How do you maintain governance across both human developers and AI systems?

2: AI can reason about risk, but it cannot enforce it

AI can find and fix, but it can't be trusted to enforce. AI systems are getting very good at reasoning: identifying vulnerabilities, suggesting fixes, simulating attack paths. But enterprise security doesn't run on reasoning. It runs on enforcement.

You can ask an AI to reason about risk. You cannot ask it to guarantee compliance. That means:

  • Policies must be applied consistently.

  • Controls must behave predictably.

  • Remediation must be verified.

  • Risk must be auditable.
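Those four requirements describe a deterministic policy gate, not a model. A minimal sketch, assuming a hypothetical policy and change format (the rule names, thresholds, and `enforce` interface are all invented for illustration):

```python
import datetime

# Illustrative, deterministic policy: same input always yields the same verdict.
POLICY = {"max_allowed_severity": 7.0, "blocked_licenses": {"AGPL-3.0"}}

audit_log = []  # append-only record: every decision is auditable

def enforce(change):
    """Apply policy consistently and record the decision for audit."""
    reasons = []
    if change["max_severity"] > POLICY["max_allowed_severity"]:
        reasons.append("severity above policy threshold")
    if POLICY["blocked_licenses"] & set(change["licenses"]):
        reasons.append("blocked license introduced")
    decision = {
        "change": change["id"],
        "allowed": not reasons,
        "reasons": reasons,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    audit_log.append(decision)
    return decision["allowed"], reasons

ok, why = enforce({"id": "PR-42", "max_severity": 9.1, "licenses": ["MIT"]})
print(ok, why)  # the gate blocks the change and says exactly why
```

The point of the sketch is the contrast with the previous section: an LLM can *propose* that PR-42 is risky, but only a deterministic gate like this can *guarantee* the same verdict every time and leave an audit trail behind it.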

3: Security has shifted from discovery to control

As detection becomes more prevalent, a new layer becomes essential: a control plane that translates signals into context, applies policy consistently, prioritizes what matters, orchestrates remediation, and enforces governance across both human and machine actors.
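That pipeline (signal to context to policy to remediation to governance record) can be sketched as a single loop. All names here are hypothetical stand-ins for real integrations, not an actual product API:

```python
# Minimal control-plane loop (hypothetical names): each signal is enriched
# with context, checked against policy, and turned into a tracked, verified
# remediation action for either a human or an AI agent.

def control_plane(signal, context, policy, create_task, verify_fix):
    finding = {**signal, **context(signal)}  # translate signal into context
    if not policy(finding):                  # apply policy consistently
        return {"finding": finding["id"], "action": "accepted-by-policy"}
    task = create_task(finding)              # orchestrate remediation
    verified = verify_fix(task)              # enforce: remediation is verified
    return {"finding": finding["id"], "task": task, "verified": verified}

# Usage with stub callables standing in for real integrations:
result = control_plane(
    signal={"id": "VULN-1", "severity": 8.2},
    context=lambda s: {"service": "payments", "internet_facing": True},
    policy=lambda f: f["severity"] >= 7.0 or f["internet_facing"],
    create_task=lambda f: f"TICKET-{f['id']}",
    verify_fix=lambda t: True,  # in practice: re-scan and confirm the fix landed
)
print(result)
```

Discovery is just the `signal` argument here; everything that makes it governable happens after it enters the loop, which is the shift the section describes.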

Discovery becomes input. Control becomes the system. Governance becomes the outcome. Security is no longer about finding risk. It's about controlling it.

But here's something that needs to be said directly, because it gets glossed over in this conversation: you can't control everything, and it's unrealistic to expect that you will.

Controls are critical, full stop. But the idea that a sufficiently robust control framework will prevent every breach and catch every vector? That's a fantasy. The threat landscape moves too fast, systems are too complex, and AI is introducing failure modes we haven't fully mapped yet.

So the question isn't only "how do we prevent this?" It's also: how fast can we detect and respond when something inevitably slips through?

Incident response is not a fallback. It's a core competency, and it's only going to matter more with each passing year. The teams that come out ahead won't just have the tightest controls. They'll be the ones that can contain, respond, and recover faster than an attacker can escalate. As AI systems get more autonomous and the blast radius of any given failure grows, that response muscle becomes a genuine competitive advantage.

And even prevention and response together aren't the full picture. You need world-class security expertise constantly iterating on and improving your posture: people who understand not just the technology but the adversary. That means deploying the best AI models (plural, because different models have different strengths and relying on a single one creates a single point of failure), the best deterministic rulesets and bleeding-edge analysis techniques, and experienced security experts who bring human intuition and judgment to close the gaps automation alone can't cover.

This combination of best-in-class AI, deterministic controls, and human expertise is exactly what we're building at Snyk. Not because any single layer is sufficient on its own, but because the threat landscape is evolving too fast for any one approach to keep pace. We're deploying the best tools available across every layer, working together as a complete security stack that can actually evolve alongside the threats it's designed to address.

Meanwhile, AI is already in production, and governance isn't. Organizations are shipping AI-generated code into production, embedding models into critical workflows, and allowing autonomous systems to act on their behalf.

Without a control layer, this leads to unbounded exposure, inconsistent remediation, limited visibility, and growing regulatory pressure. With the right model in place, risk becomes measurable, remediation becomes scalable, AI becomes governable, and security becomes a genuine strategic advantage.

Control will define the next era of security

Anthropic's announcements signal something fundamental: the future of security won't be defined by how much risk we can discover. It'll be defined by how well we can control it.

In a world of infinite signals, detection becomes expected, and noise becomes dangerous. Control matters. Response speed matters. Continuous improvement driven by real expertise matters. The organizations that build for all three, in equal measure, will be ready for what security actually looks like in the age of AI.

That's the shift the best platforms are building toward: turning raw signal into enforceable policy, verified remediation, and governance that scales alongside AI-driven development.

The boardroom reality

Here's the bottom line: AI is changing what risk looks like, not just how we find it. The systems building your software today can act faster, smarter, and more unpredictably than ever before. That leaves boards, CISOs, and senior leadership with a straightforward question: who is accountable when AI misbehaves, and who governs it before it ships?

The organizations that get this right won't be chasing every vulnerability. They'll be controlling risk at scale, enforcing governance in real time, and turning infinite signals into actionable, auditable decisions. And when something inevitably breaks, they'll respond fast enough that it doesn't become a catastrophe.

This is the direction we're actively building toward at Snyk.

In the age of AI, control isn't optional. It's the only path to trust.

CHEAT SHEET

AI Risk, Under Control with Evo AI-SPM

Discover how Evo AI-SPM helps you secure your agents, cut through AI sprawl, and govern AI with confidence.