Skip to main content
DevSecCon_2025_Logo_-_light_hor_year_1_if0eww

DevSecCon - AI Security Research

Watch the best of DevSecCon ‘25

DevSecCon 2025 virtual summit was a global celebration of innovation in AI Security with inspiring keynotes, hands-on demos, and groundbreaking community-led research. Dive into the sessions from the main stage below and join the global DevSecCon community to help shape the future of secure development.

AI security research track spotlight

Dive into cutting-edge talks exploring the evolving security challenges of the AI era. Discover how to safeguard AI-driven applications, gain visibility into models, and secure agents across the SDLC. Watch the recordings now.

How We Hacked YC Spring 2025 Batch’s AI Agents

Watch this session to learn how this team hacked 7 of the16 publicly-accessible YC X25 AI agents, allowing them to leak user data, execute code remotely, and take over databases. All within 30 minutes each. In this session, we’ll walk through the common mistakes these companies made and how you can mitigate these security concerns before your agents put your business at risk.

Red Teaming AI: How to Stress-Test LLM-Integrated Apps Like an Attacker

It’s not enough to ask if your LLM app is working in production. You need to understand how it fails in a battle-tested environment. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. 

Taming the AI Identity Explosion: Securing Agentic AI and Non-Human Identities

AI systems are rapidly shifting from static models to swarms of autonomous agents with their own identities, decisions, and access rights. Traditional identity and access management (IAM) systems built for humans and static service accounts can’t keep up. This talk will explore how agentic AI reshapes the identity landscape, where every agent may need its own verifiable, auditable, and revocable identity.

STOIC Security: Shielding Your Generative AI App from the Five Deadly Risks

Generative AI offers incredible opportunities but comes with significant cybersecurity challenges. As adoption accelerates, so do the risks—data theft, model manipulation, poisoned training data, operational disruptions, and supply chain vulnerabilities. This talk introduces the "STOIC" framework—Stolen, Tricked, Obstructed, Infected, Compromised—to help you identify and mitigate these threats.

Agents and MCP Servers: Are the electric sheep safe?

We have a new AI attack service. MCP servers are everywhere, and they are the new attack surface. Can the MCP server help protect the electric sheep from rogue agents and bad actors, or are they just another way to attack them?

Check out all the session tracks from DevSecCon 2025

Join our community

DevSecCon is where security enthusiasts learn, share, and shape the future of DevSecOps together.

Additional resources

Blog

Foundations of trust: Securing the future of AI-generated code

Learn about Snyk's incoming GenAI Partner Program and how it secures the code produced by AI coding assistants, ensuring developers can code faster and more securely.

Blog

SnykLaunch Oct 2024: Enhanced PR experience, extended visibility, AI-powered security, holistic risk management

Read a recap of our SnykLaunch event for October 2024, covering our new features that power a developer-first, risk-centric security experience.

Blog

Going beyond reachability to prioritize what matters most

While static reachability can help teams better understand their app vulnerabilities, they must be paired with other types of context and risk insights.