Nnenna Ndukwe
Principal Developer Advocate, Qodo

Software development is undergoing a seismic shift as AI transforms how we build, deploy, and secure applications. Register for the 1st-ever Global Community Summit on AI Security, covering critical strategies to empower AI innovation without compromising security.
Join us at DevSecCon25 for a global celebration of innovation in AI Security with inspiring keynotes, hands-on demos, and groundbreaking community-led research. Connect with fellow practitioners, and take part in our very first AI Security Hackathon—no matter where you are in the world!
Leading experts in AI and security from around the world
Nnenna Ndukwe
Principal Developer Advocate, Qodo
Zach Proser
Develpers Education, WorkOS
Guy Podjarny
Founder and CEO, Tessl
Bob Remeika
CEO, Ragie.ai
John McBride
Staff Engineer, Zuplo
Nate Barbettini
Founding Engineer, Arcade.dev
W. Ian Douglas
Staff Developer Advocate, Block
Rene Brandel
Cofounder & CEO, Casco (YC X25)
Brett Smith
Distinguished Software Engineer, SAS
Jeff Watkins
Chief Technology Officer, CreateFuture
Harshad Sadashiv Kadam
Senior Infrastructure Security Engineer, Indeed Inc
Peter McKay
CEO, Snyk
Manoj Nair
Chief Innovation Officer, Snyk
Danny Allan
Chief Technology Officer, Snyk
Aamiruddin Syed
Supply Chain Software Security, AGCO Corporation
Join us at this community-powered conference for an exciting mix of inspiring keynotes, practical hands-on sessions, and plenty of fun — complete with LIVE MUSIC AND DJ, developer challenges, and fun games!
Join us on the main stage for inspiring keynotes from leaders in AI and cybersecurity. Expect forward-looking insights, industry thought leadership, and a vision of what’s next in the world of secure AI.
Bring your laptop and join us for interactive, hands-on demos on “Build and Secure with AI.” You'll leave with skills you can immediately apply.
Cutting-edge talks exploring the evolving security challenges of the AI era. Discover how to safeguard AI-driven applications, gain visibility into models, and secure agents across the SDLC.
Experience the latest advancements from Snyk in this dynamic track featuring live product demos, major announcements, and customer success stories.
Explore the full schedule of sessions, listed in Eastern Time (ET).
Peter McKay
CEO - Snyk
AI is redefining what it means to be a developer. Join Peter McKay, CEO at Snyk, for an inspirational opening keynote where he will dive into how AI is reshaping the way software is designed, built, and maintained, highlighting the exciting opportunities it creates, as well as the critical risks teams must navigate.
Guy Podjarny
Founder & CEO - Tessl
Danny Allan
Chief Technology Officer - Snyk
Join Guy Podjarny and Danny Allan for a fireside chat exploring the future of software development. Together, they will share insights into how the industry is moving from a code-centric approach to a spec-centric paradigm, and explore the emerging new balance between Agent Experience and Developer Experience —and what this shift means for developers, innovation, and security.
Zach Proser
Developers Education - WorkOS
Software delivery is undergoing a profound transformation. Where once teams relied solely on human effort and traditional automation, today AI is stepping in as a true partner in the development lifecycle. From accelerating prototyping to surfacing hidden insights, from generating secure infrastructure to enabling entirely new modes of interaction, AI is reshaping how software is built, tested, and shipped. In this keynote, Zach Proser explores what it means to work alongside AI as a co-engineer. He’ll highlight emerging practices that help teams harness AI responsibly and effectively, while avoiding common pitfalls.
Manoj Nair
Chief Innovation Officer - Snyk
Danny Allan
Chief Technology Officer - Snyk
Join Danny Allan, Chief Technology Officer, and Manoj Nair, Chief Innovation Officer at Snyk, to learn how Snyk helps secure development at AI speed and streamline AppSec governance.
Stay tuned for inspirational thought leadership and bold insights into the future of AI.
W. Ian Douglas
Staff Developer Advocate - Block
What happens when you ask your community for submissions, and you want to make sure nothing malicious gets into your code base? We solved this by having our AI system build its own security scanner for a community-driven project of "automation recipe" submissions. We created a fully automated, containerized security pipeline that analyzes GitHub pull requests in isolation using headless AI analysis and threat detection algorithms. Learn how to build your own blueprint for implementing AI-powered security automation in CI/CD pipelines. It transformed our manual review bottleneck into an automated trust-building machine that processes submissions in minutes instead of days, and it can help you, too.
Harshad Kadam
Sr Infrastructure Security Engineer - Indeed Inc
This session is for defenders, detection engineers, and curious red teamers exploring how Zero Trust meets deception engineering in the age of AI orchestration. We’ll break down how we built “MCP Threat Trap,” a honeypot that: simulates sensitive internal tools over the MCP protocol, with realistic delays, secure error handling, and SSE streams that mimic enterprise APIs, Silently triggers advanced Canarytokens, capturing rich metadata, runs entirely on Cloudflare’s global edge via Workers, with no EC2, patching, or infrastructure to manage-making it stealthy and instantly scalable and turns random scans into actionable intelligence, feeding Zero Trust policies and arming your incident team with context-rich alerts.
Bob Remeika
CEO - Ragie.ai
Join Bob Remeika, CEO at Raggie, to learn practical steps for designing and implementing a secure MCP authentication process that you can apply directly to your real-world applications.
Nate Barbettini
Founding Engineer - Arcade.dev
Building Model Context Protocol servers is a powerful way to extend the capabilities of LLMs, but what happens when you go beyond "works on my machine"? Lurking behind every MCP demo is a complex set of security and authorization questions. In this talk, Nate will walk through how the latest evolution of the MCP spec makes fine-grained, multi-user auth possible (which he promises is more fun than it sounds). Along the way, he'll explain the best practices for securing MCP servers in production.
John McBride
Staff Engineer - Zuplo
In 2025, the fastest-growing user base is AI Agents. They autonomously interact with your system to extract data and perform operations. For some companies, this is a threat that needs to be controlled. For others, it’s an opportunity to allow customers to interact with your systems in a novel way.
In either case, you need to govern how agents interact with your platform. APIs will determine what resources AI has access to, how it can access that data, and what it can do with it. Your APIs and associated harnesses need to be understandable by agents, have enough features so they can accomplish their tasks, and be robust enough to handle automated traffic at scale.
Rene Brandel
Cofounder & CEO - Casco (YC X25)
We hacked 7 of the 16 publicly-accessible YC X25 AI agents. This allowed us to leak user data, execute code remotely, and take over databases. All within 30 minutes each. In this session, we'll walk through the common mistakes these companies made and how you can mitigate these security concerns before your agents put your business at risk.
Nnenna Ndukwe
Principal Developer Advocate - Qodo AI
It’s not enough to ask if your LLM app is working in production. You need to understand how it fails in a battle-tested environment. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. You’ll learn how to build an AI-specific adversarial testing playbook, simulate misuse scenarios, and embed red teaming into your SDLC. LLMs are unpredictable, but they can be systematically evaluated. We'll explore how to make AI apps testable, repeatable, and secure by design.
Aamiruddin Syed
Supply Chain Software Security - AGCO Corp.
AI systems are rapidly shifting from static models to swarms of autonomous agents with their own identities, decisions, and access rights. Traditional identity and access management (IAM) systems built for humans and static service accounts can’t keep up. This talk will explore how agentic AI reshapes the identity landscape, where every agent may need its own verifiable, auditable, and revocable identity. Drawing from OWASP’s Top 10 Non-Human Identity Risks and the CSA’s agentic AI IAM framework, we’ll dive into new trust models, real-world attack vectors, and actionable strategies to keep autonomous systems accountable and secure.
Jeff Watkins
Chief Technology Officer - CreateFuture
Generative AI offers incredible opportunities but comes with significant cybersecurity challenges. As adoption accelerates, so do the risks—data theft, model manipulation, poisoned training data, operational disruptions, and supply chain vulnerabilities. This talk introduces the "STOIC" framework—Stolen, Tricked, Obstructed, Infected, Compromised—to help you identify and mitigate these threats. You'll gain valuable takeaways around: understanding your gen AI risks, hardening your systems, securing the supply chain, governing with clarity, and staying Agile. Generative AI is transformative but requires proactive, layered defences to avoid becoming a liability. With the right strategy, it can be a safe and game-changing tool for your organization.
Brett Smith
Distinguished Software Engineer - SAS
Can the MCP server help protect the electric sheep from rogue agents and bad actors, or are they just another way to attack them? This talk explores the new attack surface created by MCP servers and agentic AI, focusing on potential vulnerabilities and mitigation strategies. We will discuss how agentic AI can enhance the SDLC while also addressing the security risks it introduces. Learn about the role of MCP servers in managing these risks and provide strategies for securing them against potential attacks. Get answers to the following questions:
What does agentic AI in the SDLC look like?
What security risk does agentic AI bring to the SDLC?
How can MCP servers help with supply chain security?
What are the risks of using MCP servers?
What are strategies to mitigate attacks on MCP servers?
Experience the latest advancements from Snyk in this dynamic track featuring live product demos, major announcements, and customer success stories. This program will be revealed on the day of the event
Something big is brewing at DevSecCon25… our very first AI Security Developers Challenge!
Think fast-paced developer challenge, creative problem-solving, and a chance to team up with brilliant minds from around the globe. We’re keeping the details under wraps for now, but trust us—you won’t want to miss this. Register today to stay in the loop and snag your spot before it’s gone!
Reserve your front-row seat today for a groundbreaking event exploring the frontier of building AI trust. Explore essential strategies and best practices to secure your shift to AI-native!
Connect with visionary experts from the AI world across industries and around the world.
Discover the latest research on AI security, participate in workshops, and learn hands-on about the latest security advancements and methodologies that you can use in your projects.
Experience Snyk in action and hear about the latest innovations to the Snyk AI Trust Platform.
Featured at DevSecCon’24
DevSecCon 2024 was packed with actionable insights and unforgettable sessions with industry leaders. Check out the highlights from last year’s virtual event and in-person roadshow to get inspired for this year. Stay tuned for more details on this year’s agenda!