
Zack Kass
Global AI Advisor and Former Head of Go-To-Market, OpenAI
Software development is undergoing a seismic shift as AI transforms how we build, deploy, and secure applications. Register for the 1st-ever Global Community Summit on AI Security, covering critical strategies to empower AI innovation without compromising security.
Join us at DevSecCon25 for a global celebration of innovation in AI Security with inspiring keynotes, hands-on demos, and groundbreaking community-led research. If you’re heads down wrangling MCP servers, taming wild agents in production, and spotting people vibe-coding everywhere—and all of it has you worried about security— this conference is for you!
Leading experts in AI and security from around the world. See the full speakers list below.
Global AI Advisor and Former Head of Go-To-Market, OpenAI
Developers Education, WorkOS
Principal Developer Advocate, Qodo
Founder and CEO, Tessl
This isn’t your usual conference. Get ready for inspiring keynotes, hands-on sessions that actually stick, and a playground of fun — LIVE MUSIC and DJ sets, dev challenges, and games — all served with a generous side of developer frenzy. Check out our talk tracks and the full agenda below.
Join us on the main stage for inspiring keynotes from leaders in AI and cybersecurity. Expect forward-looking insights, industry thought leadership, and a vision of what’s next in the world of secure AI.
Bring your laptop and join us for interactive, hands-on demos on “Build and Secure with AI.” You'll leave with skills you can immediately apply.
Cutting-edge talks exploring the evolving security challenges of the AI era. Discover how to safeguard AI-driven applications, gain visibility into models, and secure agents across the SDLC.
Experience the latest advancements from Snyk in this dynamic track featuring live product demos, major announcements, and customer success stories.
Explore the full schedule of sessions, listed in Eastern Time (ET)— with must-see talks on the mainstage and in the breakouts. - Please note: The schedule may undergo slight adjustments based on session lengths. We recommend logging in early for any sessions you plan to attend.
Peter McKay
CEO - Snyk
AI is redefining what it means to be a developer. Join Peter McKay, CEO at Snyk, for an inspirational opening keynote where he will dive into how AI is reshaping the way software is designed, built, and maintained, highlighting the exciting opportunities it creates, as well as the critical risks teams must navigate.
Guy Podjarny
Founder & CEO - Tessl
Danny Allan
Chief Technology Officer - Snyk
Join Guy Podjarny and Danny Allan for a fireside chat exploring the future of software development. Together, they will share insights into how the industry is moving from a code-centric approach to a spec-centric paradigm, and explore the emerging new balance between Agent Experience and Developer Experience —and what this shift means for developers, innovation, and security.
Zach Proser
Developers Education - WorkOS
The developer of tomorrow won’t be chained to their desk. With AI-native tools, the boundaries of when and where we can create software are dissolving.
In this keynote, Zack Proser shares how his own daily workflow shows the shift. Mornings and afternoons begin at a MacBook Pro, where Wispr Flow converts his voice into code at 179 words per minute as he collaborates with Cursor Agents to build features.
However, the most surprising progress occurs midday, during a long walk through the woods. Using OpenAI’s advanced voice mode on his phone, Zack works in the peripatetic tradition — reasoning out architecture and clarifying features aloud, as oxygen and endorphins fuel creativity, while an intelligent agent captures and carries the context forward.
The result is a workflow where a significant portion of productive coding time happens away from the desk, yet remains secure and trustworthy. Agent sandboxes and CI/CD scans limit blast radius, pull requests are validated automatically, and even mobile voice calls are authenticated through trusted identity. Personal context from devices like the Oura Ring can further enrich AI systems, provided data flows are designed with strong guardrails.
This talk combines live examples with footage from the trail, providing both a glimpse into the near future of untethered AI-augmented development and a reflection on balancing freedom and security in our workflows.
Manoj Nair
Chief Innovation Officer - Snyk
Danny Allan
Chief Technology Officer - Snyk
Join Danny Allan, Chief Technology Officer, and Manoj Nair, Chief Innovation Officer at Snyk, to learn how Snyk helps secure development at AI speed and streamline AppSec governance.
Throughout history, we’ve embraced innovation when it clearly made life better—think fire, electricity, antibiotics, the internet. But today, this phenomenon has shifted. As AI races forward, the biggest barrier is no longer what technology can do, but what society is willing to accept. In this provocative keynote, Zack Kass explores the growing gap between technological possibility and societal readiness. Drawing from his experience at the forefront of AI’s evolution, Zack unpacks why cultural, ethical, and institutional resistance—not technical limitations—will define the pace of progress. Audiences will come away with a powerful framework for navigating resistance, rethinking innovation, and leading through one of the most pivotal moments in human history.
W. Ian Douglas
Staff Developer Advocate - Block
What happens when you ask your community for submissions, and you want to make sure nothing malicious gets into your code base? We solved this by having our AI system build its own security scanner for a community-driven project of "automation recipe" submissions. We created a fully automated, containerized security pipeline that analyzes GitHub pull requests in isolation using headless AI analysis and threat detection algorithms. Learn how to build your own blueprint for implementing AI-powered security automation in CI/CD pipelines. It transformed our manual review bottleneck into an automated trust-building machine that processes submissions in minutes instead of days, and it can help you, too.
Harshad Kadam
Sr Infrastructure Security Engineer - Indeed Inc
This session is for defenders, detection engineers, and curious red teamers exploring how Zero Trust meets deception engineering in the age of AI orchestration. We’ll break down how we built “MCP Threat Trap,” a honeypot that: simulates sensitive internal tools over the MCP protocol, with realistic delays, secure error handling, and SSE streams that mimic enterprise APIs, Silently triggers advanced Canarytokens, capturing rich metadata, runs entirely on Cloudflare’s global edge via Workers, with no EC2, patching, or infrastructure to manage-making it stealthy and instantly scalable and turns random scans into actionable intelligence, feeding Zero Trust policies and arming your incident team with context-rich alerts.
Stone Werner
Software Engineer - Ragie.ai
The Model Context Protocol (MCP) is quickly emerging as the open standard for connecting AI systems to external tools and data sources. Yet for most developers, getting an MCP server from prototype to production still feels like navigating uncharted territory. Authentication inconsistencies, fragmented SDKs, and unpredictable client behavior make even basic connectivity a challenge—while static, generic tool definitions limit what LLMs can actually do once connected.
In this talk, Stone walks through the real-world pitfalls of building and deploying multi-tenant MCP servers in production—and how Ragie has solved them. Topics include:
Surviving the 0→1 phase: overcoming OAuth headaches and client-specific quirks to achieve stable connectivity.
Taming the Wild West: understanding SDK fragmentation and designing for a moving target.
Making tools useful: using dynamic, context-aware descriptions (via Ragie’s open source Dynamic FastMCP) to help LLMs choose the right tools confidently.
Improving UX and security: designing an end-user experience that inspires trust and hardening against emerging threats like tool poisoning and prompt injection.
Attendees will leave with an understanding of what it truly takes to make MCP servers production-ready in multi-tenant environments—where stability, adaptability, and user trust matter just as much as protocol compliance. This is a technical deep dive for builders who want to go beyond demos and deliver real, reliable MCP infrastructure.
Nate Barbettini
Founding Engineer - Arcade.dev
Building Model Context Protocol servers is a powerful way to extend the capabilities of LLMs, but what happens when you go beyond "works on my machine"? Lurking behind every MCP demo is a complex set of security and authorization questions. In this talk, Nate will walk through how the latest evolution of the MCP spec makes fine-grained, multi-user auth possible (which he promises is more fun than it sounds). Along the way, he'll explain the best practices for securing MCP servers in production.
John McBride
Staff Engineer - Zuplo
In 2025, the fastest-growing user base is AI Agents. They autonomously interact with your system to extract data and perform operations. For some companies, this is a threat that needs to be controlled. For others, it’s an opportunity to allow customers to interact with your systems in a novel way.
In either case, you need to govern how agents interact with your platform. APIs will determine what resources AI has access to, how it can access that data, and what it can do with it. Your APIs and associated harnesses need to be understandable by agents, have enough features so they can accomplish their tasks, and be robust enough to handle automated traffic at scale.
Rene Brandel
Cofounder & CEO - Casco (YC X25)
We hacked 7 of the 16 publicly-accessible YC X25 AI agents. This allowed us to leak user data, execute code remotely, and take over databases. All within 30 minutes each. In this session, we'll walk through the common mistakes these companies made and how you can mitigate these security concerns before your agents put your business at risk.
Nnenna Ndukwe
Principal Developer Advocate - Qodo AI
It’s not enough to ask if your LLM app is working in production. You need to understand how it fails in a battle-tested environment. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. You’ll learn how to build an AI-specific adversarial testing playbook, simulate misuse scenarios, and embed red teaming into your SDLC. LLMs are unpredictable, but they can be systematically evaluated. We'll explore how to make AI apps testable, repeatable, and secure by design.
Aamiruddin Syed
Supply Chain Software Security - AGCO Corp.
AI systems are rapidly shifting from static models to swarms of autonomous agents with their own identities, decisions, and access rights. Traditional identity and access management (IAM) systems built for humans and static service accounts can’t keep up. This talk will explore how agentic AI reshapes the identity landscape, where every agent may need its own verifiable, auditable, and revocable identity. Drawing from OWASP’s Top 10 Non-Human Identity Risks and the CSA’s agentic AI IAM framework, we’ll dive into new trust models, real-world attack vectors, and actionable strategies to keep autonomous systems accountable and secure.
Jeff Watkins
Chief Technology Officer - CreateFuture
Generative AI offers incredible opportunities but comes with significant cybersecurity challenges. As adoption accelerates, so do the risks—data theft, model manipulation, poisoned training data, operational disruptions, and supply chain vulnerabilities. This talk introduces the "STOIC" framework—Stolen, Tricked, Obstructed, Infected, Compromised—to help you identify and mitigate these threats. You'll gain valuable takeaways around: understanding your gen AI risks, hardening your systems, securing the supply chain, governing with clarity, and staying Agile. Generative AI is transformative but requires proactive, layered defences to avoid becoming a liability. With the right strategy, it can be a safe and game-changing tool for your organization.
Brett Smith
Distinguished Software Engineer - SAS
Can the MCP server help protect the electric sheep from rogue agents and bad actors, or are they just another way to attack them? This talk explores the new attack surface created by MCP servers and agentic AI, focusing on potential vulnerabilities and mitigation strategies. We will discuss how agentic AI can enhance the SDLC while also addressing the security risks it introduces. Learn about the role of MCP servers in managing these risks and provide strategies for securing them against potential attacks. Get answers to the following questions:
What does agentic AI in the SDLC look like?
What security risk does agentic AI bring to the SDLC?
How can MCP servers help with supply chain security?
What are the risks of using MCP servers?
What are strategies to mitigate attacks on MCP servers?
Jeff Andersen
Sr Director, Product Management - Snyk
Brendan Hann
Sr Product Marketing Manager - Snyk
Shifting left is more critical than ever. Join this session to learn how Snyk empowers developers to deliver quickly without sacrificing security or disrupting their workflow.
Itay Maor
Sr Manager Product Management - Snyk
Kate Powers
Sr Product Marketing Manager - Snyk
To prepare for the AI evolution, AppSec teams need their own evolution. Join this session to learn how the Snyk AI Trust Platform empowers AppSec teams with broad visibility and control, elevating AppSec from task managers to strategists.
Ryan McMorrow
Staff Product Manager - Snyk
Brendan Hann
Sr Product Marketing Manager - Snyk
This talk will explore open source and container remediation strategies as well as build a breakability score/trust frameworks to automate the remediation of open source issues that have a low risk of breakage.
Ezra Tanzer
Director Product Management - Snyk
Daniel Berman
Product Marketing Director - Snyk
Brendan Putek
Director of DevOps - Relay Network
AI code assistants are revolutionizing software creation, leading to an explosive increase in the volume and velocity of code written. This new paradigm introduces unprecedented opportunities for innovation but also creates a massive, hidden attack surface. With AI models often learning from flawed public data, how do you ensure the code they generate is secure without slowing your developers down? While traditional security gates remain important, relying on them to catch this new wave of AI-generated issues creates costly rework and consumes developer bandwidth. The real opportunity is to prevent these vulnerabilities from being created in the first place.
Snyk introduced the principle of "Secure at Inception" at Blackhat USA - a new approach that moves beyond reactive scanning to proactively embed security directly into the AI-driven workflow. Today, we are sharing how companies can deploy this capability at scale. We will demonstrate how developers can be guided to generate secure code from the very start, effectively taming the risks of the AI code revolution. Through a live demo, you'll see how Snyk provides real-time, frictionless security testing and automated fixes directly within an AI code assistant.
Join us to learn how to confidently embrace AI-driven development at enterprise scale. You'll leave with a clear understanding of how to secure this new SDLC, and you'll get a sneak peek into the future of autonomous security with a preview of Snyk's forthcoming remediation "Agent."
Rudy Lai
Director, Technology Incubation - Snyk
Securing the next generation of AI requires a new security paradigm. As agentic AI and AI-native applications grow in use, they expand the attack surface and introduce a host of novel risks, from prompt injection and toxic flows to the unpredictability of non-deterministic behavior. Traditional rules-based security tools, designed for determinism and parameterization, simply won’t suffice. To enable secure AI application innovation, we need a new breed of security teams, AI Security Engineers who leverage their own agentic security orchestration systems. This session will explore the core tasks these systems need to cover, and the opportunity presented by orchestration to see, understand, and address complex threats against AI-native apps.
Something big is brewing at DevSecCon25… our very first AI Security Developers Challenge!
Think fast-paced developer challenge, creative problem-solving, and a chance to team up with brilliant minds from around the globe. We’re keeping the details under wraps for now, but trust us—you won’t want to miss this. Register today to stay in the loop and snag your spot before it’s gone!
If you haven’t registered yet, this all-star lineup of speakers will do the convincing for you. From industry thought leaders to hands-on experts, they’re ready to inspire, challenge, and maybe even make you laugh along the way. Don’t miss your chance to learn from the best — secure your spot today!
Stone Werner
Software Engineer, Ragie.ai
Zack Kass
Global AI Advisor and Former Head of Go-To-Market, OpenAI
Zack Proser
Developers Education, WorkOS
Nnenna Ndukwe
Principal Developer Advocate, Qodo
Guy Podjarny
Founder and CEO, Tessl
John McBride
Staff Engineer, Zuplo
Nate Barbettini
Founding Engineer, Arcade.dev
W. Ian Douglas
Staff Developer Advocate, Block
Rene Brandel
Cofounder & CEO, Casco (YC X25)
Brett Smith
Distinguished Software Engineer, SAS
Jeff Watkins
Chief Technology Officer, CreateFuture
Harshad Sadashiv Kadam
Senior Infrastructure Security Engineer, Indeed Inc
Peter McKay
CEO, Snyk
Manoj Nair
Chief Innovation Officer, Snyk
Danny Allan
Chief Technology Officer, Snyk
Aamiruddin Syed
Supply Chain Software Security, AGCO Corporation
Brendan Putek
Director of DevOps, Relay Network
Jeff Andersen
Senior Director, Product Management, Snyk
Brendan Hann
Senior Product Marketing Manager, Snyk
Itay Maor
Senior Manager, Product, Snyk
Kate Powers
Senior Product Marketing Manager, Snyk
Ryan McMorrow
Staff Product Manager, Snyk
Ezra Tanzer
Director, Product Management, Snyk
Daniel Berman
Product Marketing Director, Snyk
Rudy Lai
Director, Technology Incubation • Emerging Technologies & Solutions, Snyk
Reserve your front-row seat today for a groundbreaking event exploring the frontier of building AI trust. Explore essential strategies and best practices to secure your shift to AI-native!
DevSecCon 2024 was packed with actionable insights and unforgettable sessions with industry leaders. Check out the highlights from last year’s virtual event and in-person roadshow to get inspired for this year. Stay tuned for more details on this year’s agenda!