Skip to main content
devseccon-logo-1

DevSecCon - AI Security Track

Watch the best of DevSecCon ‘24

 DevSecCon 2024 virtual summit was packed with DevSecOps lessons and hands-on experiences from industry trailblazers. Dive into the sessions from the Snyk track below and join the global DevSecCon community to help shape the future of secure development.

Developing AI trust

Explore the critical intersection of AI and application security. Watch the recordings now.

AI in the Wild

Securing AI systems in real-world deployments

Discuss the unique challenges of deploying AI systems in real-world environments, including ensuring reliable performance and defending against AI-specific threats. As AI becomes more complex and autonomous, its unpredictability grows, making security a critical concern. Drawing on insights from our experience, including operating the world’s largest AI Red Team (Gandalf), explore the evolving threat landscape and the security implications of AI in production, covering issues such as prompt injection attacks, data loss, and the risks posed by the democratization of AI capabilities.

Secure the AI

Protect the electric sheep

Explore the new security challenges AI introduces across the software supply chain, SDLC, and for developers and architects. We will examine key attack vectors in the supply chain and map them to the OWASP Top 10 for Large Language Models while assessing their impact on CI/CD pipelines. Viewers will gain insights into the vulnerabilities posed by AI in software development and learn actionable strategies to mitigate these risks, protect their pipelines, and safeguard both software and customers from evolving security threats.

Integrating AI Safety

Automating AI failure mode testing in devSecOps pipelines

Explore the motivation for integrating AI into services for productivity and cost reduction. Address the security and trust challenges that arise from embedding AI in applications. Traditional testing often falls short. Discuss how targeted strategies and automated processes for testing AI models can mitigate these risks. Demonstrate how automating the testing of AI failure modes can ensure secure and reliable AI-enhanced applications, allowing organizations to seamlessly integrate AI without exposing their systems to vulnerabilities.

Check out all the session tracks from DevSecCon 2024

default-video

On-Demand

DevSecCon 2024 Main stage

View Now
default-video

On-Demand

Open source security track

View Now
default-video

On-Demand

Security culture and education track

View Now

Additional resources

blog-feature-ai-pink
Blog

Foundations of trust: Securing the future of AI-generated code

Learn about Snyk's incoming GenAI Partner Program and how it secures the code produced by AI coding assistants, ensuring developers can code faster and more securely.

Feature_-_SnykLaunch_1
Blog

SnykLaunch Oct 2024: Enhanced PR experience, extended visibility, AI-powered security, holistic risk management

Read a recap of our SnykLaunch event for October 2024, covering our new features that power a developer-first, risk-centric security experience.

wordpress-sync/blog-feature-toolkit
Blog

Going beyond reachability to prioritize what matters most

While static reachability can help teams better understand their app vulnerabilities, they must be paired with other types of context and risk insights.