Skip to main content

From Ideas to Impact: How the Bay Area Is Shaping the Future of Secure AI

Escrito por

6 de agosto de 2025

0 minutos de lectura

Generative AI is reshaping how software is made, secured, and scaled. At Snyk’s Lighthouse event in Silicon Valley, leaders from engineering, security, and platform teams gathered to explore one big question: How do we build AI-powered systems that move fast, without breaking trust?

For many, that future is already here — 60% of organizations at the Summit reported building agentic apps internally.

The answers weren’t just technical. They were cultural. Organizational. Strategic.

From agentic apps to real-time risk modeling, a few themes held strong across the day’s workshops, panels, and demos: visibility is power. Guardrails aren’t roadblocks. And trust isn’t a future phase. It’s the foundation.

The new Dev loop: Think → Build → Secure → Repeat

Speed and security aren’t at odds — they’re interdependent. And AI has shifted how the two interact. The traditional SDLC model doesn’t fit an environment where agents continuously generate, reason, and act.

Lighthouse sessions explored how development workflows must evolve for agentic AI. Think fewer static approvals and more dynamic, contextual feedback. Think real-time remediation, AI-literate security champions, and devs who don’t just build, but secure, their own AI assistants.

A standout moment came from a live demo of Cursor and Snyk. Without writing a line of code, Snyk's Senior Director of Developer Relations — Randall Degges — walked through a real-world workflow that scanned, tested, fixed, and shipped code — all while Snyk’s MCP server validated vulnerabilities, flagged risky patterns, and pushed safe updates through CI/CD pipelines. 

The impact? Security fixes that would’ve taken hours were reduced to minutes, without compromising quality or control.

Secure by design means secure by everyone

Shared accountability was a central theme of the day. AI is dissolving the boundaries between dev, platform, and security teams — meaning everyone’s in the loop now.

One speaker put it plainly: “If someone pastes a prompt into ChatGPT to build a workflow, they’re an app developer. And every developer is now a potential AI engineer.”

This changes how we define readiness.

Snyk’s AI Readiness Framework, introduced during the event, emphasized five pillars: visibility, ownership, secure design, cultural enablement, and continuous assurance. Attendees walked through real-world use cases to assess maturity in each area, mapping gaps and opportunities.

Key takeaways:

  • AI security isn’t a specialty — it’s a team sport. The best AI security engineers are often former app devs who understand the full context.

  • Guardrails, not gates. The goal isn’t to slow AI adoption — it’s to make it safer and faster.

  • Governance matters. From shadow agents to auto-enabled SaaS features, security teams must be at the table from the start.

One attendee summed it up perfectly: “Our biggest security risk isn’t GenAI. It’s treating GenAI like traditional software.”

AI Readiness Cheat Sheet

Build trust in AI

Get a practical, structured guide to help your team embrace AI without introducing unmitigated risk.

AI Trust means going beyond compliance

While frameworks like NIST’s AI RMF provide a high-level view, the Bay Area crowd wanted more than theory. They were looking for practical, proactive strategies.

The conversation moved quickly into topics like:

  • MCP discovery and validation: What models are running where? Who trained them? What data was used? Are they leaking sensitive info or behaving unpredictably in prod?

  • Red-teaming agent behavior: Beyond prompt injection and output fuzzing, how do we simulate adversarial interactions in real time?

  • Security inception: A new frontier for shifting left — integrating secure context at the very point of generation, not post-deployment.

What stood out wasn’t just the innovation, but the openness. Candid stories about mistakes, near misses, and unexpected gaps in visibility highlighted just how fast this space is evolving. And why building AI security must happen in parallel with building AI itself.

Training humans to secure the machines

If AI is everyone’s responsibility, how do we enable every role to act like it?

That question led to one of the day’s liveliest discussions: the future of AI security training and certification.

Several companies shared how they’re evolving their security champions programs to include AI-specific workflows and responsibilities. Others discussed forming AI governance councils with reps from security, legal, compliance, and even marketing.

A consistent theme? Culture is the multiplier. Upskilling teams only works if the organization embraces it. As one leader shared, “Security awareness can’t just be policy enforcement. It has to be choice engineering — how we nudge people to make better decisions, every day.”

What's now vs. what's next

The Bay Area event didn’t talk about future hype. It dealt with present-day transformation.

Yes, attendees saw a sneak peek of upcoming Snyk features. But the most valuable content came from peer discussions, hands-on demos, and the realization that:

  • Agentic systems are already in production. Whether internal or external, businesses are betting on AI to drive NPS, automate operations, and personalize services at scale.

  • Security can’t keep playing catch-up. From visibility gaps in SaaS integrations to unknown agents in the stack, the attack surface is exploding.

  • Innovation needs guardrails now. As one attendee from the medical tech industry put it, “You can’t wait until after something ships to ask about hallucinations. You need traceability and controls baked in from the start.”

people sitting at conference, listening to a speaker presenting closing remarks

The final word: AI is the future of software. Security must scale accordingly.

In the closing remarks, Snyk’s Chief Innovation Officer Manoj Nair brought it all together.

Security isn’t about saying “no” to AI. It’s about accelerating adoption responsibly — enabling developers, empowering platform teams, and building trust at every step of the AI SDLC.

“We saw it today,” he said. “Real AI systems coding, testing, and fixing vulnerabilities on their own. But none of it works without visibility, culture, and smart guardrails.”

The future isn’t DevSecOps 2.0. It’s something bigger. It’s developer-powered, AI-augmented, and trust-focused. And Snyk is building the platform to make it real.

October 22, 2025

DevSecCon2025 - The AI Security Summit

Secure your spot and secure the shift to AI native

Best practices for AI in the SDLC

Download this cheat sheet today to learn best practices for how to leverage AI in your SDLC, securely.