Articles

Stay informed on security insights and best practices from Snyk’s leading experts.

Showing 41 - 60 of 360 articles

From Gatekeeper to Guardrail: Embracing the Role of Governance for the AI Era

AI code assistants demand a new AppSec governance model. Shift from late-stage "gatekeepers" to real-time "guardrails" with Policy-as-Code and developer-first security. Learn how to secure AI-generated code from inception.

Gemini Nano Banana Cheat Sheet for JavaScript Developers

Explore this cheat sheet for JavaScript/TypeScript developers on integrating Google's Gemini Nano Banana model. Master the AI SDK, prompt engineering, image generation, Data URL conversion, and security best practices with Snyk Studio.

Understanding Toxic Flows in MCP and the Hidden Risk of AI-Native Systems

A deep dive into toxic flows in MCP and how AI agents can unintentionally create attack paths across tools, data, and systems.

The Dissemination of the Term Vibe Coding

Vibe coding accelerates development but introduces security risks. Learn how Andrej Karpathy's viral term describes AI-driven, minimal-oversight coding, and why it leads to XSS, SQL injection, and data leaks. Explore the security implications and best practices.

NPM Security Best Practices: How to Protect Your Packages After the 2025 Shai-Hulud Attack

Harden your npm environment against supply chain attacks like Shai-Hulud. Learn 12 essential best practices for developers and maintainers, covering post-install scripts, 2FA, provenance, and deterministic installs.

What is ASPM? (Application Security Posture Management)

An overview of application security posture management (ASPM): learn how to strengthen app security with holistic visibility, automation, and robust security measures.

Why AI-Native Apps Break Traditional AppSec Models

AI-native apps break traditional AppSec. Learn why dynamic models, agents, and model-layer risks demand a modern, AI-aligned security approach.

Fixing Detected Vulnerabilities with Factory AI and Snyk Studio

Fix vulnerabilities more efficiently with Factory AI Droids and Snyk MCP: Learn how integrating specialized AI agents automates security remediation and DevSecOps.

Detecting & Patching Vulnerabilities with Continue and Snyk MCP

Integrate security directly into your AI coding workflow. Learn how to use the open-source Continue AI coding assistant with Snyk Studio's Model Context Protocol (MCP) to automatically detect, fix, and verify high-severity vulnerabilities like SQL Injection, all without leaving your IDE.

6 Key Components of a Robust AI Compliance Strategy

Ensure safe AI adoption and development with a robust AI compliance strategy. Explore the key components and how to prepare for evolving regulations here.

AI Threat Hunting: Transforming Cybersecurity Through Intelligent Automation

Discover how AI-driven threat hunting transforms cybersecurity by detecting hidden threats, automating analysis, and strengthening defense strategies against evolving cyberattacks.

Personalization in Vibe Coding

The rise of vibe coding and personalized AI agents is transforming development, but the 'Vibe Coding Hangover' introduces critical security and maintainability risks. Discover how to move from unreviewed, risky code to responsible, secure AI-assisted development.

What Is Threat Modeling and Why Is It Essential for DevSecOps?

Learn how continuous threat modeling strengthens DevSecOps by identifying, prioritizing, and mitigating risks across evolving code, data, and pipelines.

Defending Against Glassworm: The Invisible Malware That's Rewriting Supply Chain Security

Defend against Glassworm, the invisible malware rewriting supply chain security. Learn how anti-trojan-source detects and prevents these Unicode attacks, protecting your VS Code extensions and credentials.

What Users Want When Vibe Coding

Vibe coding promises speed but delivers tech debt & security risks. Developers need guardrails, not just velocity, to avoid production disasters, cost explosions, and skill erosion. Learn what users truly want for AI-assisted coding.

AI in Ethical Hacking: Revolutionizing Cybersecurity Testing

AI in ethical hacking revolutionizes cybersecurity testing. Discover how AI transforms vulnerability assessment, penetration testing, and threat intelligence with cutting-edge tools and methodologies.

Evals for LLMs: Understanding Evaluation Systems for AI Models

Learn how Eval frameworks act like pen-tests for LLMs—helping cybersecurity teams assess resilience to adversarial attacks, ensure accuracy, manage risks, and integrate security into the AI lifecycle.

The Highs and Lows of Vibe Coding

"Vibe coding" with AI builds billion-dollar startups fast, but it also creates massive security risks. With 40% of AI code vulnerable and major data leaks emerging, explore the highs and lows of this trend and the path to securing it.

The Frictionless Developer Security Experience: Securing at the Speed of AI

Traditional security creates friction, slowing developers down. Learn how a frictionless approach embeds fast, AI-powered security and automated fixes into the dev workflow. Empower your teams to build securely without sacrificing development velocity.