Weaponizing AI Coding Agents for Malware in the Nx Malicious Package Security Incident
August 27, 2025
0 mins readOn August 26–27, 2025 (UTC), eight malicious Nx and Nx Powerpack releases were pushed to npm across two version lines and were live for ~5 hours 20 minutes before removal. The attack also impacts the Nx Console VS Code extension.
September 1 update: The root cause for the malicious version of Nx published to npm is now known to have been a flawed GitHub Actions CI workflow contributed via a Pull Request on August 21. The code contribution is estimated to have been generated by Claude Code. A follow-up malicious commit on August 24th modified the CI workflow so that the npm token used for publishing the set of Nx packages will be sent to an attacker-controlled server via webhook.

Going beyond traditional techniques, the payload weaponized local AI coding agents (claude, gemini, and q) via a dangerous prompt to inventory sensitive files and then exfiltrate secrets, credentials, and sensitive data off of the host and on to a public GitHub repo named s1ngularity-repository-NNNN with a numeric suffix. We believe this is likely one of the first documented cases of malware leveraging AI assistant CLIs for reconnaissance and data exfiltration.
Nx maintainers published an official security advisory, which Snyk is tracking via the following advisories:
The working theory is that a compromised npm token with publish rights was used to distribute the malicious packages. All compromised versions are now effectively removed from the npm registry.
If you installed the affected versions, rotate credentials immediately, check GitHub for s1ngularity-repository-*, and follow the cleanup steps below.
What is Nx?
Nx is a popular build system and monorepo tool widely used across JavaScript and TypeScript projects, with millions of weekly downloads. Nx’s popularity magnifies the blast radius of incidents like this in open source supply chain ecosystems such as npm.
Malware weaponizes AI coding agents to exfiltrate data
This incident broke new ground in malicious package attacks on npm: the postinstall malware tried multiple AI CLI tools locally, including Claude’s Claude Code, Google’s Gemini CLI, and Amazon’s new q command-line coding agent, and invoked them with unsafe flags to bypass guardrails and scan the filesystem for sensitive paths, writing results into /tmp/inventory.txt (and a backup).
Examples observed: executing AI coding agents with flags such as --dangerously-skip-permissions (Claude Code), --yolo (Gemini CLI), and --trust-all-tools (Amazon q).
The embedded prompt instructed the agent to recursively enumerate wallet artifacts, SSH keys, .env files, and other high-value targets while respecting a depth limit and creating /tmp/inventory.txt(.bak).
The prompt provided to the AI coding agents is as follows:
The malware also includes a minimal prompt variant designed purely to inventory plaintext file paths (no contents), further confirming the agent-assisted reconnaissance design.
Why the Nx malicious package attack matters: turning “helpful” AI agents into automated recon tools is a sharp escalation in open source supply chain attacks and likely one of the first publicly documented instances of AI-assistant CLIs being coerced this way.
Breakdown of the AI Agents Malware
The postinstall script telemetry.js imports child processing capabilities, sets the prompt, and prepares the data collection:
It then continues to perform cross-platform checks to ensure it can run successfully on macOS, Windows, and Linux environments:
The malicious code then continues to prepare the AI coding assistants' CLIs and their flags:
Lastly, it includes code that harvests npm credentials, GitHub credentials, and other sensitive information and performs the inventory setup and publishing of this data to public GitHub repositories:
What happened in the Nx compromise?
How was the attack made possible?
Investigators believe a maintainer’s npm token with publish rights was compromised, and malicious versions were then published directly to npm. Notably, these lacked provenance, a mechanism that allows them to cryptographically verify the origin and integrity of published packages. This incident highlights the critical need to adopt and enforce provenance checks in open source supply chains.
How was the Nx attack executed?
A postinstall script (named telemetry.js) runs during the installation of the Nx package (when developers execute npm install or npm install nx). Upon installation of Nx, the script then performs local collection and AI-agent reconnaissance, stealing the GitHub credentials and tokens of users (relying on the gh auth token command when available), then creating a public GitHub repo under the victim’s account and triple-base64, uploading all the harvested data to results.b64.
What data was targeted and from where?
The payload sought GitHub tokens, npm tokens (~/.npmrc), SSH keys, environment variables, and a broad set of cryptocurrency wallet artifacts, harvested from developer workstations and potentially any other CI or build runners where the package was installed.
Was there a destructive element?
Yes. The malware, possibly in an attempt to conceal and cause further disruption, appended sudo shutdown -h 0 to both ~/.bashrc and ~/.zshrc, causing new shells to shut down immediately.
Affected packages and versions
nx:
21.5.0,20.9.0,20.10.0,21.6.0,20.11.0,21.7.0,21.8.0,20.12.0(all removed now).Nx Plugins (examples):
@nx/devkit,@nx/js,@nx/workspace,@nx/node,@nx/eslint(malicious21.5.0and/or20.9.0variants), and@nx/key,@nx/enterprise-cloud(3.2.0).VS Code Extension: Nx Console
Immediate actions (do these now)
Check if your GitHub account was used to exfiltrate. Search for repos named
s1ngularity-repository-*. If found, take immediate actions as instructed by your ProdSec and InfoSec teams.Rotate all credentials that could have been present on the host: GitHub tokens, npm tokens, SSH keys, and any API keys in
.envfiles.Audit and clean your environment as instructed by your ProdSec team
Identify usage of Nx across projects. Run
npm ls nx(and checkpackage-lock.json) to surface transitive installs; if affected, uninstall then installnx@latest.Snyk users can use Snyk SCA and Snyk SBOM to locate and monitor projects org-wide
If AI CLIs are installed, review your shell history for dangerous flags (
--dangerously-skip-permissions,--yolo,--trust-all-tools).
Future preventative measures against supply chain attacks
Enforce the lockfile in CI with
npm ci.Disable install scripts by default: use
--ignore-scriptsand setignore-scripts=truein a user- or project-scoped.npmrcto neutralize maliciouspostinstall.Turn on npm 2FA, prefer auth-and-writes mode:
npm profile enable-2fa auth-and-writes.Verify provenance before installing whenever possible. It is crucial to note that the malicious Nx versions were published without provenance (!) while recent, valid versions had provenance attached. A useful signal during triage.
Pre-flight your installs with npq (and/or Snyk Advisor) so you can gate installations on trust signals and Snyk intel. Consider aliasing
npmtonpqlocally.Continuously scan and monitor with Snyk (
snyk test/snyk monitor) to catch new disclosures and automate fixes. Snyk can also help locate and pinpoint specific dependency installs across your R&D teams.Use a private or proxied registry (e.g., Verdaccio) to reduce direct exposure and enforce publishing/consumption policies.
Further recommended reading: Snyk’s 10 npm security best practices and npm security: preventing supply chain attacks.
Timeline of the attack
Following the timeline of the Nx attack as provided by the original GitHub security report:
UTC (concise, for incident responders):
22:32 -21.5.0published → 22:39 -20.9.0→ 23:54 -20.10.0+21.6.0→
Aug 27 00:16 -20.11.0→ 00:17 -21.7.0→ 00:30 - community alert →
00:37 -21.8.0+20.12.0→ 02:44 - npm removes affected versions → 03:52 - org access revoked.EDT (as recorded in the advisory):
6:32 PM - initial wave (incl.@nx/*plugin variants) → 8:30 PM - first GitHub issue →
10:44 PM - npm purge of affected versions/tokens.
Indicators of compromise (IoCs)
File system:
/tmp/inventory.txt,/tmp/inventory.txt.bak; shell rc files (~/.bashrc,~/.zshrc) appended withsudo shutdown -h 0.GitHub account artifacts: a public repo named
s1ngularity-repositorywithresults.b64(triple-base64).Network/process: anomalous API calls to
api.github.comduringnpm install;gh auth tokeninvocations bytelemetry.js.
On supply chain security attacks
This isn’t happening in a vacuum. We’ve seen CI and maintainer-account attacks allow release hijacks before:
Ultralytics (Dec 2024): A GitHub Actions template-injection chain led to malicious pip releases and credential theft. The Ultralytics attack demonstrates an example of CI misconfiguration, enabling artifact tampering.
The ESLint/Prettier maintainers compromise (July 2025): Phishing + typosquatting (
npnjs.com) harvested npm credentials and pushed malware to popular packages, another reminder to harden maintainer accounts with 2FA.
Further notes on AI Trust
Treat local AI coding agents like any other privileged automation: restrict file and network access, review often, and don’t blindly run AI coding agents' CLIs in YOLO modes. Avoid flags that skip permissions or “trust all tools” to further increase your security hardening.
This incident shows how easy it is to flip AI coding assistants' CLIs into malicious autonomous agents when guardrails are disabled.
The line between helper and threat is only as secure as the guardrails you put in place. Don't leave your AI-generated code and systems to chance. Snyk's guide on AI code guardrails gives you the tools to secure your entire AI lifecycle, from the dependencies in your AI models to the code they generate.
EBOOK
AI Code Guardrails
Gain the tools needed to implement effective guardrails to ensure your AI-generated code is both efficient and secure.
