Vibe Coding a Node.js File Upload API Results in Security Vulnerability
Ahh yes, vibe coding: artificial intelligence’s divine gift to software developers and those without an engineering background. It’s touted as a productivity multiplier that promises cost savings and fast application delivery, but how much can you trust Large Language Models to generate secure code?
Let’s test this theory by prompting a modern version of ChatGPT, running an up-to-date foundation model, to build an API endpoint for us in Node.js, and see how it fares with secure coding conventions.
Vibe code a Node.js file upload API
Imagine this use case: You’re tasked at work with building a file upload API endpoint in your Node.js application that runs on the Fastify web framework.
It’s quite possible that you’ve never worked with file processing on the backend before, and you’re not entirely sure how a web server should handle binary data streams for file uploads.
What’s a multipart HTTP form? Perhaps you’re unsure which APIs to search for, what to learn about, or where to start. Feeling a bit stuck? Relatable.
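For the curious, a multipart/form-data request body is just plain text: sections separated by a boundary string, one section per form field or file. Here’s a minimal sketch of what’s on the wire (the boundary and filename are made up for illustration):

```javascript
// Illustrative sketch: what a multipart/form-data upload body looks like.
// In practice, HTTP clients (browsers, curl, fetch with FormData) build this for you.
const boundary = '----ExampleBoundary';
const fileContents = 'hello world';

const body = [
  `--${boundary}`,
  'Content-Disposition: form-data; name="file"; filename="something.zip"',
  'Content-Type: application/octet-stream',
  '', // blank line separates the part headers from the part body
  fileContents,
  `--${boundary}--`, // closing boundary marks the end of the request body
  '',
].join('\r\n');

console.log(body);
```

On the server side, a plugin like @fastify/multipart parses this structure back into named fields and file streams.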
In the old days, you’d google that and likely land on a StackOverflow page and navigate the answers. But these days? These days, we can use generative AI to answer the question and provide working code based on our specific requirements.
Let’s ask ChatGPT.
In the following video, you can see me prompting ChatGPT to implement a POST endpoint for file uploads to be used as a Fastify route at /api/uploads:

ChatGPT starts a code canvas session, where it begins generating the necessary Node.js code that pulls in the right components:
Importing the Fastify package
Importing the helper Fastify Multipart file processing package
Importing core Node.js modules for filesystem and path handling
It then continues with the application logic to register the @fastify/multipart plugin in Fastify and define an /api/uploads API endpoint. This controller function takes file data from user input, builds a path to save it on disk, and streams the data to it.
Here’s the working code. See if you can spot the security vulnerability:
import Fastify from 'fastify';
import multipart from '@fastify/multipart';
import fs from 'fs';
import path from 'path';
import { pipeline } from 'stream/promises';

const fastify = Fastify({ logger: true });

// Register multipart plugin
fastify.register(multipart);

// Ensure the uploads directory exists
const uploadDir = path.join(process.cwd(), 'uploads');
if (!fs.existsSync(uploadDir)) {
  fs.mkdirSync(uploadDir);
}

fastify.post('/api/uploads', async (req, reply) => {
  const data = await req.file();
  if (!data) {
    return reply.status(400).send({ error: 'No file uploaded' });
  }
  const filePath = path.join(uploadDir, data.filename);
  const writeStream = fs.createWriteStream(filePath);
  await pipeline(data.file, writeStream);
  return reply.send({ message: 'File uploaded successfully', filename: data.filename });
});

const start = async () => {
  try {
    await fastify.listen({ port: 3000 });
    console.log('Server running on http://localhost:3000');
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
};

start();
Security vulnerability introduced by ChatGPT
Hopefully, you caught the sneaky security vulnerability in the above server.js code that ChatGPT generated. Let’s break down what is going on in this vulnerable file upload API route.
The security vulnerability manifests in the way the file path is handled in the following endpoint:
fastify.post('/api/uploads', async (req, reply) => {
  const data = await req.file();
  if (!data) {
    return reply.status(400).send({ error: 'No file uploaded' });
  }
  const filePath = path.join(uploadDir, data.filename);
  const writeStream = fs.createWriteStream(filePath);
  await pipeline(data.file, writeStream);
  return reply.send({ message: 'File uploaded successfully', filename: data.filename });
});
The filePath variable is generated by concatenating the user’s input for the filename (specified by data.filename) onto uploadDir, which is the boundary root directory.
What stops a user from naming the uploaded file ../../app.js instead of something.zip? Nothing. Adversarial actors can manipulate the filename to their needs, whether overwriting sensitive application files or operating system files. If the process permissions allow it, they can take down the system or, worse, harvest data and establish command-and-control on it to further escalate their attack.
Securing AI-generated code with Snyk
Snyk is agnostic to whether your code comes from ChatGPT, a Claude 3.7 Sonnet model, a StackOverflow answers page, or a teammate. If the code in the IDE is found vulnerable, the Snyk extension will detect it and alert you.
Here it is in action. I copy and paste the ChatGPT-generated code of the Fastify file upload API to my VS Code IDE, and the moment I save the file, Snyk runs a static analysis security scan and finds a path traversal vulnerability:

Snyk’s developer experience is a subtle one that developers are already familiar with. It uses inline highlighting, like that used by static code analysis tools (also known as linters), to draw the developer’s attention to the insecure line of code that needs to be fixed.
Snyk DeepCode AI Fix
Snyk DeepCode AI Fix goes further and applies Snyk’s proprietary engine to generate code free of security vulnerabilities.
You don’t need to be a security expert or spend time reading up on documentation for security best practices, although we highly appreciate and recommend it! Snyk recognizes that you’re likely better off spending time on your domain logic and core engineering skills, so we automate the code security fix.
Here is Snyk DeepCode AI Fix magic in action:

The Snyk IDE extension is free to get started and scans code, dependencies, and cloud configuration. Don’t miss out on securing your code. Vibe coding or not, ship secure code.
Secure your code as you develop
Snyk scans your code for quality and security issues and gives you fix advice right in your IDE.