Helping Gemini Code Assist to Generate Secure Code
With Gemini Code Assist, Google released an IDE coding companion that developers can leverage for daily programming tasks. But how can we ensure LLMs generate secure code?
I honestly enjoy Gemini Code Assist's minimal interface. It's straightforward, with no tacky user interface buttons: simple and to the point, maximizing screen real estate and the overall chat experience so you can focus on agentic coding tasks, while clearly surfacing its reasoning details throughout the process.
Let’s give Gemini Code Assist a coding task and see how all of this LLM powered coding works and how Snyk helps secure GenAI code.
Prerequisites for this step-by-step tutorial on securing AI-generated code:
VS Code (free!)
Google's Gemini Code Assist (free!)
Make sure you enable Gemini Code Assist's "Insiders" mode by adding the following directive to your VS Code user settings:
"geminicodeassist.updateChannel": "Insiders",
Snyk CLI (free!)
Snyk VS Code IDE extension (free!)
How to save user profile pictures in a secure way in Node.js?
I already have a working Node.js application powered by the Express web framework and wired up to a MongoDB database and the Handlebars templating engine.
The index.js file in routes/ includes a route handler called save_account_details, which already handles saving the user's account details into their profile as well as displaying the information back to the user.
The JavaScript code implementation on the Node.js server looks as follows:
exports.save_account_details = function(req, res, next) {
  // get the profile details from the JSON
  const profile = req.body
  // validate the input
  if (validator.isEmail(profile.email, { allow_display_name: true })
      // allow_display_name allows us to receive input as:
      // Display Name <email-address>
      // which we consider valid too
      && validator.isMobilePhone(profile.phone, 'he-IL')
      && validator.isAscii(profile.firstname)
      && validator.isAscii(profile.lastname)
      && validator.isAscii(profile.country)
  ) {
    // trim any extra spaces on the right of the name
    profile.firstname = validator.rtrim(profile.firstname)
    profile.lastname = validator.rtrim(profile.lastname)
    // render the view
    return res.render('account.hbs', profile)
  } else {
    // if input validation fails, we just render the view as is
    console.log('error in form details')
    return res.render('account.hbs')
  }
}
As you can see, there's a fair amount of validation in place to ensure that user profile data, such as the email address and other details, conforms to a schema, using the validator npm package. The code doesn't yet persist the data to the database, but that's a task we can tackle later.
What if we want to extend this functionality to also allow users to provide a URL for their profile picture, such as an avatar, which we then download and save to disk for a nice UX boost?
GenAI coding task: Update profile with user avatar
Maybe we’re unsure how to work with network requests and the file system API in Node.js to fetch a remote URL and save it to disk, so let’s use Gemini Code Assist for this coding task.
I’ll open up the extension tab on VS Code and prompt Gemini Code Assist as follows:
Let's update the save_account_details function handler so that it takes the URL provided in the req.body.profile_picture_url property, downloads the file to the uploads/ directory and saves it to disk
Here is a video of the entire process for Gemini Code Assist to update the code:

The generated code by the model is as follows. What do you make of it?
exports.save_account_details = function(req, res, next) {
  // get the profile details from the JSON
  const profile = req.body
  // validate the input
  if (validator.isEmail(profile.email, { allow_display_name: true })
      // allow_display_name allows us to receive input as:
      // Display Name <email-address>
      // which we consider valid too
      && validator.isMobilePhone(profile.phone, 'he-IL')
      && validator.isAscii(profile.firstname)
      && validator.isAscii(profile.lastname)
      && validator.isAscii(profile.country)
  ) {
    // trim any extra spaces on the right of the name
    profile.firstname = validator.rtrim(profile.firstname)
    profile.lastname = validator.rtrim(profile.lastname)
    if (profile.profile_picture_url) {
      const localPath = path.join(__dirname, '../public/uploads/');
      if (!fs.existsSync(localPath)) {
        fs.mkdirSync(localPath)
      }
      const localFile = fs.createWriteStream(path.join(localPath, path.basename(profile.profile_picture_url)));
      const protocol = profile.profile_picture_url.startsWith('https') ? https : http;
      protocol.get(profile.profile_picture_url, function(response) {
        response.pipe(localFile);
        localFile.on('finish', function() {
          localFile.close();
          // render the view
          return res.render('account.hbs', profile)
        });
      }).on('error', function(err) {
        fs.unlink(localPath);
        console.log('error in form details')
        return res.render('account.hbs')
      });
    } else {
      // render the view
      return res.render('account.hbs', profile)
    }
  } else {
    // if input validation fails, we just render the view as is
    console.log('error in form details')
    return res.render('account.hbs')
  }
}
The above JavaScript code might be a bit hard to reason about without better code highlighting, so let me show you the same result as it is formatted in the IDE. Can you spot any issues here? How many vulnerabilities did you count?

Snyk AI Trust lives up to its name and secures Gemini Code Assist
All these LLMs and agentic workflows that we've already adopted into our day-to-day tasks are prone to producing security risks in different forms.
One of those security risks is where GenAI coding assistants produce insecure code that attackers can exploit against us. The above walkthrough is a solid representation of the security vulnerabilities that lurk when trusting AI. Snyk is here to mitigate that risk and allow you to watch over those generated lines of code and fix them.
When I saved the file with the changes from Gemini Code Assist, the Snyk Security extension already installed in my VS Code IDE kicked in to scan the changed file and found vulnerable code, which it highlights with the familiar red squiggly line that alerts developers:

Merely one line of code introduces the risk of two different kinds of vulnerabilities:
We're fetching the URL provided by the user and could potentially open a connection to a remote HTTP endpoint (not HTTPS), hence Snyk warns us about insecure protocol usage that could result in cleartext transmission of sensitive information. You can read more about this class of risk in Snyk Learn's lesson on cleartext storage of sensitive information, which explains the concept in the context of cookies.
We're also blindly making an HTTP request to a URL that the user controls, without validating it. What if that so-called image URL is actually an internal resource like http://127.0.0.1:8080/admin_get_resources on an internal company network or a local developer machine? That's an SSRF vulnerability. There's a quick Snyk Learn lesson on this topic as well if you want to dive into the risks and mitigations: Server-side request forgery (SSRF)
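Both issues can be mitigated before the request is ever made, by vetting the user-supplied URL up front. Here's a minimal sketch of that kind of pre-flight check, using Node's built-in WHATWG URL parser; the isSafeAvatarUrl helper and the host allowlist are hypothetical additions for illustration, not part of the original app:

```javascript
// Hypothetical pre-flight check for a user-supplied avatar URL.
// Rejects non-HTTPS schemes (cleartext transmission risk) and any
// host outside an explicit allowlist (SSRF via internal addresses).
const ALLOWED_AVATAR_HOSTS = new Set(['avatars.example.com', 'cdn.example.com'])

function isSafeAvatarUrl(rawUrl) {
  let url
  try {
    url = new URL(rawUrl) // built-in WHATWG URL parser, throws on invalid input
  } catch (err) {
    return false // not a valid absolute URL at all
  }
  // enforce HTTPS only: avoids the cleartext HTTP finding
  if (url.protocol !== 'https:') return false
  // only fetch from hosts we explicitly trust: blocks 127.0.0.1,
  // 169.254.169.254, internal hostnames, and so on
  return ALLOWED_AVATAR_HOSTS.has(url.hostname)
}

console.log(isSafeAvatarUrl('https://avatars.example.com/me.png'))
console.log(isSafeAvatarUrl('http://127.0.0.1:8080/admin_get_resources'))
```

An allowlist is deliberately stricter than a blocklist of private IP ranges: blocklists are easy to bypass with DNS rebinding or redirects, while an allowlist fails closed for anything unexpected.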
Ok, but can we improve this use of AI coding assistants to already generate secure code? Follow along!
Pairing Gemini Code Assist with Snyk via MCP for secure agentic workflows
Gemini Code Assist supports the Model Context Protocol (MCP) in its agentic mode, which means it can send MCP tool requests to integrations that augment the model's generated response, then adapt its output based on the tool's response.
Snyk also supports MCP, exposing tools for LLMs such as scanning your open source dependencies and scanning LLM-generated code for insecure patterns. Here's a deep-dive demo of Snyk MCP.
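Under the hood, MCP tool invocations are plain JSON-RPC messages exchanged between the agent and the tool server. A request looks roughly like this; the tool name and arguments shown are illustrative, not Snyk's actual tool schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "snyk_code_scan",
    "arguments": { "path": "/path/to/project" }
  }
}
```

The server replies with a result payload that the agent folds back into its context before continuing to generate code.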
Let's update the Gemini Code Assist MCP configuration to run Snyk. Create the file .gemini/settings.json and add this:
{
  "mcpServers": {
    "snyk": {
      "command": "snyk",
      "args": [
        "mcp",
        "-t",
        "stdio",
        "--experimental"
      ]
    }
  }
}
If you haven’t yet, you need to install the Snyk CLI:
npm install -g snyk@latest
And then authenticate by running the following command and following up on the instructions to log in:
snyk auth
To confirm everything is working, we can ask Gemini through a simple prompt:

So now we know we can pair Gemini Code Assist with Snyk and test the results of code generation for the exact same task as before: persisting to disk the user's profile picture from a URL they specify.
Here is the prompt, and we are set to go.

Throughout code generation, Gemini Code Assist invokes the Snyk MCP server to test the code every time the LLM generates or updates it, ensuring the code is free of vulnerabilities.

The coding session lasted almost 10 minutes, with some back-and-forth between Gemini Code Assist and Snyk, which goes to show how complex writing secure code can be. Here is the full walkthrough:

Snyk AI Trust for GenAI code
If you’re curious to learn more about securing GenAI code, I highly recommend the following resources:
Vibe Coding? You should be concerned about insecure coding. Read up on my real-world example: Vibe Coding a Node.js File Upload API Results in Security Vulnerability
New to MCP? A Beginner's Guide to Visually Understanding MCP Architecture
An introduction to MCP Security? Read up on my MCP Security – What's Old is New Again
AI Readiness Cheat Sheet
Build trust in AI
Get a practical, structured guide to help your team embrace AI without introducing unmitigated risk.