
5 tips for adopting AI code assistance securely


May 30, 2024


There’s been a lot of excitement around generative AI technology over the past few years, especially in software development. Developers of all levels are turning to AI tools, such as GitHub Copilot, Amazon CodeWhisperer, and OpenAI’s ChatGPT, to support their coding efforts. In fact, GitHub found that 92% of developers use AI coding tools.

However, many businesses are realizing that they need to be more cautious when using AI in software development. Generative AI is still prone to inaccuracies and hallucinations and could open organizations up to new threat vectors, such as data poisoning and prompt injection. 

The key to using AI is not to eliminate it or let teams use it unchecked but to think strategically about guardrails for safely leveraging AI-generated code. 

We’ll cover five tips for using AI code assistants securely throughout the software development lifecycle:

  1. Always have a human in the loop.

  2. Scan AI code from the IDE with a separate, impartial security tool.

  3. Validate third-party code.

  4. Automate testing across teams and projects.

  5. Protect your IP.

Always have a human in the loop

Generative AI is essentially a junior developer with access to millions of code examples from across the internet. AI coding assistants work quickly, but they are far from infallible: they learn from both good and bad code in their training data. So, it’s crucial to include sufficient human checks when adopting generative AI code tools. Teams can keep a human in the loop with the following practices:

  • Performing the same code security testing used pre-AI, such as validating, testing, and fixing vulnerabilities in the IDE with static application security testing (SAST).

  • Conducting regular training about the benefits and risks of AI-generated code.

  • Building out policies and procedures for performing regular reviews on AI-generated code.

Scan AI code from the IDE with a separate, impartial security tool

When performing security testing on your AI code, it’s best practice to use a tool other than the one that generated a significant amount of that code in the first place. A separate security tool ensures impartiality. Plus, different tools specialize in different disciplines: an AI tool built to generate functional code won’t know the full context of your security posture or be equipped to catch complex security nuances.

There are two primary criteria to consider when choosing this separate, impartial tool for securing your AI-generated code:

  1. A full, contextual view of your entire application — not just incomplete views of the individual AI-generated code snippets. This way, it can provide fix suggestions that won’t break the rest of your app.

  2. Scanning capabilities within the IDE to shift security as far left as possible. This way, developers can fix vulnerabilities in both AI-generated and manually written code moments after it’s introduced.

Validate third-party code

Open source code makes up 70-90% of the average application. Not only do developers tap into these third-party resources regularly, but AI-written code also leverages third-party dependencies. While open source code is a great way for developers to build applications with more velocity, it varies in quality and security. AI-recommended open source libraries can also bring risk: AI often isn’t up to date on the latest security intelligence and may recommend third-party resources that are outdated or known to be vulnerable.

Software composition analysis (SCA) helps organizations find and fix vulnerabilities within these third-party resources. It scans the dependencies selected by humans or AI, identifies vulnerable open source packages, reports on those vulnerabilities, and suggests remediation paths. When using AI coding assistants, teams must test and verify AI-recommended open source libraries with a high-quality SCA tool.
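To make this concrete, here is a minimal sketch (not a full SCA engine, just an illustration of the idea) that checks a single AI-suggested package version against the public OSV.dev vulnerability database before it’s added to a project. The package name and version are hypothetical command-line inputs, and a real SCA tool would also cover transitive dependencies and remediation advice.

```python
"""Illustrative check of one dependency against the public OSV database."""
import sys

import requests  # third-party HTTP client: pip install requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return known OSV advisories for a single package version."""
    payload = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    response = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json().get("vulns", [])


if __name__ == "__main__":
    # e.g. `python check_dep.py jinja2 2.4.1` for a package an assistant suggested
    package, version = sys.argv[1], sys.argv[2]
    vulns = known_vulnerabilities(package, version)
    for vuln in vulns:
        print(f"{vuln['id']}: {vuln.get('summary', 'no summary available')}")
    # A non-zero exit fails the check so a human reviews the suggestion first.
    sys.exit(1 if vulns else 0)
```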

Automate testing across teams and projects

As your team considers how to implement these code security best practices for human- and AI-written code, identify places where you can leverage automation. AI coding assistants have made development lifecycles faster-paced than ever, and if your security testing isn’t automated, it won’t keep up with this unprecedented velocity. The key to successful automation is adding security testing to the workflows that development teams already use, such as CI/CD pipelines.
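As one example of what that can look like, the sketch below is a small CI gate script that runs a dependency scan and a code scan and fails the build if either finds issues. It assumes the Snyk CLI is installed and authenticated in the CI environment; substitute whichever scanners your team has standardized on.

```python
"""Minimal CI gate: run security scans and fail the build on findings."""
import subprocess
import sys

# Scan steps to run on every push or pull request (assumes the Snyk CLI is
# installed and authenticated; adjust the commands to your own toolchain).
SCANS = [
    ["snyk", "test", "--severity-threshold=high"],  # open source dependencies (SCA)
    ["snyk", "code", "test"],                       # first-party and AI-generated code (SAST)
]


def main() -> int:
    exit_code = 0
    for command in SCANS:
        print(f"Running: {' '.join(command)}")
        result = subprocess.run(command)
        # A non-zero return code means findings (or a failed scan); either way,
        # keep it so the pipeline blocks the merge for human review.
        exit_code = exit_code or result.returncode
    return exit_code


if __name__ == "__main__":
    sys.exit(main())
```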

Protect your IP

Teams should also consider the code used to prompt AI coding assistants and identify ways to prevent staff from accidentally exposing sensitive data in one of these prompts. AI tools generally use their customers’ prompts as training data, meaning anything you write in a prompt is up for grabs. Because of this reality, it’s crucial not to input sensitive or proprietary code into an AI tool. Here are a few ways to exercise caution when prompting AI tools:

  • Document detailed AI usage policies and train teams on these guidelines, including the potential consequences of not adhering to them.

  • Assume that any data you input into an LLM will be used in its training.

  • Only input the minimum information required for an AI tool to do its job.

  • Consider using input and output checks to sanitize inputs from users and outputs from the AI tools, as sketched below.
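As a minimal illustration of an input check, the sketch below blocks a prompt from being sent to an AI assistant if it appears to contain a credential. The regex patterns are examples only; production setups typically rely on a dedicated secret scanner rather than a hand-rolled list.

```python
"""Sketch of a pre-prompt secret check (illustrative patterns only)."""
import re

# A few well-known secret shapes; real deployments use a proper secret scanner.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded password": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
}


def find_secrets(prompt: str) -> list:
    """Return the names of any secret patterns found in the prompt text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]


def safe_to_send(prompt: str) -> bool:
    """Allow the prompt through only if no known secret patterns match."""
    hits = find_secrets(prompt)
    for hit in hits:
        print(f"Blocked prompt: possible {hit} detected")
    return not hits


if __name__ == "__main__":
    example = 'Refactor this: db.connect(password="hunter2")'  # hypothetical prompt
    print("OK to send" if safe_to_send(example) else "Strip the secret and retry")
```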

Learn more about how Snyk empowers the secure adoption of AI coding assistants.

