Foundations of trust: Securing the future of AI-generated code

By Danny Allan

October 10, 2024


Generative artificial intelligence (GenAI) has already become the defining technology of the 2020s, with users embracing it to do everything from designing travel itineraries to creating music. Today’s software developers are leveraging GenAI en masse to write code, reducing their workload and reclaiming valuable time. However, it’s important that developers account for the security risks GenAI coding tools can introduce.

As a leader in developer security, Snyk believes GenAI-driven development can be both fast and secure. That’s why today, we are proud to share that Codeium, Tabnine, TabbyML, and Qodo are Early Access members of our growing GenAI Program, an initiative to partner with the industry’s leading code generation solutions to jointly secure the technology powering the GenAI Era. We’re aiming to increase trust in AI-assisted software development by integrating Snyk’s security capabilities into these best-in-class AI coding tools.

Developers are going all in on GenAI 

Developers are always looking for ways to work faster and more efficiently, which explains why adoption of GenAI coding tools has been rapid and widespread. According to GitHub, AI coding tools such as Google’s Gemini Code Assist, GitHub Copilot, and Amazon CodeWhisperer are used by 92% of U.S.-based developers both in and outside of work.

It’s no surprise these tools are so popular: developers say GenAI coding assistants help improve code quality, generate better test cases, and speed up the adoption of new programming languages. But an uncomfortable question lingers for many developers who have joined the GenAI revolution: How secure is the code these GenAI tools produce?

Common security issues with AI-generated code

In our 2023 AI Code Security Report, we found that more than half (56.4%) of developers frequently encounter security issues in AI-generated code. That’s because the models behind these tools are often trained on open source code that may contain inaccuracies and/or vulnerabilities. It also doesn’t help that 80% of developers bypass AI code security policies, posing problems for AppSec teams that often struggle to keep up with the speed of AI-assisted development.

There are an estimated 35 million developers worldwide, but only about four million security practitioners, with roughly three million security positions unfilled. This asymmetry means security teams simply don’t have the workforce to keep pace with the sheer volume of code developers are producing. As a result, GenAI code can be rife with security challenges, including:

  • Inaccuracies: Code that doesn’t function as intended or contains errors.

  • Hallucinations: Nonsensical code or code unrelated to the input.

  • Vulnerabilities: Security flaws in code that attackers can exploit (see the sketch after this list).

  • Data poisoning: AI training data that is intentionally manipulated by bad actors.

  • Prompt injections: Manipulated prompts that force the AI to generate unintended or harmful outputs.
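
To make the “Vulnerabilities” category concrete, here is a minimal, hypothetical sketch of the kind of flaw that frequently shows up in generated code: user input interpolated directly into a SQL statement, a classic injection bug, shown alongside the parameterized fix a security tool would suggest. The snippet is illustrative only and is not drawn from any specific assistant’s output.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str) -> list:
    # Vulnerable pattern often seen in generated code: attacker-controlled
    # input becomes part of the SQL text itself. An input such as
    # "x' OR '1'='1" would return every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    # Remediated version: a parameterized query keeps user input as data,
    # so it can never change the structure of the statement.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```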

Our mission at Snyk is to secure ALL code, whether that code is written by a developer or an AI-powered tool. Guided by our “shift left” ethos, we believe the best way to do that is to empower developers throughout the SDLC by securing applications proactively and without friction. 

Snyk bakes security into AI-generated code

The state of GenAI in the software development world currently feels a bit like the Wild West, with minimal industry regulation or compliance standards around securing AI-powered coding workflows. At the same time, developers aren’t going to give up AI coding assistants in the name of security, and they shouldn’t have to!

Through our new GenAI Partner Program, Snyk will work with leading GenAI coding assistants to embed our security testing and remediation controls directly into our partners’ solutions and IDE extensions. This integration will let developers write code without stressing about security, because Snyk will secure AI-generated code in real time, as it’s produced. If any security problems are identified in the AI-generated code, Snyk can promptly remediate them before they become a real threat.
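
As a conceptual illustration of that workflow, here is a minimal sketch of a “generate, scan, remediate” loop as it might run inside an IDE extension. Every name in it (guarded_completion, assistant, scanner) is hypothetical; this is not Snyk’s or any partner’s actual API, just the shape of the real-time guardrail described above.

```python
def guarded_completion(prompt: str, assistant, scanner) -> str:
    """Generate a suggestion, scan it, and remediate before surfacing it.

    `assistant` and `scanner` are hypothetical stand-ins for a GenAI
    coding assistant and a SAST engine embedded in the IDE extension.
    """
    suggestion = assistant.generate(prompt)   # AI-generated snippet
    findings = scanner.scan(suggestion)       # real-time SAST pass
    if findings:
        # Feed the findings back so the assistant can produce a fixed
        # version before the developer ever sees the insecure code.
        suggestion = assistant.regenerate(prompt, issues=findings)
    return suggestion
```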

In our State of Open Source Security 2023 report, we found that 59% of respondents are concerned that AI tools will introduce security vulnerabilities into their code, and 50% are concerned AI will introduce licensing violations. This skepticism diminishes the efficiency gains of coding assistants, since it forces developers to spend additional time manually scrutinizing AI-generated code for risks. That’s all about to change.

Snyk is helping developers become more innovative and productive by strengthening their GenAI development workflows with built-in, real-time static application security testing (SAST) guardrails. Developers can feel confident using AI to try new things, be more creative, and accelerate their workflows without losing sleep over security risks, making for a more satisfying work experience.

Securing the future of GenAI coding assistants 

Securing AI-generated code is the key to ensuring that developers can harness the power of GenAI without compromising on security. As the adoption of AI tools continues to rise, integrating robust security controls will be crucial to modern software development workflows and the GenAI-powered software supply chains they produce.

Watch our DevSecCon panel where Snyk and our Early Access partners discuss best practices for using GenAI to build secure software at scale. 

Posted in: AI, Code Security