Snyk & Atlassian: How to embed security in AI-assisted software development

February 14, 2024

Adding AI to your software development life cycle (SDLC) comes with great opportunities — and great dangers. Is the risk worth the reward? 

This was the topic of conversation when Sascha Wiswedel, Senior Solutions Engineer at Atlassian, and Simon Maple, Principal Developer Advocate at Snyk, teamed up to discuss security in the (AI-assisted) software development lifecycle.

To maximize benefits and mitigate risk, you need to ensure that trust and security are woven into every process, project, and product your software teams handle. This has always been the case, but it’s especially true now that AI is fundamentally changing how code is written and maintained.  

The risks and rewards of AI for software development 

The first step to effectively managing AI in software development is understanding where it's being used. Some of the most common use cases for AI in development include: 

  • Generating code 

  • Summarizing code 

  • Adding comments 

  • Writing a README 

  • Refactoring code 

  • Providing templates 

  • Pair programming

Code generation, using generative AI to help developers write code inside the integrated development environment (IDE), is the area that stands to offer the greatest productivity benefits — and the highest potential for introducing risk. 

“Most commonly, we are seeing developers use AI tools like GitHub Copilot and Amazon CodeWhisperer, which have become incredibly popular,” said Simon. “But they often don’t fully understand the risks that come along with using these tools.” 

AI is undoubtedly reshaping software development in the tech community. Snyk's 2023 AI Code Security report found that 96% of software developers are already using AI coding tools, and research from McKinsey determined that developers can complete coding tasks up to twice as fast with generative AI. 

At the same time, these tools can instill a false sense of confidence regarding security. That same Snyk AI report, along with a Stanford study, found that developers with AI access wrote significantly less secure code yet were likelier to believe they wrote secure code than those without access. 

Issues with generative AI for software development 

Widespread use of generative AI among developers has increased development speed, and with it, the volume of code produced and the number of security issues introduced. The two primary causes of issues with AI-assisted development are: 

  • Bad training data 

  • Hallucinations 

The large language models (LLMs) that power AI coding tools require huge amounts of training data to operate. This includes a significant amount of open source code, which naturally contains vulnerabilities. Because AI doesn’t actually understand code and has no concept of rules or syntax, it often reproduces vulnerabilities it learned from open source repositories. Malicious threat actors can even intentionally poison training data. 

Hallucinations, meanwhile, occur when AI models generate code that isn’t grounded in their training data. A model may get “creative” and give a nonsensical or incorrect response because it is built to predict the next token (a word or line of code), even when it doesn’t have the proper inputs. 

“When using generative AI to create a story or poem, I might want some degree of imagination, but in a very strict rule-based system like software development, we need to stick to the facts,” said Simon. 
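To make this concrete, here is a hypothetical illustration (not taken from the webinar) of the kind of flaw that appears frequently in open source training data and, as a result, in generated code: SQL built by string interpolation. The function names and schema below are invented for this sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Insecure pattern commonly suggested by code assistants: user input is
# interpolated directly into the SQL string, enabling SQL injection.
def find_user_insecure(name):
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

# A crafted input escapes the intended WHERE clause and returns every row.
print(find_user_insecure("' OR '1'='1"))

# Safer equivalent: a parameterized query keeps data out of the SQL syntax,
# so the same crafted input matches nothing.
def find_user_safe(name):
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_safe("' OR '1'='1"))
```

Both functions look equally plausible at a glance, which is exactly why automated scanning of generated code matters more than visual review.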

Mitigating the security risks of AI-generated code 

What’s the best course of action if developers are full steam ahead with AI code generation, yet these tools can sometimes be wrong and won’t know when they are? 

First, it’s important to create and maintain AI policies and company guidelines to document the acceptable use of this technology. Consider the security and data privacy requirements you must meet to remain compliant with applicable regulations and frameworks. Policies should also be designed to protect your intellectual property (IP) when integrating with third-party tools. 

According to Simon, “Protecting sensitive customer data is non-negotiable, but it’s equally important to make sure that your IP stays within your boundary so there is separation between this information and any public LLMs.”  

However, even the most comprehensive, air-tight policies can’t guarantee everyone will follow them, because rules can’t (and shouldn’t!) stop the march of progress. Developers who work for companies with strict policies (e.g., ban AI or only allow it in specific scenarios) will often circumvent these policies so they can still enjoy the benefits of AI. In fact, Snyk’s 2023 AI Code Security report found that 80% of developers bypass their company’s security policies to use AI. 

In light of this information, you should assume that most of your developers already use AI to some degree in their work. To maintain oversight of what’s being used in software development and how, it’s better to enable the safe use of AI coding assistants and drive developer productivity. When AI-assisted code enters your codebase, you need a system that verifies it against known security standards. An integrated solution like Snyk, recognized by analysts including Gartner and Forrester, is a reliable way to test and validate your code and mitigate AI risks. Here’s how it works: 

  1. Developers use AI tools to generate code in the IDE. 

  2. Snyk scans the code in real time and flags vulnerabilities. 

  3. Snyk recommends fixes that can be applied with a click. 
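The same checks can extend beyond the IDE into an Atlassian toolchain. As one sketch of that, the Bitbucket Pipelines configuration below runs Snyk’s `snyk/snyk-scan` pipe on every push; the pipe version, `LANGUAGE` value, and severity threshold shown here are assumptions you would adapt to your own project.

```yaml
# bitbucket-pipelines.yml (illustrative sketch, not a drop-in config)
pipelines:
  default:
    - step:
        name: Snyk security scan
        script:
          # SNYK_TOKEN should be stored as a secured repository variable.
          - pipe: snyk/snyk-scan:1.0.1
            variables:
              SNYK_TOKEN: $SNYK_TOKEN
              LANGUAGE: "npm"             # assumed: a Node.js project
              SEVERITY_THRESHOLD: "high"  # fail the build only on high severity
```

Running the scan in CI as well as in the IDE means AI-generated code is checked even when a developer skips or overrides the in-editor feedback.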

“It really does take a village to maintain quality and security in software development,” said Sascha. “But when you get your team to understand what they’re using and how to implement the right checks and balances to make sure security is never treated as an afterthought, things feel a lot easier and less hectic.” 

Snyk: Trust and security for AI code generation 

Practicing AI-assisted software development without an automated code scanning tool is like working on a construction site without a helmet. You must have the right safety measures to protect your employees, customers, and business. 

Snyk offers this essential layer of protection from your IDE, throughout the entire CI/CD pipeline. Read 10 best practices for securely developing with AI for more tips on balancing people, processes, and technology in the future of software development.


Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.
