4 tips for securing GenAI-assisted development
Sarah Conway
December 18, 2024

Gartner predicts that generative AI (GenAI) will become a critical workforce partner for 90% of companies by next year. In application development specifically, we see developers turning to code assistants like GitHub Copilot and Google Gemini Code Assist to help them build software at unprecedented speed.
But while GenAI can power new levels of productivity and speed, it also introduces new threats and challenges for application security teams. The training data that large language models (LLMs) use is a mix of good- and poor-quality code, meaning they’re just as likely to introduce vulnerabilities and imperfections as junior developers. However, unlike a junior developer, these technologies generate code within seconds.
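To make the risk concrete, here is a minimal, hypothetical sketch (not drawn from any specific assistant's output) of a classic flaw that generated code can introduce, alongside the safer pattern a security tool would recommend:

```python
import sqlite3

# A hypothetical example of the kind of code an assistant can produce:
# string-formatted SQL, which is vulnerable to SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()  # input like "' OR '1'='1" dumps every row

# The safe version uses a parameterized query, so user input is
# treated as data rather than as executable SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions run and return the same rows for benign input; the difference only shows up when an attacker supplies the input, which is exactly the kind of gap a fast-moving assistant won't flag on its own.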
GenAI also doesn’t account for organizational policies and best practices, so it can’t adhere to compliance regulations or organization-specific security guidelines. It’s also possible for team members to unintentionally expose sensitive data by inputting it into LLMs.
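One lightweight guardrail is to screen prompts for obvious secrets before they ever reach an LLM. The sketch below assumes a simple regex-based check; the `screen_prompt` helper and its patterns are illustrative, not a production secret scanner:

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# secret scanner with far broader coverage.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

findings = screen_prompt("Debug this: aws_key = 'AKIAIOSFODNN7EXAMPLE'")
if findings:
    # Block the request and tell the developer what tripped the check.
    raise ValueError(f"Prompt blocked, possible sensitive data: {findings}")
```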
As we can see, these new technologies introduce a host of security challenges that teams must consider. AppSec teams need strategies that scale to GenAI's speed and breadth and that support development teams without slowing them down.
Security at the speed of GenAI
What are the best ways to scale up your AppSec program and align with the ins and outs of GenAI-driven development cycles?
Snyk collaborated with Deloitte’s team of AppSec experts to create a new guide that helps organizations understand the effects of increasing GenAI usage, with a focus on scaling application security while ensuring the continued growth and safe use of GenAI. Here are the key takeaways from Deloitte and Snyk:
Remove roadblocks with developer-first technology
GenAI-assisted coding tools appeal to developers because they’re so easy to use. Developers type in a prompt, get an almost instant response, and add their newly generated code to the repository. If security hinders this process, developers are more likely to bypass the controls altogether. Common technology roadblocks include security tools that force developers to backtrack because vulnerabilities were found too late in the pipeline, and security-team-centric UIs that require developers to bounce between platforms to fix issues. In contrast, developer-first security tools that fit seamlessly into developer workflows and work in sync with AI coding tools encourage safer GenAI-assisted software development.
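As one illustration of meeting developers where they work, the sketch below wires a scan into a Git pre-commit hook so feedback arrives before code ever leaves the developer's machine. It assumes the Snyk CLI is installed and authenticated and shells out to `snyk code test`; the hook script itself is a hypothetical example, not an official integration:

```python
#!/usr/bin/env python3
"""Hypothetical .git/hooks/pre-commit script: scan before code leaves the laptop."""
import subprocess
import sys

def main() -> int:
    # Run a static scan of the working tree; the CLI exits non-zero
    # when it finds issues, which blocks the commit.
    result = subprocess.run(["snyk", "code", "test"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)
        print("Commit blocked: fix the issues above or triage them first.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Because the check runs in the same loop as the AI assistant (prompt, generate, commit), it adds security without forcing a context switch to a separate platform.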
Use training to explain the “why” behind guardrails
A reported 80% of developers bypass security measures for AI-generated code because they tend to trust GenAI more than human coders. Training these developers on why they need real-time security tools alongside their AI coding assistants is therefore essential. Security teams should also offer training on how to use (and not use) LLMs, such as which types of data developers are permitted to paste into a prompt.
Create processes that work alongside GenAI
Your team has likely already seen how GenAI increases the sheer volume of code entering repositories. As a result, it’s more important than ever to establish straightforward processes for finding and remediating security issues at a speed that can accommodate this increased volume of code. Consider refining and strengthening your processes around communication between development and security teams, code scan reviews, and vulnerability prioritization.
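For example, a simple triage rule can keep remediation queues manageable as scan volume grows. The sketch below assumes a hypothetical findings feed and invented scoring weights; it shows the shape of automated prioritization, not any particular product's logic:

```python
from dataclasses import dataclass

# Hypothetical finding record; real scanners expose richer metadata.
@dataclass
class Finding:
    issue_id: str
    severity: str          # "critical" | "high" | "medium" | "low"
    exploit_known: bool    # a public exploit is available
    in_production: bool    # the affected service is deployed

SEVERITY_WEIGHT = {"critical": 8, "high": 4, "medium": 2, "low": 1}

def priority(f: Finding) -> int:
    """Illustrative score combining severity, exploitability, and exposure."""
    score = SEVERITY_WEIGHT[f.severity]
    if f.exploit_known:
        score *= 2
    if f.in_production:
        score *= 2
    return score

findings = [
    Finding("ISSUE-1", "high", exploit_known=True, in_production=True),
    Finding("ISSUE-2", "critical", exploit_known=False, in_production=False),
]
# Work the queue highest-risk first: here the exploitable, deployed
# high-severity issue outranks the dormant critical one.
for f in sorted(findings, key=priority, reverse=True):
    print(f.issue_id, priority(f))
```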
Update policies to align with GenAI tools
It’s also important to define clear policies around GenAI. The right policies raise the standard for how teams use AI in their daily workflows and require strong security checkpoints. Examples of AI policies include guidance on acceptable use cases, data selection and usage, data privacy and security, and alignment with regulatory compliance requirements. It’s also a good idea to set a cadence for updating these policies as new tools enter your organization and development practices evolve.
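Some teams go a step further and encode parts of such policies as machine-checkable configuration. The fragment below is a purely hypothetical sketch of that idea, with all names and fields invented for illustration, expressed as a Python structure that a CI or onboarding check could evaluate:

```python
# Hypothetical policy-as-code fragment; names and fields are illustrative.
AI_TOOL_POLICY = {
    "approved_assistants": ["GitHub Copilot", "Google Gemini Code Assist"],
    "prohibited_prompt_data": ["customer PII", "credentials", "proprietary source"],
    "require_scan_before_merge": True,
    "policy_review_cadence_days": 90,
}

def tool_is_approved(tool_name: str) -> bool:
    """A CI or onboarding check could gate tooling against the policy."""
    return tool_name in AI_TOOL_POLICY["approved_assistants"]

assert tool_is_approved("GitHub Copilot")
assert not tool_is_approved("UnvettedAssistant9000")
```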
Scaling your AppSec program to meet the challenges of AI
When your team sets a foundation of strong AI policies and guardrails, you set your organization up for success with future iterations of these technologies. By combining Deloitte’s Secure by Design automation and orchestration services with an application security platform that finds and automatically fixes issues like critical vulnerabilities and zero-days early in the development lifecycle, organizations are able to:
- Safely grow and scale application security across the software development lifecycle and automate workflows using a single platform
- Integrate tools and automated processes for improved developer adoption
- Leverage AI-powered security tools and automation so developers and security teams can work together to fix issues early, as they occur, in a fraction of the usual time and reduce risk
- Gain access to vulnerability remediation assistance to reduce backlogs
- Expedite onboarding and adoption with rapid, consistent enablement of an application security platform that secures your first-party code (including code created by AI coding assistants), your open source dependencies, and your containers and infrastructure-as-code
To learn more about scaling your application security program to meet the new challenges of GenAI-driven development, download our whitepaper Application Security at Scale for GenAI.