
Snyk is your security companion for Amazon CodeWhisperer


November 29, 2023


Your developer teams plan to adopt a generative AI coding tool, but you — a security leader — have compliance and security concerns. Chief among them: what if you can't keep pace with your developers and something significant slips through the net? Luckily, you can stay secure while developing at the speed of AI with Snyk, the security companion for Amazon CodeWhisperer.

Everyone is urging the adoption of generative AI tools to stay competitive. Developers are no exception, eager to use tools like Amazon CodeWhisperer so that they can produce code faster. This progress is all well and good, but what about regulation, compliance, and security? You are held to a higher standard of care as a security professional, and the responsibility for any security failings, whether resulting from a plan signed off by other departments or not, will fall squarely on your shoulders. To reliably manage risk without holding back progress, you’ll need a super-fast, developer-friendly security tool that is as focused on security as you are: Snyk.

What is Amazon CodeWhisperer?

Amazon CodeWhisperer is a generative AI (GenAI) coding tool, described by Amazon as a "general purpose, machine learning-powered code generator." Note the choice of the word "generator": it underscores the fact that the tool simply automates the creation of code.

Like other GenAI tools, CodeWhisperer "learns" by mass-scanning code repositories (in this case, in AWS CodeCommit) for code patterns that repeat to a high degree in certain settings, and then matching these patterns against the code in the file the developer is currently working on. CodeWhisperer then generates best-guess, high-probability code suggestions in real time within the IDE, based on the code patterns it has matched. These suggestions appear either through the CodeWhisperer console or as comments in the developer's code, a little like Grammarly suggestions, and the developer can choose the suggestion that suits them best. It's easy to see how CodeWhisperer is a powerful tool that can 10x developers' productivity, minimize tedious tasks, and change the way developers work, forever.
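
To make this concrete, here is a small, hypothetical sketch (in Python; the prompt and function name are our own, not actual CodeWhisperer output) of the kind of completion such a tool might offer from a comment prompt:

```python
# Hypothetical illustration of an AI completion, not real CodeWhisperer output.
# The developer types the comment prompt; the assistant proposes the function
# body from high-probability patterns it has seen in similar code.

from datetime import datetime

# Prompt typed by the developer:
# "parse an ISO-8601 date string and return a datetime object"
def parse_iso_date(value: str) -> datetime:
    # Suggested completion: a plausible, commonly seen implementation
    return datetime.fromisoformat(value)
```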

But is the code that it generates secure by default?

Yes, AI generates vulnerabilities

LLMs (large language models, a subset of GenAI trained on text and producing text output) like CodeWhisperer make statistical predictions based on what they see happening most often. Imagine mapping a host of data into a bell curve, then basing a prediction on whether something falls into the center of that curve. That's really what a generative AI does. Therefore, the accuracy of CodeWhisperer depends on commonly seen, repetitive, and predictable code patterns, and it will be patchy at best with more unusual settings or uncommon code sequences.

There are also two problems common to all generative AIs: hallucinations, and code output that is only as good as the code on which the model was trained. With the latter, it's important to remember that if a model is trained on insecure code, it will make insecure suggestions. So CodeWhisperer can help developers build quickly, but it can't guarantee security and accuracy. After all, generative AI has no reasoning capabilities, so there is no true contextual understanding of different or more nuanced scenarios, no ability to recognize the incorrect or insecure code that it has generated, and no ability to flag problematic code for the developer's attention.
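
To illustrate the point, here is a hypothetical Python sketch (our own example, not actual CodeWhisperer output) of the kind of insecure suggestion a model trained on insecure code might make, alongside the safe equivalent:

```python
# Hypothetical example of an insecure AI suggestion and its safe counterpart.

import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Insecure pattern an assistant might reproduce from its training data:
    # user input is interpolated directly into the query, enabling SQL injection.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The secure equivalent uses a parameterized query.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```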

No, generative AI code tools cannot fully secure their own code

Spotting these security flaws is a complicated, multi-layered task, because vulnerabilities are rarely isolated: most span multiple code blocks, functions, and files. Finding these weaknesses requires an extensive understanding both of the security issues themselves and of how the entire application works.

The complex nature of code vulnerabilities exposes yet another problem for CodeWhisperer and similar AI code assistants. When a developer uses an AI coding tool like CodeWhisperer, the developer prompts it with a task and the tool attempts to produce code that matches the prompt. However, because generative AI works in small blocks and snippets of code, the tool does not try to understand the entire application around that snippet. CodeWhisperer can try to replicate code that it finds elsewhere in the application, but that doesn't mean it is learning how the application works, only the code patterns of that developer. And because every new snippet, line, or block of code affects data flow through the application, finding security issues requires considering the context of the full application. There's no other way to accurately identify these flaws.
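
Here is an illustrative Python sketch (the function names are ours) of why snippet-level review falls short: each function looks harmless in isolation, but tracing the data flow across all three reveals a command injection risk.

```python
# Illustrative only: user input flows from the handler into a shell command
# two calls away, so no single snippet reveals the vulnerability.

import subprocess

def get_report_name(request_args: dict) -> str:
    # Looks like simple input handling on its own.
    return request_args.get("report", "daily")

def build_command(report_name: str) -> str:
    # Looks like simple string assembly on its own.
    return f"generate_report.sh {report_name}"

def run_report(request_args: dict) -> None:
    # Only by following the data across all three functions does the
    # command injection risk (shell=True with untrusted input) become visible.
    command = build_command(get_report_name(request_args))
    subprocess.run(command, shell=True, check=True)
```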

To sum it up, letting developers work with CodeWhisperer without security guardrails is a problem for three reasons:

  1. Because of the way generative AI technology works, CodeWhisperer is unable to check its work to determine whether what it produced was right or wrong. It relies on humans to make corrections. 

  2. An experienced developer using CodeWhisperer will be able to spot most functional issues in the code it generates, but few developers have the skill, or the time, to find every security issue.

  3. Worse, inexperienced developers may be too trusting of CodeWhisperer’s output, especially as AI tools become commonplace, and they may miss both the functional and security issues within the AI-generated code.

CodeWhisperer is a powerful tool that will definitely drive developer productivity, but doing this properly and safely depends on having security guardrails in place and on the quality of the security tool behind them. This means that CodeWhisperer (or any other generative AI coding assistant) should always be used with a companion security tool like Snyk.

Security at the speed of AI

Security assistants need to be just as fast as the AI coding assistants that developers are using, or they won't be adopted. But that's just the start. Here are some important features to look for in the security tool you use, all of which are features of Snyk.

Fast, with broad coverage

A truly effective security tool covers a broad range of languages (Snyk covers over 19 languages) and captures vulnerabilities before they make it out of the IDE. Snyk is also up to 50x faster than other solutions thanks to our proprietary hybrid AI and its security-focused methodologies. It checks your source code in real-time directly from within your IDE, where your developers — and CodeWhisperer — live.
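
As a rough sketch, assuming the Snyk CLI is installed and authenticated, the same analysis can also be run outside the IDE, for example as a scripted step in CI (the wrapper below and its use of the severity threshold flag are illustrative):

```python
# Minimal sketch: invoking the Snyk CLI's code analysis from a script.
# Assumes `snyk` is installed and authenticated; flags shown are illustrative.

import subprocess

def run_snyk_code_scan(project_dir: str) -> bool:
    # `snyk code test` runs static analysis on the project source; a non-zero
    # exit code indicates issues were found (or the scan failed).
    result = subprocess.run(
        ["snyk", "code", "test", "--severity-threshold=high"],
        cwd=project_dir,
    )
    return result.returncode == 0

if __name__ == "__main__":
    print("clean" if run_snyk_code_scan(".") else "issues found or scan error")
```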

Automatic

We at Snyk have poured our security experience into a tool purpose-built for securing code, with a developer user base in mind. We understand that developers code in a variety of languages and that their expertise and priority lie in building, not security. So, security checks should be super-fast and automated, and fixes should be seamless to implement, making for better long-term adoption of the security tool.

Workflow-optimized and thorough

Snyk automatically checks developers’ code in the IDE as they work, red-lining vulnerabilities, giving explanations for the findings (no obscure references!), and automatically suggesting in-line fixes in real time, so that developers can remediate their code literally as they create it. The entire application — not just the current and dependent files — is evaluated for full context, before and after a potential fix is generated, to find complex code security issues that can spread across functions and files. And as any security person knows, understanding the data flows, code interactions, and entire complexity of the application is important for upholding security standards. Pairing Snyk with CodeWhisperer overcomes the security shortcomings of AI code generation.
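
For a sense of what such an in-line remediation can look like, here is an illustrative before/after in Python (our own example, not a Snyk-generated fix): a path traversal issue and a fix that constrains the resolved path.

```python
# Illustrative before/after remediation of a path traversal issue.
# Requires Python 3.9+ for Path.is_relative_to.

from pathlib import Path

UPLOAD_DIR = Path("/var/app/uploads")

def read_upload_insecure(filename: str) -> bytes:
    # Flagged pattern: "../" sequences in filename can escape the upload folder.
    return (UPLOAD_DIR / filename).read_bytes()

def read_upload_fixed(filename: str) -> bytes:
    # Remediation: resolve the path and verify it stays inside the intended
    # directory before reading.
    target = (UPLOAD_DIR / filename).resolve()
    if not target.is_relative_to(UPLOAD_DIR.resolve()):
        raise ValueError("invalid file path")
    return target.read_bytes()
```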

Accurate and customizable with centralized reporting

Snyk also understands how time-poor both developer and security teams are, so we know it wouldn't matter how fast Snyk is if we generated too many false positives. Some code security tools don't give you the option to create custom rules at all, while others require you to manually create rules to handle the slew of false positives that would otherwise occur. This is not the case with Snyk.

With the help of DeepCode AI, our security experts maintain out-of-the-box rules and an extensive knowledge base full of vulnerable code patterns and suggested fixes, increasing accuracy and driving down false positives. This means that you’ll only see a few relevant, important results from your security checks, instead of a long list of meaningless weaknesses. With Snyk, you can also write custom rules to further fine-tune your results and to meet your own unique organizational needs. 

On top of that, Snyk's native reporting function considers the needs of security leaders. It provides a centralized view across issues and teams and allows for detailed filtering, so that, when used together with Snyk Insights, informed tactical and strategic decisions can be made and relevant issues can be prioritized.

Tool and technology independent

An important factor for best practice and good governance is independence. We recommend that you always double-check that the folks behind your AI coding tools are totally separate (no direct or indirect connections) from the squad keeping your code safe. Having related parties handle both is a bit like having the fox guard the hen house. At Snyk, we're the security pros – zero ties with AI coding tools, just laser-focused on being your risk management sidekick.

Loved by developers

We at Snyk are proud of the fact that we haven't just won the security folks over, we've won the hearts of developers too. This is evidenced by the intense rate of adoption of Snyk by household names like Google, Reddit, and AB InBev. Developers find integration and adoption of Snyk frictionless, because we fit right into their existing workflows and tools, integrating across IDEs, SCM/Git, and CI/CD. Nicolai Brogaard, Service Owner of Software Composition Analysis (SCA) and Static Application Security Testing (SAST) at Visma, says:

"The problem with a lot of these security testing tools is that they require so much background knowledge, so you can’t really just plug-and-play them in your environment. So one of the differentiating factors with Snyk is enabling developers to quickly get started and figure things out themselves.” 

Trusted by the industry and customers 

Snyk was named a “Leader” in the 2023 Gartner Magic Quadrant for Application Security Testing, a “Leader” in the 2023 Forrester Wave for Software Composition Analysis, and Snyk was the 2022 Gartner Peer Insights Customers’ Choice for Application Security Testing. These accolades were won through Snyk’s continuous work in finding ways to shift security as far “left” as possible, whilst driving for a rigorous security approach, without requiring any developer behavioral change or workflow disruption.

More AI resources from Snyk