
A security expert’s view on Gartner’s generative AI insights


August 7, 2024


Snyk’s goal has always been to empower developers to build fast but safely. This is why we created the developer security category and why we were amongst the first advocates of “shifting left.” Now, AI has changed the equation. According to Gartner, over 80% of enterprises will have used generative AI APIs or models, or deployed their own AI model, by 2026. There is a strong push for progress, whether or not security is ready — Snyk’s 2023 AI-Generated Code Security Report shows that 80% of developers are bypassing security policies to leverage the power of AI coding assistants, and Gartner has similarly found that:

“Eighty-nine percent of business technologists would bypass cybersecurity guidance to meet a business objective. Organizations should expect shadow generative AI, especially from business teams whose main task is to generate content or code.”

Through this seismic shift, our vision has not changed. We want to empower developers to safely increase productivity with AI.

AI technology has vast, transformative potential. By lightening the mental load and putting control of code security back into the hands of the security and engineering teams, Snyk Code helps increase trust in AI, leading to greater innovation. Anyone can put a security tool in place to tick a box, but we focus on outcomes. 

Are you truly secure if your security team is overwhelmed and your developers refuse to use a security tool that slows them down and presents them with even more false positives?

Are you really getting ahead by adopting AI-powered coding assistants with no guardrails or unsuitable guardrails in place? 

And what would it look like to partner with a security specialist who would not just get you started but also work hand-in-hand with you to guide you through the uncertain waters of the current and future security landscape?

Our focus on outcomes and on partnering with customers has brought us success: we were named a Leader in both The Forrester Wave: Software Composition Analysis (SCA), Q2 2023, and the 2023 Gartner® Magic Quadrant™ for Application Security Testing. However, trail-blazing can be lonely, so we were delighted to finally see Gartner’s “4 Ways Generative AI Will Impact CISOs and Their Teams” (29 June 2023) (the “Report”) echo what we’ve been advocating for. In this special two-part series, we’ll share how you can act on some of these Report insights so that you can benefit from our experience.

The most prevalent CISO concerns

In the Report, Gartner notes that copyright violations; biased, incomplete, or simply erroneous responses; policy violations; and a lack of transparency are some of the most common areas of concern for CISOs. If you haven’t suitably secured your AI-generated code yet, a good starting point would be to:

  1. Create a quick framework for AI governance that specifies the governance entity and responsible persons/security champions within your organization, sets out the governance workflow, and stipulates acceptable-use policies (communicate these clearly and unambiguously when announcing the new governance measures; a minimal sketch of what such a policy might look like follows this list).

  2. Conduct regular employee training and make educational resources available, both for your security champions and AI users, around the risks associated with using generative AI and how to use it responsibly. 

  3. Start exploring modern security solutions to accompany your chosen AI coding assistants. Ensuring that your security solution is not affiliated with your AI coding assistant is important for preserving impartiality and unimpeachable standards in code audits. The idea is to have multiple layers of safeguards so that there is no single point of failure.
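As an illustration of the first step above, an acceptable-use policy can be captured in a form that is both human-readable and machine-checkable. The Python sketch below is purely hypothetical: the field names, values, and check function are assumptions made for illustration, not part of any Snyk or Gartner guidance.

```python
# Hypothetical acceptable-use policy for AI coding assistants.
# Field names and values are illustrative assumptions, not a standard schema.
AI_ACCEPTABLE_USE_POLICY = {
    "governance_owner": "Application Security Guild",   # accountable governance entity
    "security_champions": ["champion@example.com"],     # responsible persons per team
    "approved_assistants": ["assistant-a", "assistant-b"],
    "prohibited_inputs": ["customer PII", "secrets", "proprietary source code"],
    "required_controls": {
        "sast_scan_before_merge": True,  # every AI-assisted change is scanned
        "human_code_review": True,       # a person signs off on AI output
    },
    "review_cycle_days": 90,             # how often the policy is revisited
}

def is_change_compliant(was_scanned: bool, was_reviewed: bool) -> bool:
    """Check an AI-assisted change against the required controls."""
    controls = AI_ACCEPTABLE_USE_POLICY["required_controls"]
    scan_ok = was_scanned or not controls["sast_scan_before_merge"]
    review_ok = was_reviewed or not controls["human_code_review"]
    return scan_ok and review_ok
```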

Because so many solutions are popping up every day, you may find it helpful to map your long-term security strategy to the values and offerings of the leading solutions in application security, using reputable analyst reports like those from Gartner and Forrester to guide a best-practice approach to AI security.

Why AI for code security at all?

Where AI is concerned, the two keywords are “speed” and “trust”. Your security must move fast enough to keep up with AI-generated vulnerabilities and maximize productivity gains brought about by AI coding tools. And you must be able to trust your security tool to scrutinize, capture, and auto-remediate AI-generated issues reliably and efficiently. 

Let’s start with speed. The increased volume and potency of vulnerabilities in current AI-generated code mean that your AI coding assistant must be partnered with a SAST security companion. But if your SAST tool isn’t AI-fast, it will soon build up a backlog of potentially vulnerable code to review, slowing down developers and eroding the productivity gains that your engineering team has made with AI coding tools. Protecting code from within the IDE is complex from the viewpoint of building a SAST tool, but it’s worth doing: being in the IDE lets your security tool stop problems from proliferating across your pipeline, so you don’t have to chase down scattered vulnerabilities and fire-fight constantly. A security tool that lives in the IDE, runs at AI speed, and supports semi-automated remediation can keep up with your AI-assisted developer teams and proactively find and fix security issues in your code in real time.
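To make this concrete, the snippet below shows the kind of issue an IDE-integrated SAST scan is designed to flag before the code ever leaves the editor. The example is hypothetical (it is not taken from any particular assistant’s output or from Snyk Code’s rule set): untrusted input flowing straight into a SQL query, next to the parameterized version a scanner would steer you toward.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern often seen in generated code: untrusted input is interpolated
    # directly into the SQL string, opening the door to SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The remediation a SAST tool would suggest: a parameterized query,
    # so the database driver handles escaping for you.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```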

AI in cybersecurity and the ingredients for trust: Hybrid AI and expertise

As mentioned above, AI transparency is a common issue in many organizations. While there will always be a degree of uncertainty, what matters is whether the risk is sufficiently low for your organization. With this in mind, we’ll talk briefly about how Snyk manages risk in AI-driven security: with a hybrid AI model and as much human expertise and oversight as possible.

Why a hybrid model? 

Businesses have reservations about trusting AI-generated outputs because of varying levels of opacity in the workings of AI models, primarily generative AI models. We understand this. This is why it’s so important to have a security tool check over all your AI-generated code: manual code reviews exist, but mistakes happen, and people are overly trusting of AI-generated code. Our aim is to give teams the confidence to trust their security and, by extension, their AI-generated code. This is also why we have worked to deliver more reliable SAST results in Snyk Code through security-focused expertise and a proprietary hybrid model.

Gartner’s Report recommends that businesses planning to use AI “adapt to hybrid development models, such as … custom GenAI applications with in-house model design and fine-tuning”. Snyk has been doing this for years because we understand security and the need for robust standards. Our AI, DeepCode AI, is the technology that powers the Snyk platform, including our leading SAST tool, Snyk Code. DeepCode AI combines symbolic AI, generative AI (a subset of machine learning), and several other machine learning methods with extensive security-focused fine-tuning. In both parts 1 and 2 of this blog post, we will be focusing on how the technology behind Snyk Code works to benefit users in different ways.

Symbolic AI

Let’s start with the symbolic AI part of our AI model before moving on to machine learning. We use symbolic AI to parse code into a graph representation and then apply rules that test for data flows across the entire application, seeking out sources, sanitizers, and sinks rather than string matches. In this way, we have full code visibility and generate more accurate vulnerability results than other solutions. These rules are created manually by our security experts, working in tandem with machine learning algorithms. This human collaboration increases consistency and accuracy, helping users trust their security tool’s outputs.
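To illustrate the source/sanitizer/sink vocabulary (this is a simplified, hypothetical sketch of taint-style flow analysis in general, not the Snyk Code engine), consider two versions of the same request handler:

```python
import html

def render_comment_unsafe(params: dict) -> str:
    comment = params.get("comment", "")   # source: untrusted request parameter
    return f"<p>{comment}</p>"            # sink: HTML output; tainted flow (reflected XSS)

def render_comment_safe(params: dict) -> str:
    comment = params.get("comment", "")   # source: untrusted request parameter
    sanitized = html.escape(comment)      # sanitizer: neutralizes the tainted data
    return f"<p>{sanitized}</p>"          # sink now only receives sanitized data
```

A flow-based analysis flags the first function because data from a source reaches a sink with no sanitizer on the path, regardless of how the strings happen to be spelled; a string-matching approach could miss it entirely.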

Machine learning

We use machine learning to automatically scan permissively licensed open source databases, feeding our rules database with knowledge of relevant languages, plugins, frameworks, and ecosystems. Our security analysts then work with these machine learning outputs to create more rules that cover new findings or improvements. Vulnerable code detected by our symbolic AI is reduced to only the essential elements by our patent-pending CodeReduce technology for improved processing focus and speed, then fed into our generative AI vulnerability autofixing model. That model powers DeepCode AI Fix, Snyk Code’s powerful, over 80%-accurate vulnerability autofixing feature. It produces suggested fixes for vulnerabilities surfaced by Snyk Code’s SAST scanning, and these suggestions are automatically pre-scanned by Snyk Code’s symbolic AI to ensure that proposed fixes do not create new security issues before up to 5 issue-free fix suggestions are returned to the user. The user can then apply their chosen fix with a single click. This is another reason why Snyk Code produces up-to-date, fast, and precise SAST results and fixes that empower developer productivity. All of this, including the initial SAST scan that surfaced the vulnerabilities, happens in seconds, under the hood of our AI machine.
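Put into rough pseudocode, the flow described above might look like the sketch below. Every function here is a hypothetical placeholder standing in for a pipeline stage; none of these names are part of the Snyk API, and the placeholder bodies exist only to keep the example runnable.

```python
from typing import List

def symbolic_sast_scan(code: str) -> List[str]:
    """Placeholder: return identifiers of issues found in the code."""
    return ["sql-injection"] if 'f"SELECT' in code else []

def reduce_to_essentials(code: str, issue: str) -> str:
    """Placeholder for CodeReduce-style trimming around the flagged flow."""
    return code

def generate_candidate_fixes(snippet: str) -> List[str]:
    """Placeholder for the generative fix model."""
    return [snippet.replace('f"SELECT', '"SELECT')]

def suggest_fixes(code: str, max_suggestions: int = 5) -> List[str]:
    """Scan, reduce, generate fixes, then re-scan so no new issues slip through."""
    accepted: List[str] = []
    for issue in symbolic_sast_scan(code):
        snippet = reduce_to_essentials(code, issue)
        for candidate in generate_candidate_fixes(snippet):
            # Keep only fixes that themselves pass the symbolic scan,
            # returning at most `max_suggestions` of them.
            if not symbolic_sast_scan(candidate) and len(accepted) < max_suggestions:
                accepted.append(candidate)
    return accepted
```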

The benefits of a hybrid model with human fine-tuning

This hybrid model with human fine-tuning reduces generative AI errors and hallucinations and significantly increases accuracy: Snyk’s SAST scans achieved 72% accuracy on the OWASP benchmark, compared to the 53% accuracy of a renowned developer brand’s scans. Our approach allows us to drive down false positives and uncover relevant security issues while reviewing applications completely. Ease of use and developer adoption matter too: if a security tool leveraged only one of the AI methodologies above, it wouldn’t be able to provide a fast yet complete scan in the IDE, detect issues, and suggest fixes, all in one flow. The developer would then have to repeat workflow steps, eroding the productivity gains from AI coding tools and potentially applying insecure fixes.

Finally, our multimodal approach ensures that we don't regress to the mean, despite crowdsourcing our dataset. As explained above, our human experts don’t just curate the sources of our training data; they also constantly check and tweak the processes and outputs to maintain state-of-the-art standards in our scanning results and fixes. Keeping humans present throughout the AI loop introduces expert judgment and boosts the precision of AI-powered tools.

This concludes part 1 of our two-part special on Gartner’s “4 Ways Generative AI Will Impact CISOs and Their Teams” report. Join us tomorrow for part 2, where we’ll look at why security specialism and having security tools built only for security are so important, particularly where AI is concerned. In the meantime, if you have more questions about why you need SAST to secure AI-generated code, check out our blog here.

Posted in: AI
