A security expert’s view on Gartner’s generative AI insights - Part 2

August 8, 2024

Welcome to the second part of our two-part special on Gartner’s “4 Ways Generative AI Will Impact CISOs and Their Teams” report! If you missed the first part on model composition, you can read it here. Today, we will explore why security specialism matters in an AI security tool, particularly where AI quality is concerned.

Why does the specialism of experts matter?

Specialism to address complexity

Security is a highly complex area, and code quality is a different beast. The Report’s recommendation that businesses should "choose fine-tuned or specialized models that align with the relevant security use case or for more advanced teams" is exactly what we’ve been doing and advocating for. Gartner also goes on to state that "generative cybersecurity AI on real-time content (e.g., data in transit, network traffic) might take more time to arrive, as it is likely to require specialized (and maybe smaller) models trained on such data." We couldn’t agree more: our tailored AI model is trained on exactly this kind of specialized data. In fact, we had already been providing generative cybersecurity AI on real-time code ahead of the Report, announcing DeepCode AI Fix, our one-click, in-IDE, real-time autofixing feature within our Snyk Code SAST tool, on June 7, 2023.

Not only is our AI model customized for security, but it’s also proprietary — allowing Snyk to remain independent of AI coding assistant brands. What does this mean? Well, the fix-generating components of security tools or features created by the same brands that make AI coding assistants rely on the same LLMs that power those assistants. These LLMs were trained for code functionality, not security. Our LLM was created and trained specifically to secure code, so our fixes are more reliable than those coming out of a more “general-purpose” LLM.

Security expertise — a key differentiator

While the Report’s observations about secure code assistants targeting use cases for either development teams or security teams are largely true, Snyk bucks the trend. We believe that specialism is the single most important thing that sets a security tool apart from the rest, and it’s this quality that allows us to satisfy the needs of both developers and security teams. Our experience has shown that our deep understanding of application security is how we’ve created a tool that successfully serves both security and developer teams, addressing the need for intelligent accuracy as well as raw speed.

Driving accuracy in a complex area

AI security is a niche area: spotting security issues is hard, yet accuracy is critical. The methods used when developing an AI model greatly contribute to how robust a security tool is. Our technology and process grew out of our collective security knowledge, and that expertise is ultimately why we deliver more accurate and streamlined results.

Making developers happy with fix prioritization

Having input training data, processes, and outputs constantly fine-tuned by security experts results in an AI-powered tool that is laser-focused on strategic security: finding and fixing the issues that matter most for secure code. Instead of overwhelming teams with a growing list of vulnerabilities, code security tools should let you create custom rules to tailor the results you see in accordance with your own policies, and present centralized reports of security issues across teams so people can focus on fixing the issues most impactful for, and unique to, the business.
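To make the idea of policy-driven prioritization concrete, here is a minimal sketch in Python. The data shapes, rule format, and function names are invented for illustration and do not reflect Snyk’s actual API or custom-rule syntax; the point is simply how a severity floor plus a business-impact rule turns a long vulnerability list into a short, ordered work queue:

```python
# Hypothetical illustration of policy-driven issue prioritization.
# The issue/policy shapes below are invented for this sketch and do
# not represent Snyk's actual data model or rule format.

SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def prioritize(issues, policy):
    """Keep issues at or above the policy's severity floor, then rank
    findings in business-critical projects first, then by severity."""
    floor = SEVERITY_RANK[policy["min_severity"]]
    kept = [i for i in issues if SEVERITY_RANK[i["severity"]] >= floor]
    return sorted(
        kept,
        key=lambda i: (
            i["project"] in policy["critical_projects"],  # business impact first
            SEVERITY_RANK[i["severity"]],                 # then raw severity
        ),
        reverse=True,
    )

issues = [
    {"id": "SQLI-1", "severity": "high", "project": "payments"},
    {"id": "XSS-2", "severity": "medium", "project": "docs-site"},
    {"id": "RCE-3", "severity": "critical", "project": "docs-site"},
    {"id": "LOG-4", "severity": "low", "project": "payments"},
]
policy = {"min_severity": "medium", "critical_projects": {"payments"}}

for issue in prioritize(issues, policy):
    print(issue["id"], issue["severity"], issue["project"])
```

With this policy, the low-severity finding is filtered out entirely, and the high-severity issue in the business-critical `payments` project outranks even the critical issue in a non-critical project — the kind of trade-off a per-business policy encodes.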

Making security teams happy with full application visibility and protection

In the Report, Gartner anticipates that if generative AI becomes increasingly more proficient at uncovering new vulnerabilities in code, “zero-day attacks (i.e., attacks exploiting undisclosed vulnerabilities) will become more common” and that “this will accelerate the development of more supply chain attacks against widely deployed and privileged applications. This will likely be an area where rapid reactive response is necessary. Thus, end-user organizations should ... evaluate tools and services to monitor their software dependencies." 

It’s not a matter of if, but when, generative AI will start speeding up zero-day attacks. Therefore, a security tool that protects organizations across the software development lifecycle, with native integrations and a consistent, comprehensive overview of your security program, will give businesses a competitive advantage.

Helpfully, Snyk proactively captures and neutralizes risks in the IDE before they proliferate in your pipeline, and it continues to protect businesses across the software development lifecycle. For example, Snyk Open Source allows for easy generation of SBOMs and use of the data within them for useful insights. Snyk’s AI-powered, expert-supported speed and end-to-end coverage were recently evidenced by our early discovery of the Leaky Vessels zero-day vulnerability.
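SBOMs are typically exported as CycloneDX or SPDX JSON documents. As a sketch of the “useful insights” idea, here is a minimal Python example that pulls the distinct license IDs out of a CycloneDX-style component list. The embedded document is a heavily simplified stand-in, not a real Snyk export, which would carry many more fields:

```python
import json

# Simplified stand-in for a CycloneDX-style SBOM document; a real
# Snyk-generated SBOM contains far more detail than shown here.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "lodash", "version": "4.17.21", "licenses": [{"license": {"id": "MIT"}}]},
    {"name": "left-pad", "version": "1.3.0", "licenses": [{"license": {"id": "WTFPL"}}]},
    {"name": "express", "version": "4.18.2", "licenses": [{"license": {"id": "MIT"}}]}
  ]
}
"""

def licenses_in_use(sbom: dict) -> set:
    """Collect the distinct license IDs declared across all components."""
    return {
        entry["license"]["id"]
        for component in sbom.get("components", [])
        for entry in component.get("licenses", [])
    }

sbom = json.loads(sbom_json)
print(sorted(licenses_in_use(sbom)))  # prints ['MIT', 'WTFPL']
```

The same traversal pattern extends to other quick insights, such as flagging components whose licenses fall outside an approved allowlist.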

Defending against unknown risks like zero-day vulnerabilities is tricky, but with planning and structure, you can manage the degree of chaos you encounter when the unexpected happens. An overview of your security posture and program — its successes, its gaps, and its issue/fix trends — helps leaders with decision-making, strategy creation, and planning for an uncertain future. It’s the dream of anyone managing risk to be able to see, capture, and neutralize all prioritized risks across the organization. The benefits we have discussed get you part of the way there, but any security expert will tell you that there is the additional problem of managing a multitude of disparate tools (especially with the added complexity of AI assets) with no consistent, consolidated view that gives you both big-picture and granular results of your security program. Snyk’s security focus had us looking for ways to solve this problem, which is how Snyk AppRisk came about.

AppRisk gives you a bird’s-eye view of your entire security program: what scans are running where (and whether they are being run as they should), right down to which teams and team members are accountable for what, so you can plug any gaps and take a holistic, comprehensive, and more strategic long-term approach to your security posture. We can’t cover every eventuality, but we can certainly help you be better prepared for the unknown.

So what does the future hold for AI in cybersecurity?

An independent tool with a hybrid AI model supported extensively by deep human expert knowledge, Snyk drives precision, promotes a long-term, expert-led, and holistic approach to security, and strives to maintain unbiased standards in its code review by remaining unaffiliated with coding assistant providers. Because we constantly look ahead to the future, we’ve stayed several steps ahead of industry shifts, pioneering best-practice application security approaches that are now promoted by Gartner.

So what’s next for Snyk? With AI evolving so quickly, it’s hard to say precisely what anyone will be doing in 12 months’ time, but we are building towards deep personalization and increasingly integrated workflows, and you can be sure that we will continue to secure your applications, whether AI-powered or not.

The vision is that we propel the surge of innovation by helping teams embrace AI easily, without the mental load of worrying about whether security is adequate. Trust your security, and you will be confident in your AI. With the right security solution and the right security partner for your AI, you can finally innovate freely.

Best practices for AI in the SDLC

Download this cheat sheet today to learn best practices for how to leverage AI in your SDLC, securely.