Snyk Research Reveals GenAI Race Creates AI Readiness Perception Gap as Many Enterprises Bypass Adoption Best Practices
June 4, 2024
New Report Reveals C-Suite 2x to 5x Less Likely than AppSec Leaders to Acknowledge AI-related Security Concerns
BOSTON, MA — June 4, 2024 — Snyk, the leader in developer security, today released the findings of its AI readiness report, “Secure Adoption in the GenAI Era.” While many global enterprises have already adopted generative AI (GenAI) code generation tools to speed application development, the report indicates that, in many cases, adoption best practices have been hurried or ignored in the quest to join the GenAI race as soon as possible. The findings also reveal a clear perception gap around the security concerns raised by GenAI code creation, with C-Suite leaders displaying more eagerness and confidence than application security (AppSec) leaders and, in some cases, even developers.
Specifically, the report found:
Only 20% of organizations ran a proof of concept (POC) before introducing AI coding tools, even though 58% cited security as their biggest barrier to adoption;
Less than half (44%) of organizations provided their developers with training on AI coding tools; and,
CTOs and CISOs were 5x more likely than developers to believe AI coding tools pose no risk and 2x more likely than developers to believe they are “extremely ready” to adopt AI coding tools.
“The GenAI Era has arrived, and there is no ‘putting the genie back in the bottle.’ We believe it's now incumbent on the cybersecurity industry to recommend clear guidelines that will allow all of us to benefit from this increased productivity, without the associated security sacrifices,” said Danny Allan, Chief Technology Officer, Snyk. “This latest research also clearly shows that the scaling of AI-coding tools must be a collaborative effort. CTOs should aim to work side by side and trust their DevSecOps team leaders so that together we can safely reap the full benefits of GenAI over the long-term.”
Closer to the Code = Heightened Concerns
The security of AI-generated code was not a major concern for the majority of organizations surveyed. Almost two-thirds (63.3%) of respondents rated it as either “excellent” or “good,” with only 5.9% rating it as “bad.” However, a deeper look at these numbers reveals that those “closer to the code” don’t share the confidence of many of their colleagues.
Nearly four in ten (38.3%) AppSec personnel said AI coding tools were “very risky.” AppSec respondents also took issue with their organizations’ security policies concerning AI coding tools: almost a third (30.1%) of AppSec team members said their organization’s AI security policies were insufficient, compared to 11% of C-suite respondents and 19% of developers/engineers.
Nearly one in five (19%) C-suite respondents said AI coding tools weren’t “risky at all,” while only 4.1% of AppSec respondents agreed with that statement.
Best Practices Key As GenAI Adoption Continues to Soar
The data shows that top technology decision makers — CISOs and CTOs — believe their organizations are ready for AI coding tools. In fact, 32% of C-Suite respondents described the rapid adoption of AI coding tools as critical — twice as many as AppSec respondents.
This means that, regardless of AppSec and developer concerns, further adoption of these tools is on the way (and in many cases already here). As they continue down this path, these organizations should urgently put in place the security measures that will allow them to scale adoption of these tools safely.
Immediate Recommended Actions Include:
Establish a formal POC process for the adoption of all new AI technology;
Value and prioritize AppSec team feedback regarding GenAI security concerns;
Document and audit all instances of AI code generation tools;
Invest in security technologies that provide “AI guardrails” for the adoption of AI-assisted tools over the long term; and,
Enhance and continue to augment company-wide AI training.
To further explore these report highlights, visit here.