Snyk Organizational AI Readiness Report

A survey of technology team members found that most believed their organizations were ready for AI coding tools but worried those tools introduced a security risk. Organizations were failing to adopt basic preparedness steps, such as running a proof-of-concept or providing developer training. Respondents directly exposed to AI coding tools and AI-generated code in their daily workflows were more concerned about AI code quality and risks.

Introduction

According to Snyk’s 2023 AI Code Security Report, 96% of coders use generative AI tools in their workflows. Organizations that build software understand they must adopt these tools to keep up with competition and attract and retain talent. Bringing AI coding tools into the software development lifecycle introduces various security and operational challenges. How ready are technology leaders and their teams for the new era of AI coding tools? And how are they preparing for this significant shift in how software is written? 

Snyk asked over 400 technologists a range of questions designed to gauge the AI readiness of their organizations and to measure their perceptions of AI coding tools. The survey covered three groups: C-suite technology executives, application security teams, and developers/engineers. These groups had different views on the security of AI coding tools and code, the efficacy of AI code security policies, and organizations' preparedness for AI coding. In this report, we outline the most notable research findings.

Part 1

Orgs are confident in their AI readiness, particularly leadership

Organizations generally feel confident that they are prepared to adopt AI. In response to questions that directly or indirectly gauge AI readiness, most respondents reported that their organizations are moving quickly to adopt AI, even to the point of short-circuiting standard use case analysis and pre-deployment testing. For their part, C-suite respondents are both more certain that their organizations are prepared to adopt AI and more certain that their AI tools are secure.


C-Suite is more confident that their organization is ready for AI coding tools

Chart: How ready is your organization for AI coding tools? Responses by role (CTO/CISO, AppSec/Sec, Dev/Eng) across five options: Extremely ready, Very ready, Ready, Somewhat ready, Not ready at all.

"Extremely ready" for AI? CISOs and CTOs 2x more likely than Devs to say so

Across all three role types, a majority of respondents said that their organization was “extremely ready” or “very ready” for AI coding tool adoption, and fewer than 4% said their organizations were not ready. However, C-suite respondents were more confident than the other groups that their organization is primed for AI coding tool deployment and adoption: 40.3% of that group rated their organization as “extremely ready,” compared to only 26% of AppSec team members and 22.4% of developers. There was no significant difference between CISOs and CTOs, which seems counterintuitive given the security and risk focus of CISOs. This could be due to the intense pressure on technology leadership to roll out AI coding tools quickly and accelerate software development processes. The other groups' relative reluctance likely reflects on-the-ground concerns about specific readiness issues around security, training, code quality, and other implementation-layer details.

CTOs and CISOs more strongly favor ASAP AI coding tool adoption

Among C-suite respondents, 32.5% felt the rapid adoption of AI coding tools is “critical,” making them almost twice as likely as AppSec respondents to see adoption as urgent. Developers were more enthusiastic than AppSec respondents but did not match C-suite enthusiasm. This intensity likely reflects strong demands from boards of directors and CEOs that CTOs move quickly to embrace AI.

Chart: How important is it for your organization to adopt AI coding tools ASAP? Share of respondents answering “Extremely important,” by role (CTO/CISO, AppSec/Sec, Dev/Eng).

Most respondents believe AI coding tool security policies are good

Across all three response groups, the majority of respondents, including more than two-thirds of C-Suite respondents and developers, found their organization’s AI coding tool policies to be adequate. Only a very small percentage found the policies to be overly restrictive. However, a far greater percentage of security practitioners found the policies insufficient, indicating that AppSec and security respondents still see risks in AI code security practices at their organizations.

Chart: How would you describe your organization's security policies for AI coding tools? Responses by role (CTO/CISO, AppSec/Sec, Dev/Eng) across three options: Insufficient, Adequate, Excessive.

63.3% rate AI-generated code security highly

Roughly two-thirds of respondents rated the security of AI-generated code as either “excellent” or “good,” and only 5.9% rated it as “bad.” Sentiment toward AI-generated code is positive across the entire sample, echoing positive sentiments about the policies governing AI coding tool use and adoption.

How would you rate the security of AI-generated code?

Bad: 5.9%
OK: 30.8%
Good: 44.3%
Excellent: 19%

Part 2

Organizations fear AI coding security but aren’t making proper preparations

Despite strong positive responses about organizational readiness, security policies, AI code quality, and risk, respondents still cite security as the biggest barrier to AI coding tool adoption. In a seeming contradiction of this sentiment, they are also failing to take basic steps to minimize risk and prepare their organizations, such as running POCs and training developers on AI coding tools.

Security fears remain the biggest AI coding tool barriers

Roughly 58% of respondents in each of the three groups agreed that security fears are the biggest barrier their organization faces in adopting AI coding tools. By contrast, under half of respondents viewed lack of executive buy-in as a barrier. This finding matches the general viewpoints of AppSec practitioners and, to a lesser degree, developers, yet it contradicts the generally positive view of AI coding tools and AI coding tool readiness expressed by the majority of respondents.

Chart: What barriers has your organization faced in adopting AI coding tools? Responses by role (CTO/CISO, AppSec/Sec, Dev/Eng) across four options: Lack of executive buy-in, Security fears, Low developer adoption, Lack of preparation and training.

Fewer than 20% of organizations ran AI tool POCs

The standard process for introducing new technologies and tools into an organization is to do a feature and cost analysis and then run a “proof of concept” (POC) exercise with a small subset of the team; this is how Pinterest’s platform engineering team approached AI coding tool adoption. Our survey found that fewer than 20% of organizations undertook POCs as part of their preparation for adopting AI coding tools. Among all the preparation steps, POCs were by far the least utilized: organizations were roughly one-third as likely to run a POC as to use other methods.

One possible explanation is that organizations viewed POCs as superfluous; notably, this finding applied equally to AppSec, CTO/CISO, and Dev/Eng respondents. While the majority of respondents indicated that their organization added more security tools and checks to prepare for AI coding tools, over one-third of organizations did not take this precaution. This implies that they either felt their existing software development practices were secure enough to cover any new challenges brought by AI or believed that AI coding tools don’t add meaningful risk to the software development lifecycle.

Chart: What preparation steps did your organization undertake before adopting AI coding tools? Responses by role (CTO/CISO, AppSec/Sec, Dev/Eng) across six options: POC; Security review; AI tool training; Create AI policies and procedures; Add additional security checks, tools, and code reviews; None of the above.

Only 44.8% of organizations gave the majority of developers AI coding tool training

Proper training is essential when adopting any new technology that could introduce considerable security risk. However, well under half of respondents (44.8%, the combined share of organizations that trained 51-75% or 76-100% of their developers) said their organizations provided AI coding tool training to the majority of their developers. This may reflect the ease of use of the tools, or the fact that many of the tools include security scanning as part of the workflow. That said, coding tools do not offer training on how users can spot the mistakes those tools make, even though such security mistakes are common and well documented (see the illustrative example below the chart).

Percentage of developers receiving AI coding tool training (share of organizations):

0-25%: 20.4%
26-50%: 34.7%
51-75%: 30.8%
76-100%: 14%
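To illustrate the kind of mistake such training would teach developers to catch, the snippet below shows a common, well-documented pattern: a database query built by interpolating user input into SQL, the sort of suggestion AI assistants are known to produce, followed by a parameterized rewrite. This is a hypothetical sketch; the function names and table schema are invented for illustration and are not taken from the survey.

```python
import sqlite3

# Hypothetical example of a query an AI assistant might suggest.
# Interpolating `username` into the SQL string allows SQL injection
# (e.g., username = "x' OR '1'='1").
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Safer rewrite a trained reviewer should ask for: bind the value as a
# parameter so the database driver handles quoting and escaping.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Automated scanning can flag the first pattern, but reviewers still need the judgment to recognize it when an assistant proposes it inline.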

Part 3

Those who work more closely with code have greater doubts about security issues

AppSec teams tended to have a more negative view of the security risks of AI and how their organization was handling those risks. This included a lower opinion of AI-generated code security, a greater perceived risk from AI tools, and a dimmer view of the sufficiency of their organization’s AI security policies.

AppSec team 3x more likely to rate gen-AI code security as “bad”

While representing a small percentage of total responses, AppSec and security practitioners were 3x more likely than C-suite respondents, and significantly more likely than developers, to rate the security of AI-generated code as “bad.” This divergence implies that those tasked with fixing and securing code may be alerted to the failures of AI tools more frequently than developers, who may not see the vulnerabilities and code errors, and C-suite members, who rarely touch code. On the opposite end of the spectrum, CTOs and CISOs were considerably more likely than developers, who work with AI-generated code daily, to believe that the quality of generated code is “excellent.” This likely implies developers are more realistic about the actual quality of AI-generated code and are more exposed to the flaws and problems that, according to Snyk’s own findings and academic research, are common in AI-created code.

These findings raise several questions. First, are organizations broadly underestimating risk from AI coding tools? Respondents across all roles, on average, rated AI code quality with high marks. This is despite multiple academic research papers finding that AI-generated code consistently injects security risk and requires additional code reviews and remediation. (See this Snyk Webinar for a Live Hack exploit of AI-generated code). Second, if CTOs and CISOs overestimate the quality of AI-generated code, is this because they receive imperfect information or have little direct contact with those working with the tools? And why are they not on the same page as developers?

Chart: How would you rate the security of AI-generated code? Share of respondents answering “Excellent,” by role (CTO/CISO, AppSec/Sec, Dev/Eng).

C-Suite 2x to 5x less likely to see security risk from AI coding tools

While respondents largely agreed that AI coding tools did not create an extensive risk, there was a large disparity among those who felt that AI is not risky at all. In our survey, 19.4% of C-suite respondents said AI coding tools are “not risky at all,” while only 4.1% of AppSec team members agreed. Developers were closer to AppSec views, with 8.8% of Dev/Eng respondents saying that AI coding tools are not risky at all. Conversely, 38.3% of AppSec practitioners felt AI coding tools were “very risky” or worse, while only 29.8% of C-suite respondents agreed. One interpretation of this finding is that AppSec teams, which are much closer to the daily remediation of flawed code and vulnerabilities, see many more security issues emanating from AI tools than the C-suite, which tends to be more removed from daily security and coding activities.

Chart: How would you rate your organization's security risk from the use of AI coding tools? Share of respondents answering “Not risky at all,” by role (CTO/CISO, AppSec/Sec, Dev/Eng).

AppSec practitioners 3x more likely to say AI security policies are insufficient

AppSec practitioners doubt their organization’s security policies for AI coding tools. Nearly three times as many respondents from AppSec roles described their AI coding tool policies as “insufficient” compared to the number of CTO and CISO respondents making the same observation. Developers and engineers fall in the middle, with 19% saying their org’s AI policies are insufficient versus 30.1% of AppSec members. In other words, the closer someone in the technology organization is to security processes, the less likely they are to approve of AI security policies. This could be an indication that AppSec teams are seeing more risks. It might also mean they feel that AI security policies need to be constructed in a way that aligns with application security requirements. C-suite respondents were the most likely to think these policies were excessive, which may reflect their strong desire to accelerate AI coding tool adoption, as expressed in other questions on this survey.

Chart: How would you describe your organization's security policies for AI coding tools? Responses by role (CTO/CISO, AppSec/Sec, Dev/Eng) across three options: Insufficient, Adequate, Excessive.

Conclusion

Organizations remain conflicted about their state of AI readiness and fail to take basic preparedness steps

Ready or not? Respondents are generally positive about the state of AI coding tool readiness in their organizations. They think their security policies are sufficient and that AI-generated code is secure, and, in the main, they believe they are ready for AI adoption. However, they remain conflicted on AI coding tool security: across all roles, security fears are perceived as the biggest barrier to adopting AI coding tools. In terms of practical preparation, less than one-fifth of respondents said their organizations ran POCs, a basic step fundamental to new technology adoption, and less than half said that the majority of their developers had received AI coding tool training. These contradictions may indicate a lack of planning and strategy, as well as a lack of structure around AI adoption.

Diving deeper, survey respondents demonstrated a consistent divergence by role in their perceptions of code quality, tool safety, and general organizational preparedness. The C-suite held a more positive view of AI coding tools and preparedness than respondents who work closer to the code or security processes and policies. In particular, security team members held a dimmer view of AI coding tool security, implying that this influential group is exposed to more problems generated by AI coding and is reacting accordingly. 

The above contradictions imply insufficient planning or cohesive strategy around AI coding tool adoption, as well as a lack of structure in determining and fulfilling necessary preconditions, potentially because of inconsistent cross-organizational visibility. This may have happened because, as with smartphones and certain consumer software products, adoption was initially rapid and uncontrolled before being institutionalized by IT organizations. In that sense, rollouts may have been chaotic at first and challenging to control later on. The bottom line, however, is that organizations should consider a more structured approach to AI coding tool adoption and security, one closer to the adoption processes used for other types of enterprise software. Taking this approach should help resolve security fears and address the outsized concerns of developers and security teams by putting better checks and balances in place and providing a more holistic, methodical, and programmatic approach to deploying a fundamental shift in the software development process.

Technology leaders listening to the signals from this survey could benefit from the following actions:

Set up a formal POC process for AI tool adoption.

Give more weight to recommendations from those most directly exposed to code security issues and tool risks.

Document and audit all instances of AI code generation tools to better inform security and QA processes.

Take regular pulse surveys of all three groups on AI coding topics.

Consider engaging expert guidance on AI best practices.

Drive executive buy-in by demonstrating ROI for AI security tools.

Adopt security tools that prevent and fix security incidents early in the development cycle (see the sketch after this list).

Adopt developer-friendly AI security tools and practices that fit into existing workflows and do not slow developers down.

Increase education and training for AI-generated code and tools to improve awareness and judgment. Use an AI coding assistant, such as Google Gemini, that integrates directly with Snyk and its extensive knowledge base.
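As a sketch of the “catch issues early” recommendation above, the script below runs the Snyk CLI as a simple pre-merge gate and blocks the merge if a scan reports findings. It assumes the Snyk CLI is installed and authenticated in the environment; the exact commands and their behavior can vary by CLI version, so treat this as an illustration rather than a definitive setup.

```python
import subprocess
import sys

# Minimal sketch of a pre-merge security gate. Assumes the Snyk CLI is
# installed and authenticated (`snyk auth`); adjust to your own setup.
CHECKS = [
    ["snyk", "test"],          # scan open source dependencies for known vulnerabilities
    ["snyk", "code", "test"],  # static analysis of first-party (including AI-generated) code
]

def run_checks() -> int:
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # A non-zero exit means the scan found issues (or failed to run);
            # stop here so findings are triaged before the merge proceeds.
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_checks())
```

Running a gate like this from CI keeps the check inside existing workflows, consistent with the developer-friendly tooling recommendation above.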

Methodology

For this report, we surveyed 406 IT professionals from around the world. Snyk limited the survey to respondents who described their roles as “CTO”, “CISO”, “developer”, “engineer”, “security”, or “appsec”. Snyk intends to continue collecting data for this survey at online and offline events throughout 2024 to paint an even broader picture of enterprise AI readiness and differences in perceptions of AI risks, preparedness, and challenges.


Snyk is a developer security platform. Snyk not only finds vulnerabilities in code, open source dependencies, containers, and IaC (Infrastructure as Code), but also helps teams prioritize and fix them. Built on a world-class vulnerability database, Snyk delivers expert insight into vulnerabilities.


© 2024 Snyk Limited
Registered in England and Wales
