2023 AI Code Security Report

AI code, security, and trust in modern development

56.4% say insecure AI suggestions are common — but few have changed processes to improve AI security. Despite clear evidence that these systems consistently make insecure suggestions, security behaviors are not keeping up with AI code adoption.

Executive summary

In a short period of time, AI code completion tools have gained significant market penetration. In our survey of 537 software engineering and security team members and leaders, 96% of teams use AI coding tools, making them part of the software supply chain. Despite their high levels of adoption, AI coding tools consistently generate insecure code. Among respondents, over half said that AI coding tools commonly generate insecure code suggestions. At the same time, less than 10% of these organizations automate the majority of their security scanning. On top of that, 80% of developers bypass AI code security policies. To mitigate risks from AI coding tools and overreliance on AI, technology teams need to put in place security measures to handle the higher pace of code development, more fully automate security processes, and educate teams on using AI suggestions securely.

Part one

Risks of outsourcing code security to AI

Survey responses indicate that AI code completion continues to inject security risks into the development process. What’s more, developers are actively bypassing AI usage policies for coding. All of this is happening without putting in place proper guardrails, such as automated code scanning. Open source code is a particular risk as AI coding tools speed up velocity and suggest open source modules, but teams are not programmatically validating that suggested open source components are secure.

AI coding tools generate insecure code. Developers ignore this fact.

In December 2022, Stack Overflow banned all AI-generated submissions from ChatGPT on its coding Q&A site, stating, “The average rate of getting correct answers from ChatGPT is too low.” The assertion echoed multiple respected academic studies from New York University and Stanford University, which found that AI code completion tools consistently made insecure suggestions and that coders relying heavily on the tools wrote more insecure code.

In our survey, 75.8% of respondents said that AI code is more secure than human code. This massive discrepancy between perception and evidence points to major problems with the way organizations are securing their development process against AI coding tools and educating their technology teams on the known risks of AI for code generation.

Figure: Percentage of coders submitting secure answers to coding questions, using AI vs. not using AI, across four tasks: encryption and decryption, signing a message, sandboxed directory, and SQL.

Source: “Do Users Write More Insecure Code with AI Assistants?”, Stanford University
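
To make the task categories above concrete, here is a minimal, hypothetical illustration of the kind of flaw these studies measured: an SQL query assembled by string interpolation, a pattern code assistants frequently suggest, next to the parameterized version that closes the injection path. It is an illustration of the vulnerability class, not a sample from the study.

```python
# Hypothetical illustration: an AI-style SQL suggestion vs. a secure rewrite.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern often seen in AI suggestions: untrusted input interpolated
    # directly into the SQL string, making the query injectable
    # (e.g. username = "' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value safely,
    # so attacker-controlled input cannot change the query structure.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```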

56.4% commonly encounter security issues in AI code suggestions

Despite voicing strong confidence in AI code completion tools and demonstrating strong adoption of the tools, respondents acknowledge that AI does introduce security issues: 56.4% say that AI code suggestions introduce security issues sometimes or frequently.

This indicates that AI tools require verification and auditing for all suggestions due to the high rate of potentially flawed code produced. Despite the fact that respondents say that security issues with code suggestions are common, 75.4% of respondents rated the security of AI code fix suggestions as good or excellent — indicating a deep cognitive bias that is extremely dangerous for application security.

How frequently do you encounter a security issue due to code suggested by an AI tool?

Frequently: 20.5%
Sometimes: 35.9%
Rarely: 34.6%
Never: 5.8%
Not Sure: 3.2%

How would you rate the security of AI code fix suggestions?

Figure: distribution of ratings across Excellent, Good, Fair, Poor, and Not applicable.

80% bypass security policies to use AI, but only 10% scan most code

While most respondents’ organizations had policies allowing at least some use of AI tools, the overwhelming majority reported that developers bypass those policies. In other words, developers trust AI to deliver code and suggestions more than they trust company policies governing AI use.

This creates tremendous risk because, even as companies quickly adopt AI, they are not automating the security processes needed to protect their code. Only 9.7% of respondents said their team automates 75% or more of security scans, even though developers using AI tooling are likely producing code more quickly. This combination of low policy compliance and increased code velocity makes automated security scanning more important than ever.
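
One way to raise that automation figure is to make scanning an unavoidable pipeline step rather than a manual task. The sketch below is a minimal, hypothetical CI gate, assuming a scanner CLI such as Snyk is installed and authenticated and exits non-zero when it finds issues; verify the exact commands and flags for your own tooling.

```python
# Minimal sketch of a CI gate that runs security scans on every build.
# Assumes the Snyk CLI is installed and authenticated; any scanner that
# exits non-zero on findings can be substituted.
import subprocess
import sys

SCANS = [
    ["snyk", "test"],         # open source dependency scan
    ["snyk", "code", "test"], # static analysis of first-party code
]

def run_scans() -> int:
    # Run every scan and keep the worst exit code so one failure fails the build.
    worst = 0
    for cmd in SCANS:
        result = subprocess.run(cmd)
        worst = max(worst, result.returncode)
    return worst

if __name__ == "__main__":
    # A non-zero exit fails the pipeline, blocking merges on findings.
    sys.exit(run_scans())
```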

How often do developers in your organization bypass security policies in order to use AI code completion tools?

All the time: 23.1%
Most of the time: 31.8%
Some of the time: 25%
Rarely: 12.7%
Never: 7.4%

What percentage of your security scanning is automated?

Figure: distribution of responses across 1-25%, 26-50%, 51-75%, and 76-100%.

“By using Snyk Code’s AI static analysis and its latest innovation, DeepCodeAI Fix, our development and security teams can now ensure we’re both shipping software faster as well as more securely.”

Steve Pugh

CISO, ICE/NYSE

AI further exposes open source supply chain security

In the survey, 73.2% of respondents said they contributed code to open source projects, so the average survey respondent is knowledgeable about open source. Despite this understanding, few use more advanced and reliable security practices to validate that code suggestions from AI coding tools are secure: only 24.6% used software composition analysis (SCA) to verify the security of code suggestions from AI tools. Higher code velocity would likely accelerate the rate at which unsafe open source components are accepted into codebases.
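
A lightweight programmatic check can sit between an AI suggestion and the dependency manifest. The sketch below queries the public OSV.dev advisory database for a suggested package and version; it is an illustration of the idea rather than a full SCA workflow, and the endpoint and response fields should be verified against the current OSV documentation.

```python
# Minimal sketch: before accepting an AI-suggested dependency, ask a public
# vulnerability database whether the exact version has known advisories.
# Endpoint and response shape are assumptions based on the OSV.dev query API.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OSV returns a "vulns" list when advisories match the package and version.
    return body.get("vulns", [])

if __name__ == "__main__":
    # Example: check an older release of a popular package before adding it.
    advisories = known_vulnerabilities("requests", "2.19.1")
    print(f"{len(advisories)} known advisories")
```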

Because AI coding systems use reinforcement learning algorithms to improve and tune results, when users accept insecure open source components embedded in suggestions, the AI systems are more likely to label those components as secure even if this is not the case. This can create a dangerous feedback loop and potentially lead to more insecure suggestions.

Do you use AI code completion tools for work on open source projects?

Yes: 83.2%
No: 16.8%

How do you verify the security of open source packages and libraries included in AI-generated code suggestions?

Figure: responses across the following options:
Check information in the registry or package manager
Repository ratings
Community activity
Verify a responsible disclosure policy (such as a SECURITY.md)
Security scorecard
SCA tool
Code reviews
Do not check the safety of open source packages suggested by AI tools

AI considered part of software supply chain, but few change practices

55.1% of respondents said that their organizations now consider AI code completion to be part of their software supply chain. That view has not yet translated into correspondingly significant changes to application security processes. While the majority of respondents said their team had made at least one change to software security practices as a result of AI code completion tools, the percentages for each option in this multi-select question were on the low side.

The most common change, reported by 18.4% of respondents, was more frequent security scans. This lack of change could be attributed to the false perception that AI code suggestions are more secure than human code. Ultimately, a significant change in the way we work usually necessitates a review of risk management and corresponding adjustments to address the new and additional risks the change introduces.

Does your organization consider AI code completion to be part of its software supply chain?

Yes: 55.1%
No: 22.7%
Not sure: 19.4%
Not applicable: 2.8%

How has your organization changed your software security practices as a result of AI code completion?

Figure: responses across the following options:
More frequent code audits
More detailed code audits
More frequent security scans
Added new tooling
Implemented security automation
Added new security processes (e.g., SBOMs, SLSA)
Has not changed at all

Part two

Developers recognize risks of AI blindness, reliance

Even though developers perceive AI-written code to be secure, they overwhelmingly worry that AI code completion tools will create greater insecurity and that they will become over-reliant on the tools. In organizations that restrict AI usage, problems with code quality and security are the primary reasons for restrictions. Respondents acknowledge that a significant percentage of AppSec teams are struggling to keep pace with higher code velocity. All of this points to a need to prioritize process and tooling changes, such as more automated security scanning, alongside continued education of development teams so that they are more aware of the real risks of AI code suggestions.

86% are concerned about AI security, indicating cognitive dissonance

The overwhelming majority of respondents expressed concerns about security implications of using AI code completion tools. This appears to contrast with the strong confidence in the ability of AI coding tools to generate secure code and to make code suggestions to improve security.

That cognitive dissonance is potentially a result of herd mentality: developers believe that because everyone else is using AI coding tools, the tools must be trustworthy, and that belief drives their actions. At a more contemplative level, however, they understand the risks and recognize that AI may inject more insecure code than they realize or can easily see without more comprehensive security measures.

How concerned are you about the broader security implications of using AI code completion tools?

Very Concerned: 37.1%
Somewhat Concerned: 49.9%
Not Concerned: 13%

Security, data privacy concerns are main reasons for AI code restrictions

For the small subset of companies that restrict AI coding tools in part or in whole, the most common reason behind the restrictions was code security (57%), followed by data privacy (53.8%) and code quality (46.4%). The leading reasons for restricting AI all related to security and risk, reflecting leadership concerns about potential negative or unmitigated impacts of AI code completion.

If your organization restricts the use of AI coding tools, what are the reasons for the restrictions?*

Figure: responses across the following options:
Security Concerns
Data Privacy Concerns
Quality Assurance Concerns
Cost Concerns
Lack of Management Buy-In
None of the above

Developers are concerned about AI overreliance

A common concern is that developers using AI will become overly reliant on the coding tools and lose their ability to write code on their own or to perform the key coding tasks they commonly delegate to AI. Some research suggests that knowledge workers who rely too heavily on strong AI become less likely to recognize good solutions that are atypical or outside familiar patterns. Respondents shared this concern, with 46% saying they were somewhat concerned and 40% saying they were very concerned. In other words, they appear to be aware of the risks of outsourcing too much of their craft to AI.

How concerned are you that developers are relying too much on AI code completion tools?

Very Concerned: 40%
Somewhat Concerned: 46%
Not Concerned: 14%

58.7% of AppSec teams are struggling to keep up

Since AI coding tools have improved productivity and likely increased the velocity of code production, if not the number of lines of code produced, we asked whether this was putting more pressure on AppSec teams. Over half of respondents said their AppSec team is struggling to some degree, with one-fifth struggling significantly to keep up with the new pace of AI-driven code completion. This is to be expected if the productivity boost from AI code completion tools is meaningful. It also underscores the challenge of adding more pressure to a process that, even before AI, often struggled to keep up with the pace of software development.

Is your AppSec or security team struggling to adapt to the speed of development due to AI code completion?

Struggling significantly: 20.5%
Struggling moderately: 38.2%
Coping well: 35%
Not affected: 6.3%

Conclusion

To fix the AI infallibility bias, educate and automate security

There is an obvious contradiction between the developer perception that AI coding suggestions are secure and the overwhelming research showing that this is often not the case. The tension is underscored by seemingly contradictory responses in this survey: most respondents (including security practitioners) believe AI code suggestions are secure while simultaneously admitting that insecure AI code suggestions are common.


This is a perception and education problem, caused by groupthink, driven by the principle of social proof and humans’ inherent trust in seemingly authoritative systems. Because the unfounded belief that AI coding tools are highly accurate and less fallible than humans is circulating, it has become accepted as fact by many. The antidote to this dangerous false perception is for organizations to double down on educating their teams about the technology they adopt, while securing their AI-generated code with industry-approved security tools that have an established history in security.

About this report

The survey contained 30 questions covering how organizations perceive and use AI code completion tools and generative coding. It polled 537 respondents working in technology roles: 45.3% were from the United States, 30.9% from the United Kingdom, and 23.6% from Canada.

We asked respondents to self-identify their roles, choosing all titles that applied. The most frequently selected were developer management (42.1%), developer (37.6%), IT management (30.9%), and security management (30.7%), indicating that the panel included a significant portion of respondents in management roles. Respondents were spread broadly across sectors. SaaS/technology represented the largest pool of respondents (21%) and was the only sector accounting for more than 20% of responses. Only two other sectors, business services (17.1%) and financial services/fintech (11.2%), represented more than 10% of respondents. The panel skewed toward smaller companies, with 48.6% of respondents working at companies of 500 employees or fewer and only 12.8% at companies of more than 5,000 employees.

Respondents also used a wide variety of coding tools. The largest percentage cited ChatGPT (70.3%), followed by Amazon CodeWhisperer (47.4%), GitHub Copilot (43.7%), Microsoft's Visual Studio IntelliCode (35.8%), and Tabnine (19.9%). This was a multi-select question, and the high percentages across multiple responses indicate that respondents are likely using multiple AI coding tools in their jobs, potentially for different reasons or tasks.


Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.


© 2024 Snyk Limited
Registered in England and Wales
