Secure AI tool adoption: Perceptions and realities

June 4, 2024

In our latest report, Snyk surveyed security and software development technologists, from top management to application developers, on how their companies had prepared for and adopted generative AI coding tools. While organizations felt ready and believed AI coding tools and AI-generated code were safe, they skipped some basic steps for secure adoption. And within the ranks, those closest to the code harbor greater doubts about AI safety than those higher up in management. The survey covered 459 IT professionals globally, in roles ranging from CTO and CISO to developer, engineer, and AppSec practitioner. Snyk plans to continue collecting data throughout 2024 to build a broader understanding of enterprise AI readiness and perceptions of AI risks and challenges.

While this blog post introduces you to the topic, visit our interactive webpage for additional findings, helpful graphics, and the full Snyk Organizational AI Readiness Report.

Less than 20% of organizations conducted AI tool POCs

Despite the standard practice of running proof of concept (POC) exercises before deploying new technologies, less than 20% of organizations followed this step for AI coding tools. The broad availability and low entry barriers of these tools likely encouraged ad hoc adoption without a POC to identify risks and design adequate security guardrails. While many organizations added security measures, over one-third did not, suggesting they either considered their existing practices adequate or believed AI tools introduce no significant new risks. This was surprising, given the radical change that AI-generated code introduces into the software development lifecycle (SDLC).

C-suite more positive on AI readiness than others

C-suite respondents exhibited the greatest confidence in their organization's readiness for AI coding tools, with 40.3% rating their organization as "extremely ready," compared to 26% of application security team members and 22.4% of developers. This confidence may reflect the pressure on technology leadership to implement AI tools rapidly. It may also reflect that leadership does not work directly with AI coding tools or consistently review AI-generated code, and so has little direct knowledge of the downside risks.

AppSec most worried about “bad” AI code and security policies

Application security (AppSec) teams are twice as likely as developers and engineers to rate the security of AI-generated code as "bad." Conversely, C-suite respondents are more optimistic, with 29.8% rating it as "excellent." This discrepancy again suggests that those responsible for fixing and securing code are more aware of the vulnerabilities and errors that AI tools introduce. AppSec practitioners are also three times more likely than C-suite respondents to describe their organization's AI security policies as "insufficient," indicating a gap between those who develop and enforce security policies and those who oversee broader technology adoption.

Security fears are the biggest barrier to AI adoption

While nearly everyone said that AI for code is inevitable, a significant percentage cited ongoing concerns. Across all respondent groups, security fears were identified as the most significant barrier to adopting AI coding tools, with roughly 58% expressing this concern. This largely contradicts the same survey's finding that respondents consider AI-generated code mostly secure. The shared concern about security underscores the need for robust policies and measures, and for more specific planning and criteria around adoption practices.

Conclusion: Room for improvement in AI adoption and security practices

The majority of organizations are adopting AI coding tools. AI-generated code is becoming an accepted part of the software development lifecycle, deeply embedded in developer workflows. However, AI coding tools are novel and may introduce serious risks. This reality shows up in the stronger concerns about AI code and coding tools among developers and AppSec practitioners, even as the C-suite remains overwhelmingly positive about AI coding. To ensure that AI coding risk is adequately understood and managed, CTOs, CISOs, and their teams should create AI adoption playbooks and criteria, and build a more systematic approach to introducing new AI-powered tools for code and technology management. Recommended steps include:

  • Implement formal POC processes for AI tools.

  • Prioritize feedback from AppSec teams regarding code security and tool risks.

  • Ensure that everyone touching the tools and code receives sufficient training.

  • Collect and analyze instances of flawed AI-generated code to inform security and QA processes (see the sketch after this list).

  • Conduct regular surveys to align views on AI readiness and security across all groups.
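As a concrete starting point for the collect-and-analyze step above, the sketch below shows one way to surface AI-assisted changes for extra scrutiny. It assumes a hypothetical team convention of including an "AI-Assisted" marker in the commit message of any AI-assisted change; the marker name and the workflow around it are illustrative, not a standard.

```python
#!/usr/bin/env python3
"""Minimal sketch: list files touched by commits flagged as AI-assisted.

Assumes a hypothetical team convention of putting the marker
"AI-Assisted" in the commit message of any AI-assisted change.
"""
import subprocess


def ai_flagged_files(rev_range: str = "origin/main..HEAD",
                     marker: str = "AI-Assisted") -> list[str]:
    """Return files touched by commits in rev_range whose message
    contains the marker."""
    # Find commits whose message mentions the marker.
    shas = subprocess.run(
        ["git", "log", f"--grep={marker}", "--format=%H", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    files: set[str] = set()
    for sha in shas:
        # List the paths each flagged commit changed.
        out = subprocess.run(
            ["git", "show", "--name-only", "--format=", sha],
            capture_output=True, text=True, check=True,
        ).stdout
        files.update(line for line in out.splitlines() if line)
    return sorted(files)


if __name__ == "__main__":
    for path in ai_flagged_files():
        print(path)
```

The resulting file list could then be fed to a static analysis scan (for example, `snyk code test` in the Snyk CLI) or routed to mandatory AppSec review, turning flawed-AI-code collection into a repeatable part of the pipeline rather than an ad hoc exercise.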

Don't forget to visit our interactive webpage for additional findings and helpful graphics, and to download the full Snyk Organizational AI Readiness Report.

AI best practices in the SDLC

Download the cheat sheet to learn best practices for using AI securely in the SDLC.