
Does AI lead to AppSec hell or nirvana?


October 3, 2023


The use of artificial intelligence in every area of life, from writing papers to maintaining critical infrastructure to manufacturing goods, is a controversial topic. Some are excited about the possibilities that come with AI/ML tech, while others are fearful and hesitant. These differing opinions raise a fundamental question: will AI turn our modern-day society into a utopia or a dystopia?

These ongoing discussions also raise a question more specific to our industry: is using AI when building and securing applications a good idea, or will it just lead to disaster? Ravi Maira, VP of Product Marketing, and Randall Degges, Head of DevRel & Community, discussed using AI in AppSec during a LinkedIn Live discussion in June 2023.

Read on to catch some of the highlights from their talk, including how AI can help businesses, where to be cautious of its inherent flaws, and what to expect from AI in application development and security in the future. 

Why AI matters to today’s businesses

When used correctly, artificial intelligence can boost development productivity and innovation. Here are a few ways that AI improves coding, security, and productivity:

AI is a production multiplier

AI can dramatically accelerate productivity for developers and security professionals by producing more precise answers and connecting the dots during research. 

Randall gave a real-world example of using AI for research: “Last night, I was working on a project. And I downloaded one of our podcast episodes from one of the podcasts that we did… But it was too big for the tool I was using… My default instinct was to go to Google and say, ‘What can I use to break up this mp3 file into smaller mp3 files?’ I could load up a big audio editing tool suite, but that's a lot of work. So I just asked the question in ChatGPT. And it spat out an FFmpeg command-line command that I could run to get the entire output done. That would’ve taken me a while to figure out.”
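For readers who want to try the same trick, here is a minimal sketch of that workflow in Python; the exact flags, segment length, and file names are illustrative assumptions, not the command from the conversation:

```python
import subprocess

def split_mp3(input_path: str, segment_seconds: int = 600) -> None:
    """Split an mp3 into fixed-length chunks using ffmpeg's segment muxer.

    Assumes ffmpeg is installed and on the PATH.
    """
    subprocess.run(
        [
            "ffmpeg",
            "-i", input_path,                        # source file
            "-f", "segment",                         # use the segment muxer
            "-segment_time", str(segment_seconds),   # chunk length in seconds
            "-c", "copy",                            # copy the stream, no re-encode
            "out%03d.mp3",                           # out000.mp3, out001.mp3, ...
        ],
        check=True,
    )

split_mp3("podcast_episode.mp3")
```

The `-c copy` flag is what makes this fast: ffmpeg splits the stream without re-encoding the audio.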

Developers can code better with AI

In addition to using AI for research, developers can use AI-driven tools to understand complex code. If a developer uses a search engine to find new code, they have to work out what it does on their own, a grueling process that takes significant time and effort. By contrast, a solution like Beto uses AI to explain each line of code to developers, making it far easier for them to understand and work with new code.
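As a rough illustration of the general pattern (this is not Beto's implementation, just a sketch assuming the OpenAI Python client, an `OPENAI_API_KEY` in the environment, and a hypothetical `explain_code` helper):

```python
from openai import OpenAI  # assumes the `openai` package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_code(snippet: str) -> str:
    """Ask a general-purpose model to explain a snippet line by line."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "Explain the following code line by line "
                           "for a maintainer who has never seen it.",
            },
            {"role": "user", "content": snippet},
        ],
    )
    return response.choices[0].message.content

print(explain_code("total = sum(x * x for x in range(10))"))
```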

AI backs some of today’s most powerful AppSec tools

AI can boost application security efforts as well. While it’s clear that a single large language model (LLM) isn’t advanced enough to work in isolation, it becomes a powerful solution once combined with other technologies. Ravi said, “It's this notion of ‘don't just use one model, but use different methods of machine learning and AI at the right time on one specific goal.’”
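One hand-wavy way to picture that layering is a pipeline where a deterministic, rule-based pass finds candidate issues and a model-driven pass triages them. Everything below, including the toy regex rule and the stubbed model step, is an illustrative assumption:

```python
import re

def rule_based_scan(source: str) -> list[str]:
    """First pass: a deterministic rule (here, a toy regex flagging eval calls)."""
    return [m.group(0) for m in re.finditer(r"\beval\s*\(", source)]

def model_triage(finding: str, source: str) -> bool:
    """Second pass: a stand-in for an ML/LLM step that filters noise.
    A real system would call a model here; this stub keeps the sketch runnable."""
    return "input(" in source or "request" in source

def scan(source: str) -> list[str]:
    # Each technique is applied where it is strongest:
    # rules for recall, the model pass for precision.
    return [f for f in rule_based_scan(source) if model_triage(f, source)]

print(scan("result = eval(input('expr: '))"))  # ['eval(']
```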

The risk of using AI in AppSec

Although AI can be helpful, it’s important not to get too swept away in the excitement of this new technology. When used the wrong way, AI can cause lots of damage. Ravi and Randall mentioned a few risks that arise when using AI for application security and development.

Hallucination

Hallucination, the term for when an AI generates false information, can cause problems wherever people use AI, including in coding and security practices. Developers and security professionals must therefore be very cautious when they use AI to generate research answers, code, and other artifacts.

This level of unreliability means that teams still need to do their own research, training, and knowledge-sharing. It also means that teams should follow a “trust, but verify” mentality. Information generated by a single AI tool must be backed up by human research or other resources such as security tooling. 
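In practice, “trust, but verify” can be partially automated. The sketch below gates AI-generated Python behind a syntax check and a linter before any human review; the `verify_generated_code` helper is hypothetical, and the `ruff` linter is an assumed tool you could swap for your own:

```python
import subprocess
import tempfile

def verify_generated_code(code: str) -> bool:
    """Never accept AI output unchecked: compile it, then lint it.
    A real pipeline would also run unit tests and a security scanner."""
    try:
        compile(code, "<generated>", "exec")  # cheap syntax check, no execution
    except SyntaxError:
        return False
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # Assumes the `ruff` linter is installed; substitute your own tooling.
    result = subprocess.run(["ruff", "check", path], capture_output=True)
    return result.returncode == 0

print(verify_generated_code("def add(a, b):\n    return a + b\n"))
```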

Low-quality AI inputs

Ravi and Randall also mentioned that AI outputs can only be as good as the inputs. Randall explained, “one of the fundamental rules of using LLMs is the more context you have, the better quality output you will have. So if you have a blank ChatGPT session and you're like, ‘Hey, build a Python function that does this’, and you give it a very vague definition, you're probably gonna get some sort of random output, regardless of how you play around with the temperature settings in the underlying models. [Instead], give it more context, like, ‘Look, I have this entire file or this directory of files.’”
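To make the contrast concrete, here is a sketch of a vague prompt versus a context-rich one; the prompts, model name, and OpenAI client usage are illustrative assumptions:

```python
from openai import OpenAI  # assumes the `openai` package, v1+, and an API key

client = OpenAI()

vague = "Build a Python function that parses the file."

contextual = (
    "Here is the module I'm working in:\n\n"
    "import csv\n"
    "def load_users(path): ...\n\n"
    "Write a companion function `load_orders(path)` that follows the same "
    "csv.DictReader pattern and returns a list of dicts."
)

for prompt in (vague, contextual):
    reply = client.chat.completions.create(
        model="gpt-4",    # illustrative model choice
        temperature=0.2,  # the knob Randall mentions; lower = less random output
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
```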

But, even if you write a highly specific prompt (which can be challenging), there’s no guarantee that the results will be perfect. The AI-generated results will only be as good as the samples that the model learned from. According to Ravi, “[AI is] learning on insecure code. So when you have this question of, ‘is ChatGPT going to just solve the problem of insecure code?’...the answer is not today…They've learned on insecure code; therefore, they're generating insecure code.” 

Additionally, because of the way LLMs work, there are a variety of security concerns around AI inputs. These risks include prompt injection (for user-facing applications) as well as more subtle problems around deceptive “leading” inputs. For example, if you supply a chunk of code to ChatGPT and ask it to find any relevant security issues, but the code has misleading variable or class names, you are likely to get unexpected outputs with misleading information, because generative models take those naming cues at face value.
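Here is a contrived example of that kind of “leading” input: the helper’s name promises sanitization it never performs, so a reviewer, human or model, who trusts the name will miss the injection risk.

```python
import sqlite3

def sanitize_user_id(raw: str) -> str:
    """Despite the reassuring name, this performs no sanitization at all."""
    return raw

def fetch_user(conn: sqlite3.Connection, raw_id: str):
    user_id = sanitize_user_id(raw_id)  # the name suggests the value is now safe
    # Still injectable: the value is interpolated directly into the SQL string.
    return conn.execute(f"SELECT * FROM users WHERE id = {user_id}").fetchall()
```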

The future of AI in application development

As AI continues to become prevalent in our world, particularly in application development and security, Ravi and Randall see a few trends rising. First, they predict AI will empower “civilian coders”: people with no previous development experience who create code with these tools.

They also foresee that the need for strong application security tooling will continue to grow — especially tools that smartly leverage AI. The tools that will rise to the top in this ever-evolving industry will lean on AI for innovation but use humans or additional technology to cross-reference every AI-generated claim.

To learn more about AI in AppSec, check out Ravi and Randall’s entire conversation on LinkedIn.
