Season 8, Episode 142

The AI Security Report

Listen on Apple Podcasts | Listen on Spotify | Watch on YouTube

Episode Summary

In this episode of The Secure Developer, our co-hosts Simon Maple and Guy Podjarny discuss the rise of AI in code generation. Drawing from Snyk's 2023 AI Code Security Report, they examine developers' concerns about security and the importance of auditing and automated controls for AI-generated code.

Show Notes

In this compelling episode of The Secure Developer, hosts Simon Maple and Guy Podjarny delve into the fascinating and fast-paced world of artificial intelligence (AI) in code generation. Drawing insights from Snyk's 2023 AI Code Security Report, the hosts discuss the exponential rise in the adoption of AI code generation tools and the impact this has on the software development landscape.

Simon and Guy reveal alarming statistics showing that most developers believe AI-generated code is inherently more secure than human-written code, but they also express deep-seated concerns about security and data privacy. This dichotomy sets the stage for a stimulating discussion about the potential risks and rewards of integrating AI within the coding process.

A significant point of discussion revolves around the need for more stringent auditing for AI-generated code and much tighter automated security controls. The hosts echo the industry’s growing sentiment about the importance of verification and quality assurance, regardless of the perceived assurance of AI security.

This episode challenges conventional thinking and provides critical insights into software development's rapidly evolving AI realm. It's an insightful listen for anyone interested in understanding the interplay of AI code generation, developer behaviors, and security landscapes.


"Simon Maple: Now, three-quarters of respondents think that AI-generated code is more secure than human code. And this worries me in a couple of aspects because –" 

"Guy Podjarny: I wonder how many of them would have said that it's more secure than their code versus human code at large. I write better code. But human code is not as secure as AI code."

[0:00:25] ANNOUNCER: You are listening to The Secure Developer, where we speak to leaders and experts about DevSecOps, Dev and Sec collaboration, cloud security and much more. The podcast is part of the DevSecCon Community, found on devseccon.com, where you can find incredible Dev and security resources and discuss them with other smart and kind community members. 

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers and infrastructure as code. To learn more, visit snyk.io/tsd. 


[0:01:12] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today we're going to have a conversation. I get to grill Simon a little bit over here. But really, we're both going to chat a little bit about a new report that we just released at Snyk about AI and code security. Clearly a kind of nascent space and one that would evolve. And we're all learning our way through it. 

There are some interesting topics here to both share the data from the report, which we will link to in the show notes. And just sort of discuss, chit-chat a little bit about it. Start sharing the opinions we've been forming. Simon, excited to sort of have this conversation with you jumping in here.

[0:01:48] Simon Maple: Absolutely. And it's nice to be on the other side sometimes to have some questions thrown at me on the podcast.

[0:01:52] Guy Podjarny: I'm going to try to be as hard as I can. At some point – no. I'm just joking. You weren't that bad in – 

[0:02:02] Simon Maple: I know. I've learned for many years now to be able to give an elusive answer. We'll be fine.

[0:02:07] Guy Podjarny: Just to recall, you have a fondness for questions like what's your favourite animal? If you could be any fruit, what fruit would you be? Maybe we should start with that. 

[0:02:16] Simon Maple: Go on then. 

[0:02:17] Guy Podjarny: If you could be any fruit, which fruit would you be? 

[0:02:20] Simon Maple: Wow. You know what? I think I might be a papaya. Because it's just – I was about to say because it's so interesting. But now it's making me sound like very full of myself. 

[0:02:29] Guy Podjarny: It's exotic.

[0:02:30] Simon Maple: A papaya is a very – is such an interesting, beautiful fruit that – gosh. Now I'm just naming my favorite fruit. And now I feel embarrassed having to connect to it. 

[0:02:39] Guy Podjarny: It's known to be grown also in all sorts of towns around the UK. It hails from the same locations.

[0:02:46] Simon Maple: Yeah. Yeah. 

[0:02:47] Guy Podjarny: Anyways, I think we – if we haven't lost the listeners quite yet, a little bit into AI security here on it. Simon, can I ask you, tell us a little bit about this report that we just issued? Just give us a bit of context. What is the survey? What is it about? 

[0:03:02] Simon Maple: Yeah, absolutely. At Snyk, we've done a number of surveys quite similar to this in the past, whether around supply chain topics or Snyk's State of Open Source Security report, which has been done for many, many years. This is our first one in and around AI. And we actually focused the report in and around AI coding assistants. The report is called the 2023 AI Code Security Report. 

And it's very much – it is a catchy title. It rolls off the tongue. It's very much focused on an organization's, I guess, adoption, the usage. But also, very focused on the sentiment, which is why it was very survey-driven, in and around AI coding assistants. 

There were a number of people – over 500 software engineering and security folks – that were interviewed for this, or rather filled out the survey. And that was only done quite recently. And given the speed at which AI sentiment and adoption changes, it's important for that to be done recently. It was only October of this year that it was done. And yeah, we cover a ton of different questions about the usage, people's concerns that they would have over it. And so, yeah, there's a ton of great information that we can dig into.

[0:04:07] Guy Podjarny: Yeah, for sure. We'll dig into the data. But maybe we take a moment to think a little bit about the survey report. I think one of the interesting things with the State of Open Source Security that we've been running for a bunch of years is that, while the report is oftentimes indeed sort of sentiment and impressions and how-do-you-think or opinion-based, it doesn't always fully conform to the realities or the facts of it. 

The perception, and the way it changed over time – quite clearly, in developer security, we've sort of seen an increase in both sense of ownership and activities and actually kind of how well people are handling it. I think the trend lines are very valuable, right? 

And I think with AI coding assistants, nobody really has firm opinions yet. Because it's all really quite new and novel. But I guess I always find that I take some issue with the factual aspects of the data. It's easy to poke holes, and there's always like a little bit of a selection bias in the people that opted in and answered, et cetera. 

But I'm interested in learning it. But maybe I'd encourage everybody to think a little bit about this both in terms of current status, but also it's interesting to maybe forecast or think about where it is going to get to and how this will trend as we do the next one in, whatever, a year's time. 

[0:05:27] Simon Maple: Yeah. Absolutely. And I think actually when you think about survey data and sentiment data as well, I think this is possibly more interesting right now in and around AI compared to perhaps some of the other topics that we've done these types of surveys around. Because there were more – I guess the stability of open source when open source had been around for tens of years in common usage among developers is quite different to how we perceive AI right now. 

And I feel like if we'd have done this survey today versus in six months or a year, we could very well have got quite different answers. And I think sentiment data at this stage is actually quite interesting. It's not necessarily something that we want to purely make decisions on by itself. But it's really important, I feel, that we should look at this sentiment data as just a single data point among others when thinking about what are potential issues that we could have. And using other data sources as well to back that up, both internally within our own organizations as well as across the market. But no, absolutely. I agree.

[0:06:23] Guy Podjarny: Yeah. Absolutely. And we're recording this episode after the horrendous weekend at OpenAI, with Sam Altman going out of and back into leadership at OpenAI. But the survey itself happened before. We might already need to rerun it to see if faith in AI has grown or diminished as part of that exercise. But maybe just a little bit of contextualization of a dramatic domain, shall we say.

[0:06:47] Simon Maple: Yeah.

[0:06:47] Guy Podjarny: I guess let's maybe start a little bit on the adoption side. What does the survey say about kind of usage or adoption of AI coding assistants in development? 

[0:06:56] Simon Maple: Well, the first stat which kind of like jumps out straight away is the overwhelming adoption of AI. And of course, when we talk about selection bias and talk about bias of people taking an AI survey, you would expect that majority of people who are taking an AI survey already have an interest in it. 

[0:07:14] Guy Podjarny: It's already a topic they – 

[0:07:15] Simon Maple: Absolutely. Yeah. 96% of teams are using AI coding generation tools, coding assistants. And this is the interesting one. Over half of those teams use them most or all of the time. I think it's really important to kind of like recognize not just that people are using it, but they're already being very well embedded into their organizations and well embedded into their teams. 

And I feel that this is a slight – a very subtle but important difference between a team assessing a tool or a product to use and actually recognizing the value in it straight away and wanting to jump in wholeheartedly with their team in doing so.

[0:07:53] Guy Podjarny: I thought it was an amazing stat. And I'm sure, the selection bias immediately jumped to mind when I saw it. Probably that. But still, the consistent – all the stats you see about levels of engagement with the co-pilots of the world are quite staggering. 

I do say, just to sort of add a drop of cynicism to it, that the nature of these products is you install them once and then they themselves just show up every time you type a character. And so, I think there's like a little bit of a question mark, right? If I gave you a thousand suggestions and you took one, does that make you an engaged user or not? And you probably got a thousand suggestions over the day if you're typing and coding in your IDE and you have a copilot installed. 

And so, not to say – like, I'm generally a believer that these are naturally high-engagement, high-productivity tools. We'll talk about satisfaction in a sec. But I just sort of felt like I always – it is the nature of the installation that just puts them constantly in the line of fire in a very low-friction manner. 

And in fact, that was one of the beauties maybe of the idea of code completion, as in text completion, is that you can be wrong nine times out of 10, 99 times out of 100 and still be valuable. Because you're just really not in the way. 

[0:09:05] Simon Maple: I mean, it's an extremely sticky kind of like user experience in that sense whereby it's always in the – it's always at your fingertips. Although that can have a positive and a negative effect. And I've noticed a few times, when I'm just tabbing and trying to get my cursor to the right space, all of a sudden, I'm accepting – if it's fast enough, I'm accepting something which could be many, many lines of code. And I'm having then to delete those to actually just get back. 

While it's always on, sometimes I could imagine people wanting to turn off when they are absolutely not wanting to use it, just wanting to turn off and only use it for things that they want to use it for. But yeah, I hear you in that sense. It's one of those that's always – 

[0:09:38] Guy Podjarny: Some people are probably literally sort of changing the way they develop, and they start writing comments in places and just having copilots sort of autocomplete them. I don't know those statistics. I haven't come across them. Anyway, we didn't ask about that. We just sort of asked – 

[0:09:51] Simon Maple: Yeah. Absolutely. 

[0:09:53] Guy Podjarny: Yeah. What about satisfaction? 

[0:09:55] Simon Maple: Satisfaction. Yeah. I mean, almost 72% of respondents did say that these AI coding tools were actually making them and their teams somewhat or much more productive. And I think that's the core piece, right? And that's, I think, the piece that people tend not to really – or rather, it's something that people assume is a given when they're using it. We tend not to need to ask the question too much of is this actually helping you. Because there's such a pull from developers. 

Although, that said, I have spoken to a number of different developers. And I think this – I wouldn't say the seniority, but certainly the experience of the developer really will have an impact on whether they would find it of great value, or maybe it actually makes them less productive or around the same, just based on: do they know better than Copilot in terms of what they want and how they would do things? Are they actually fixing things at a higher rate because Copilot is introducing those issues more than they would if they were writing the code themselves? Yeah, it's interesting, I think. While it's overwhelmingly 72%, I think if we were to break that down a little bit, we'd get different types of answers from different people.

[0:10:57] Guy Podjarny: Yeah. And maybe a thing to refine I guess in sort of questions in the future on it. It is though – I mean, probably one with the security hat on, one takeaway of it is you won't be able to block it. If 72%, even if it's like 50% of people feel like it's making them somewhat or much more productive, it's here to stay. 

And I think most people have already internalized that, which is the notion of, "No. No. Don't use that. That ship has long sailed." It almost never succeeds with sort of modern technologies. But this one is already pretty well entrenched.

[0:11:30] Simon Maple: Actually, while we're on this topic, let's jump ahead actually to another question, which I find – which I think we all knew the answer to this, but perhaps not to this scale. That question was how often do developers in your organization bypass security policies in order to use AI code completion tools? 

And when people say you're not allowed to use a specific code completion tool, or you're not allowed to use them at all, we always heard stories about someone saying, "Yeah, I'll jump off the VPN. Or I'll do it on my private account and use ChatGPT or something else." Almost 80% – 79.9% – of developers responded that people in their teams bypass security policies either all the time, most of the time or some of the time. 

Around 20% either never or rarely broke policies. But the vast majority – four in five people – some of the time or more, broke or bypassed security policies to use AI. Is that shocking to you, Guy? I mean, is that something that you would expect? 

[0:12:30] Guy Podjarny: I think that's really impressive. I mean, that is really the show of conviction maybe, or sort of being passionate about it. I often sort of use this equation: for everything in life, there's sort of how much you care and there's how hard it is. And you need to care more than it is hard to do it. Whether it's going on a diet or whether it's investing in securing your code. You can make things easy or you can kind of increase the level of care if you want to get people to do something, and vice-versa if you want to stop it. 

I think in this sense, I guess my guess would be that they care a whole lot. But I don't think they care necessarily because they are so keen on improving their productivity and they're – I think that's an element of it. But I think there's just a ton of hype around it. 

I think there was a very, very strong sense of FOMO, like fear of missing out, of not being able to develop. I mean, I'm old enough to have started developing before the internet. And at the time, I developed, among other things, stuff that was in the military. And so, we weren't – the computers weren't connected to the internet. And literally, you had to go to a closed room at the end of the corridor to connect to the internet, and we were developing software in those surroundings. We never really were a part of that sort of initial trend. 

And everybody that got out of the army and went on to developing in normal surroundings ended up using the internet. How are you able to – how is that even possible? How are you not like a tenth as productive as you could be if you were connected to the very, very nascent internet at the beginning of the 2000s? 

And I feel like it's a little bit like that today. I feel like there's such passion and there's so much glamour around the sort of new reality of software development. People hype it, not necessarily because they feel that way about the value of it. And so, I think a lot of people are just not willing to be left behind.

[0:14:17] Simon Maple: Yeah. And it's interesting, I think, for organizations that do have strict policies. There are a couple of things I think that are important here. One large one potentially is IP, right? And not allowing your own IP, sensitive IP, to leave your boundary. 

Another one, of course, is vulnerabilities being created. Hallucinations coming into your code and things like that. One of the questions that we asked was – we asked, first of all, how many of the respondents contribute to open-source code? And just under three-quarters of them contribute to open-source libraries and packages. 

Interestingly, 83% of those three-quarters use AI tooling in those third-party libraries. Everyone uses third-party libraries these days. And it's interesting that for companies that even say, "Hey, we're not going to use AI tooling – or rather, AI code assistants – in our first-party code," they're ultimately pulling in code that will have been written by AI generative assistants through third-party code anyway. 

A lot of the potential hallucinations, potential security issues that are going to come through this code, it's going to hit your production environment at some point. It's just through other means. But I feel like IP is probably the bigger reason as to why people want to add these kinds of policies into their org certainly initially. 

[0:15:33] Guy Podjarny: I think that's a really good insight, which is, to me, a lot of it is about delegation of trust. And so, there's the internal. I think also, oftentimes when people say AI security, they talk about security policy. I suspect a lot of these 80% that say they bypass security policies at least somewhat to use AI, the security policy they refer to is a data security, enterprise security type of policy, which is: I wasn't supposed to be using this tool because of IP concerns. 

And then maybe a subset of them refer to the fact that they were using AI and they didn't vet the code or something like that. And the security policy said that they should have. It's probably more the former. I think it does show the drive to use AI. And so, I highly doubt that someone that would circumvent a data security kind of enterprise control to use AI would say, "I'm not going to use it because I'm not sure if the code it generates is trustworthy."

I think there's a lot of like demand and push. And you have to work through that. But I think when we talk about sort of the open-source world, it's interesting how open-source can – we delegate. As organizations, when we use an open source component, we delegate our trust to that maintainer. 

And for instance, we trust that that maintainer would choose other maintainers or other libraries that are also secure because we're effectively chaining that trust to those individuals that would create these sometimes monstrous dependency trees, right? That [inaudible 0:16:49]. 

I think we all know that there isn't sufficient care in those. There's excessive trust that is being given by one, say, maintainer – it's true for developers in the enterprise as well, but if we talk about the open-source world, by one maintainer – in the dependencies they in turn consume. 

And so, it's interesting now to sort of think, with generative AI – and it has kind of occurred to me – that there's now another entity that we're delegating trust to. We're now trusting that open-source maintainer to also choose the right open-source tool. Granted, most of them are probably using kind of the dominant ones. And to be reviewing and assessing their code. 

Yeah, I think that's a fairly significant new weak link added to our chain. I don't think the scrutiny that open-source maintainers apply to the code that they get from gen AI is any higher than the scrutiny that they give to the libraries that they choose to use. 

[0:17:44] Simon Maple: Yeah. And that's also shown in another stat that came through this report, which is only a quarter of the folks contributing to open-source projects used any type of SCA tool to essentially validate and verify that the code suggestions – or rather, the dependency suggestions – coming from those AI tools were valid, with the right types of projects and packages to use as well. 

Yeah, that's testament to what you were saying there about the level that we can kind of not expect necessarily but have become familiar with from an open-source dependency graph and tree. 

[0:18:20] Guy Podjarny: What does the data imply in terms of – like are developers not doing that verification because they think the code that is being generated is just secure? There's no need for that? Is it by omission or is it from like some conviction?

[0:18:32] Simon Maple: No. I don't think it's – I mean, it's interesting actually. Because when we think about trust, and when we think about what developers believe is coming out of these tools, first of all, one stat that came out is over half of those surveyed say that insecure AI suggestions are common. 

In terms of the sentiment that developers think there's some kind of security innate and built into AI suggestions – that's not there. And rightly so, because I guess people will recognize, whether they're using tools for their own jobs or whether they're using them for open source, they're going to see issues pop up based on the AI coding suggestions that are coming out. 

I don't think it's necessarily that. My guess would be that those people who are developing, whether it's for their company or for an open-source environment, will use the tools or adhere to the specific policies that they need to for those different types of organizations. 

If they have to apply or adhere to a policy for their organization, they will do that and they will have stronger pipelines with stronger levels of policing and tooling that can be run in those pipelines. But that just simply doesn't exist for the majority of open-source projects and packages. And as a result, it's just a bypass. It's just things go through. Code just gets through without necessarily the levels of testing that we would expect from a commercial product.

[0:19:55] Guy Podjarny: Yeah. I find that to be – as we think about sort of all sorts of risks really with generative AI and coding assistants – which is, at a high level, I'm excited by the productivity boost of it and sort of acceleration of innovation. One of the things that I worry about the most is how much we are sort of accelerating without a seatbelt. And I think that's very much the – like, in a very meta sense; we can talk about the specifics. We can talk about what people do. 

But spotting a security mistake when you're glancing at code is quite hard. The process of writing the code isn't really just writing the code. It's thinking through what's the right way to implement it and then writing the code. Versus just looking at the code and saying, "Yeah, that looks right." And continuing. 

But security – we all know that security is invisible. It's much harder to spot the omission of a check. It's much harder to do that. And then even beyond any of the activity, there are more reasons for why it's insecure. But even beyond how good it is, if you're producing twice as much, five times as much, 10 times as much code in a given period of time, then you're automating more, you're producing more, you're going to miss more. Even if you somehow magically miss half as much, but you produced 10 times as much code, you've created five times as many vulnerabilities. And so, I think that to me is the concern, right? Like, how many accidents do we need before we make a seatbelt a requirement? 
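Guy's back-of-the-envelope arithmetic can be sketched in a few lines. The line counts and defect rates below are purely hypothetical numbers chosen to illustrate the ratio, not figures from the report:

```python
# Hypothetical illustration: a halved defect *rate* can still mean far
# more total vulnerabilities once code volume multiplies.
baseline_loc = 10_000          # lines written per period, pre-AI (assumed)
baseline_rate = 0.002          # vulnerabilities per line (assumed)

ai_loc = baseline_loc * 10     # "producing 10 times as much code"
ai_rate = baseline_rate / 2    # "magically miss half as much"

baseline_vulns = baseline_loc * baseline_rate   # 20 vulnerabilities
ai_vulns = ai_loc * ai_rate                     # 100 vulnerabilities

print(ai_vulns / baseline_vulns)  # 5.0 -> five times as many
```

Any halving of the rate is overwhelmed by the tenfold volume; only the ratio matters, not the assumed absolute numbers.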

[0:21:17] Simon Maple: Absolutely. And there are two areas here, right? There's quality and there's quantity. And we know, from all the evidence that is being released by many different organizations and teams, that the efficiency, the effectiveness of development teams increases. We know the quantity goes up. Then the question is, "Okay, is the AI tooling producing the same, higher or lower quality than what developers are providing?" 

And actually, there's a second piece on that, which is: do developers believe that the quality is higher or lower than if they were to write the code? Because, continuing your analogy there of the seatbelt, one phrase that I love is you can only go fast in a car if it has brakes. I love that phrase. 

And I think there are two pieces there. One is the fact that you would have brakes. The analogy here would be that you can only go fast in a pipeline if you have the right checks that can stop a pipeline at the right time, or inform us that this is a bad idea to push forward with this pipeline. 

The question is, well, what checks do we have? And secondly, do we feel like we need those checks or not? And there's a real interesting sentiment question that came out, which is asking whether the respondents think that the AI code that is generated is more secure, less secure or as secure as the code that they would write. 

Now, three-quarters of respondents think that AI-generated code is more secure than human code. And this worries me in a couple of aspects. 

[0:22:37] Guy Podjarny: I wonder how many of them would have said that it's more secure than their code versus human code at large. I write better code. But human code is not as secure as AI code.

[0:22:49] Simon Maple: This is worrying, because, from a perception point of view, a developer will think, "Well, this is better than the code humans or I would write." As a result, the need for brakes, the need for tests, has reduced, because they put too much trust in it. They over-rely on the code that is being generated and put too much belief in the code that is magically appearing. And it is magic to a lot of people, right? When they see that code being generated, it's so mind-blowing. You put too much trust in it. You over-believe that the thing that's magically appeared has got to be right. Because so much else has happened to get that code there, which is, nine times out of 10 or whatever, very accurate and working to do what I want it to do. And we over-believe in it. 

It also goes against a ton of studies. The vast majority of studies will actually show levels like, I think, 40% of code created by Copilot having security vulnerabilities. Many show an increase in security vulnerabilities introduced by these tools compared to code that has been written by humans. And actually, this similar belief that developers overtrust generated code has also been seen across a number of different studies – studies at NYU, Stanford and many others. 

[0:24:01] Guy Podjarny: Yeah. I think it's super interesting. I guess one aspect of it is – this is maybe what we talked about, sentiment versus reality. People put more faith in the code than what studies demonstrate. And it's interesting, because even the very players creating the code, the GitHubs of the world, are promoting tools around it to help secure those components. 

And even like logically, if you stop and say these tools have been trained on code that is insecure – which we kind of know – some analysis of it would imply that it can produce vulnerabilities. But maybe actually the more interesting thing is that, even within the context of the sentiment, three-quarters of them think that it is more secure than humans. And yet, over half of them think that insecure suggestions are common. 

[0:24:48] Simon Maple: Actually, it goes further than that as well. Almost nine in 10. There's definitely this kind of like two sides to this. Because I think people almost like have this imposter syndrome of themselves as developers. And so, they always assume, "Oh, yeah. This generated code must be better than me because – well, it must be more secure than me because I'm not as smart as all these other amazing developers out there." 

However, almost nine in 10 – 87% – are concerned about AI security. And of the list of things that they could be concerned about, security came in first – the idea of security concerns, vulnerabilities and so forth going in. And the second highest was data privacy. Effectively, I guess there are a couple of things that could be interpreted there as data privacy. 

There are two sides of this where people are very concerned about the quality of what is produced by these tools. But they themselves almost think “It's okay. It's got to be better than the code I write”. Yeah.

[0:25:42] Guy Podjarny: I mean, I think that there might be like a self-esteem element to it. It's interesting to maybe do some psychological analysis of what's the average sort of – whatever. Sort of level of self-esteem by sort of the typical developer. I would have guessed like reasonably high. 

But it's also I think a demonstration a little bit of sort of hype and the excitement on it. They're worried about it because they know they're going fast and they see there's no guardrail, right? They see that they might sort of fall off on it. But it is just so compelling that they're maybe telling themselves, "No. It's okay. Sort of the code is more secure." 

And it isn't fully in contrast, right? Something can be a little bit more secure than human code, which implies it still has a fair bit of vulnerabilities. Therefore, it is concerning that it would produce vulnerabilities. But the more one believes in the fact that this code is probably better than human, they kind of get indeed to that point of, "Am I the right kind of a guard? Am I the right person to evaluate whether this code is secure or not?" Which, I guess, if you sort of go full circle over here, comes back to just this kind of increased need for automated controls. 

There's something very sort of simple about this, right? Which is the more automated the code creation, the more automated the security controls need to be. Fairly simple statement. The more autonomy – and it isn't autonomy. There's some verification. But still, the more AI does for us, the more we need tools, possibly AI-powered, to assess whether what was created is correct. Including whether it is secure. 
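The principle Guy describes, that more automated code creation demands more automated security controls, amounts to a gate in the delivery pipeline. A minimal sketch of that idea, with hypothetical severity levels and policy (not any particular scanner's API), might look like:

```python
from dataclasses import dataclass

# Minimal sketch of an automated "brake": a pipeline gate that stops the
# build when scan findings exceed policy. All names are hypothetical.

@dataclass
class Finding:
    severity: str   # "low" | "medium" | "high" | "critical"
    rule: str       # e.g. "sql-injection"

def gate(findings, block_at=("high", "critical")):
    """Return True if the pipeline may proceed, False if it must stop."""
    return not any(f.severity in block_at for f in findings)

# A scan of freshly generated code surfaces two findings; the
# high-severity one trips the gate, no matter who wrote the code.
findings = [Finding("low", "hardcoded-temp-path"),
            Finding("high", "sql-injection")]
print(gate(findings))  # False: the brake stops the pipeline
```

The value of the sketch is placement rather than logic: because the check runs automatically on every change, it scales with however much code an assistant generates, and it does not care whether the code was machine-written or human-written.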

[0:27:14] Simon Maple: Absolutely. And I think this has always been a race, from the early days of DevOps all the way through to today, whereby the speed at which we can deliver, the speed at which we can write code, always has to be slower than the speed at which we can test, validate and verify that our code doesn't have issues – whether that's performance issues, coding issues, or security issues that are outside the policy or quality levels we need to push to production. 

And I think it's interesting to look at the data as to where we are today in terms of that automation. One of the questions we asked was what percentage of people's estate effectively had security scanning automated. And there are a couple of stats here. Only one in ten have automated 75% of their security scans overall. 

And actually, the majority of people have less than half of their security scanning fully automated. I think today we're probably doing a far better job than we were even a small number of years ago, three to five years ago. But I don't think we set ourselves up for such a large improvement in our ability to deliver code and the speed at which we can write it. 

And this AI-assisted code generation has enabled anyone to build code and everyone to write code fast. That's what's quite scary. I think we need to see how security programs react to this. But you're absolutely right. Whether it's human-written or AI-written code, when we talk about the controls in the pipeline, we're largely looking at very similar threats in a large number of cases, right? Vulnerabilities come in through code being written, and through libraries being added into our code. 

We already have the right tooling and a lot of the right processes in place to catch these early on and flag them up. We're not necessarily in any way caring whether it's machine-generated or human-generated.

[0:29:18] Guy Podjarny: Yeah. It's really interesting. There are basically, I think, two camps – and most people are probably not firmly in one or the other – but two views that you might embrace. One puts full faith in AI code, in AI as a whole. It's like, "Look, it'll produce things that are better than human. We should trust it." 

In that case, you might lean towards less dependence on the human verifying it. But even then, you should get some assurances. Because the AI code generators are not giving you any assurances around security, or functionality, or any of that. 

And so, I think even in that case you should have automation, or a drive to assess it. And there's actually a very good case for separation of duties. A lot of security people have been raising this: even if some tool had both the code generation capabilities and the security assessment, do I really want these tools to audit themselves? Or do I want separation of duties and a different tool? The world will probably figure this out. 

But I think there's that philosophy, which is, I want to put more faith in the AI. And even in that case, I think it makes the case for automated security testing, maybe AI-driven. And there's the other philosophy that's a lot less trusting of AI and really leans into verification: let's walk maybe a little more slowly. That path also leans into verification, but it might lean more into human verification. 

But in that sense, you're crippling the speed benefit that you might get from the AI. And so, if you level up, you still find that any path on which you want to get an acceleration from AI biases you into automating security controls. No matter how you spin it, if you're going to run faster, you need your security controls to run faster. I loved how you phrased it: you need to aspire for the pace of software creation to be slower than the pace of your ability to verify it. 

I think we asked some questions about that as well: are people acknowledging that growing need? Now that they're adopting gen AI, are they also leaning more heavily into introducing automated security assessments into their pipelines? 

[0:31:28] Simon Maple: Yeah, absolutely. We asked a number of questions around the habits – the change of habits people are making, or the changing aspects of the security programs people are looking to introduce – as a result of the generative AI tooling being introduced into the organization. 

There are a few interesting stats here about that speed of being able to react and keep up. One of the interesting ones was that almost six out of ten AppSec teams are struggling to keep up with the pace at which generative AI tooling is speeding up development. 

What it shows is that one in five are struggling significantly, almost 40% are struggling moderately, and just over a third are coping well. So the majority – nearly 60% – are really struggling to keep up with that pace. And I think it's a testament to what we were just talking about: a lot of these teams were possibly struggling slightly before anyway. But this has added an extra level in such a short amount of time that we haven't been able to increase our speed sufficiently to contain that extra work. 

[0:32:35] Guy Podjarny: And I feel like some aspect of this is probably true for any new technology adoption, right? If you asked this about container security or cloud security, you'd probably hear a lot of security teams saying similar things. Maybe one of the challenges is combining this with the adoption rates that we heard before. 

I would guess that for cloud adoption in smaller organizations, we were seeing reasonably similar rates. Within larger organizations, it was just straight-up not allowed. Versus Copilot, which is a desktop tool you can just install, so the blocker – the technical friction – was a lot lower. 

And so, even if the appetite or the pull to embrace the cloud – I don't think it was as strong as with AI, but it was substantial – the ability of a single developer to just choose to use the cloud wasn't the same. 

I think this is similar to Snyk's core ethos, right? The reason we succeeded was because we made it very, very easy for an individual developer to just pick up a security tool and start using it, while other tools might have required a platform-level decision. 

I think cloud versus AI might be a little bit like that. Cloud requires a platform-level decision, and that held it back. With AI, that friction is gone. If a developer has the rights to install something on their desktop – which is true in many, many organizations, including many enterprises – they just adopt it. I guess in so many things, AI is a super-sized version of previous challenges we've had in DevOps or in technology adoption. And I think this is yet another one. 

Are these security teams leaning into security automation as the solution, versus chasing people around? 

[0:34:22] Simon Maple: Yeah, I think so. Because if we look at the responses to how organizations have changed their security practices as a result of code completion tooling, the most popular answer was more frequent security scans. 

And that itself leads into being able to automate it – being able to put it into a pipeline, or something earlier in the dev cycle, to say, "Okay, as I make changes, we test. As I push, we test." And so forth. More detailed code audits. Adding new tooling was an interesting one as well. People are definitely turning that concern about security issues coming in through AI into these activities. That's positive to see, I think. 
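As an illustration of what "as I push, we test" can look like in practice, here is a minimal sketch of such a pipeline step. It uses GitHub Actions syntax and the Snyk CLI as one example; the workflow name, job layout, and secret name are assumptions, and the exact commands will vary by toolchain:

```yaml
# Hypothetical CI workflow: run security scans on every push and pull request,
# regardless of whether the code was human-written or AI-generated.
name: security-scan
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Snyk CLI
        run: npm install -g snyk
      - name: Scan open source dependencies
        run: snyk test
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
      - name: Scan first-party code (SAST)
        run: snyk code test
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

The point is the placement, not the specific tool: the scan runs automatically on every change, so the verification keeps pace with however fast the code is produced.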

[0:35:04] Guy Podjarny: Yeah. It breaks the illusion, I think. Before, with DevOps, there was this, "No, no, I can keep up. I think I can figure it out. I can find a methodology." And then AI comes, and you're just like, "Oh, bugger. I'm never going to catch up. I need to do something different." Maybe it's good. Maybe it's a sobering moment for organizations. 

We see it on the Snyk side, on the enterprise side. We have quite a few customers that are saying, you can use Copilot – we want the benefit – but you can only use it if you add Snyk next to it, right? If you have the right security guardrail next to it. 

Because I guess they're conscious of the risk. A lot of these AI decisions are not necessarily fully philosophically thought through. But they're saying, if you want to run faster, you really need a guardrail next to you. 

[0:36:00] Simon Maple: That's so good from an education and awareness point of view. Because a lot of organizations don't mandate certain things for certain projects. But by absolutely mandating a security tool next to a code generation assistant, you're essentially implicitly saying, "By the way, there are security issues that can come from the usage of this tool." And that's why you need to use Snyk or a tool like it.

[0:36:25] Guy Podjarny: I want to raise a devil's advocate point on that comment. My thought on it – and I would love to hear yours as well, Simon – is that AI will get better. It will get 100,000 times better. A million times better. 

Jensen Huang, the CEO of Nvidia, was quoted – I forget if it was in five or ten years – saying he thinks models will be a million times better. I don't know exactly how we define better. But I agree that this is a technology that, as powerful as it is, is in its infancy. And so, things will get better. 

And so, we think a fair bit about, okay, will it get to the point at which AI tools are good enough that they produce code that is good enough? I get asked that a fair bit. For now, they have these bugs. But would it get there? 

And I have a couple of views on it; I would love to know if they hold water with you. One is the separation of duties again, maybe a bit more administrative or compliance-oriented. We see it a lot, and it's a very important concept in security: yes, you want security to be built in, but you need audit and verification to be done by someone who is separate. 

That's also strongly important when you think about using multiple models, or changing models over time. All these things bias you in favor of saying, "I don't want my AI generation tool to be the same thing that does security." 

But then, more philosophically, as I envision it, AI produces a lot of results. And that's amazing – it's going to do it more and more. But because it's a machine generating it, we have a lot less control over the logic that ends up driving the generation. 

I would posit that we actually need to increasingly rely on verification. It's maybe the same guardrail analogy we talked about. But the more ammunition we have, the more support we have to verify whether the gen AI tool has generated the right things for us, the more we're able to use it – the more we're enabled. We need those verification elements in any aspect of AI, and specifically in code generation. 

And so, maybe the tools need to look a little bit different. Maybe things will evolve. But I would think that, even above and beyond code completion, the more that gets generated by AI, the greater the need for a verification system. 

[0:38:37] Simon Maple: Yeah, I would agree. And I think boundaries are important – being able to say that this can't wholly happen within a single boundary. We maybe need an AI tool here to create, particularly with huge amounts of context. That's fine – let's create. I'm sure in years it'll be able to create larger amounts of code and more complex architectures and things like that. Great. 

But to then have that same tool, with all that same context, try to understand where its weaknesses are – is that putting too much control within the same boundary? I would certainly argue yes. Particularly at the speed at which this is advancing, we need to make sure that we keep that control and that validation. 

Yeah, I agree. And I think there's more than just a quality assurance aspect to this. It's about really keeping that control aspect ourselves. It's a really interesting question. And I think it definitely goes beyond the quality questions and into some of the ethical questions beyond them. Yeah, I would agree.

[0:39:41] Guy Podjarny: Yeah, it's fascinating to think a little about the future element of it. This is advancing at such a pace that we can't afford to only think about the tactics of code assistants. They're here now, and as a security team, you need to decide what you do. Do you try to disallow them, with all the futility of that? Do you require a guardrail? Do you accept the risk and let them run wild? Whatever it is, I think that's the tactical part. 

But you also need to have some thesis on where this goes over time. And I'm mindful of the bias and the Snyk lens on it. But even given that, when I think about AI and about generation, I think it's going to strengthen the need for a security team, a platform team – any team that is trying to attest to their customers that the stuff they're creating is trustworthy. They're going to need verification tools that are outside the models. 

I've got one final question for you. How fast can a papaya run with and without a guardrail of the – 

[0:40:37] Simon Maple: Well, it depends how fast you throw it down a mountain, I would say is – 

[0:40:41] Guy Podjarny: I thought you would say is it an African papaya? 

[0:40:47] Simon Maple: Yeah. Yeah. Of course. Yeah. Yeah.

[0:40:49] Guy Podjarny: If you could outsource any portion of the creation of these reports to AI, what would that be? 

[0:40:55] Simon Maple: Oh, my goodness. Almost every part of the report – any report creation. 

[0:41:00] Guy Podjarny: Including the analysis you and I just did. 

[0:41:02] Simon Maple: Yeah, including this podcast. I think the data analysis is possibly the most interesting piece, right? Being able to sift through huge amounts of data and pull out really interesting pivots – to say, "Oh, this demographic actually thinks differently to that demographic on these certain aspects" – and to pull what it thinks are interesting stats directly from that data. I think that's really interesting. Because then we could almost fly through a list of what an AI engine thinks are interesting outcomes of a survey. And then we could even further ask AI to write up those interesting – 

[0:41:33] Guy Podjarny: Yeah. Compelling. And then we just sort of hope it hasn't become sentient and it's trying to – 

[0:41:38] Simon Maple: That's true. Oh, my goodness. There's the boundary. There's the boundary that's already been crossed. Yeah. 

[0:41:44] Guy Podjarny: You need a proofreader maybe somewhere.

[0:41:45] Simon Maple: Yes. 

[0:41:46] Guy Podjarny: Maybe a human statistician look at the data. 

[0:41:49] Simon Maple: Yeah. Or maybe next time we'll just ask a bunch of AI coding assistants the question and they can provide their own opinions on – 

[0:41:56] Guy Podjarny: We'll do it. This will be in next year's AI security report. We'll include some AI copilots in our survey data and ask them what they answer.

[0:42:05] Simon Maple: Yeah. 

[0:42:06] Guy Podjarny: Thanks, Simon, for the great review of the report. For everybody listening, I do think it's a really interesting report, and not a terribly long one. It is linked in the show notes, and I would highly recommend that you check it out. And as we expect we will do more of these reports, we would love to hear back from you to understand what you would want to see in them – any criticism, any feedback, and of course the things that you want to see us continue doing. 

I think that is it for today. Thanks again, Simon. And thanks for everybody tuning in. And I hope you join us for the next one.


[0:42:40] ANNOUNCER: Thank you for listening to The Secure Developer. You will find other episodes and full transcripts on devseccon.com. We hope you enjoyed the episode. And don't forget to leave us a review on Apple Podcasts or Spotify and share the episode with others who may enjoy it and gain value from it. 

If you would like to recommend a guest, or topic, or share some feedback, you can find us on Twitter at DevSecCon and LinkedIn at The Secure Developer. See you in the next episode.


Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.
