
Season 5, Episode 75

DevSecOps Data With Alanna Brown, Gareth Rushgrove, And Alyssa Miller


On The Secure Developer, we often hear opinions and experiences from people working in development, so today we're turning to the data to figure out what works and what doesn't in the world of DevOps and DevSecOps. Joining us for a panel discussion on the topic are Alanna Brown, Senior Marketing Director at Puppet and mastermind behind the State of DevOps Report; Gareth Rushgrove, Product Director at Snyk and curator of DevOps Weekly; and Alyssa Miller, Application Security Advocate, also at Snyk. In this show, we get the lay of the land and take a look at where things stand. First, we hear about vulnerabilities and the mixed bag of data our panelists have seen around remediation. While there are some positive developments in the space, there are also areas, like the container side, with great room for improvement. The conversation then moves to security practices, which security controls are effectively deployed and which are not, and we gain great insights into the role that integration plays in the efficacy of controls. While it's not all sunshine and roses, there are encouraging shifts happening in security thinking. From there, we move on to infrastructure as code security and shared responsibility, where the panelists' varied data findings paint an interesting picture. Finally, we wrap up the show by consolidating the discussion, with the panelists highlighting what they think is key going forward. To hear more from this fascinating, data-rich discussion, tune in today!


[00:00:51] Guy Podjarny: Hello, everyone. Thanks for tuning back in to The Secure Developer. I'm Guy Podjarny, and today we're going to have a different format episode. In this show, you hear a lot of opinions and experiences from people trying to tackle security in development, or DevSecOps, and I think those are very valuable. But in this specific episode, we'll try to focus on data, and we have some great guests who dug into broader data around what works and what doesn't work. We'll get a bit of an inkling into some of these great data perspectives.

So we have three great guests. I’m going to call them out and let them introduce themselves. First, we have Alanna Brown who’s been spearheading the State of DevOps Report at Puppet. Alanna, thanks for tuning in. Can you tell us a couple words about yourself?

[00:01:40] Alanna Brown: Thank you so much, Guy, for having me on your podcast. I’m Alanna Brown. I work at Puppet. I started the State of DevOps Report back in 2012, if you can believe that. Back when there were no codified DevOps tool chains, there were no DevOps engineers. DevOps was certainly not a household brand or name at that time, and it’s been kind of amazing to head up this research for the past eight years and kind of see the growth of the DevOps space.

[00:02:11] Guy Podjarny: That’s awesome. Yeah, definitely become one of the key assets or sort of key data insights into the ecosystem. We also have Gareth Rushgrove from here at Snyk. Gareth, I’ll let you introduce yourself and the recent infrastructure as code work that you’ve done.

[00:02:26] Gareth Rushgrove: Yeah. Thanks, Guy. I'm Gareth Rushgrove. I'm currently one of the product directors at Snyk. I was previously a colleague of Alanna's over at Puppet and have been around the DevOps space for a while. I'm also the curator of DevOps Weekly, which I started 10 years ago. Most recently, I've been doing a bunch of work around infrastructure as code, overlapping some of my interests from Puppet and Snyk, looking at how we can better secure infrastructure via infrastructure as code.

[00:02:59] Guy Podjarny: Thanks, Gareth. Last but very much not least, we have a guest that we’ve had the pleasure of having on the show not too long ago, talking about sort of the DevSecOps hub and practices. We have Alyssa Miller from Snyk as well. Alyssa, can you say a couple words about yourself and the State of Open Source Security?

[00:03:14] Alyssa Miller: Sure, yeah. My name is Alyssa Miller. I'm an application security advocate here at Snyk. I was a developer for about a decade before I got into security, and I've spent the nearly 15 years since working specifically in the application security space more than anything else. Recently, we completed the research and authoring of the State of Open Source Security Report, which draws together a lot of different data sources from Snyk, from the open-source community, and even surveys that we conducted within the various developer, security, and operations communities.

[00:03:48] Guy Podjarny: Awesome. Again, thanks, everybody, for joining here as guests. Let's go through a bunch of questions or perspectives on security. Maybe we start with just the status of where things stand: how vulnerable are we, looking at the security of the current ecosystem? Alyssa, maybe we'll start with you. You've done some inspection of the state of vulnerabilities in the latest report. How do we fare? What's the current status?

[00:04:18] Alyssa Miller: Yes. We looked at the overall growth of the ecosystems first of all, and certainly open source continues to grow amazingly quickly, at over a 100% rate overall, just looking at the number of packages available. One of the stats we like to track is the number of vulnerabilities being identified across the various open source ecosystems. Certainly, of course, we saw new vulnerabilities this year like we would see every year, but the rate at which we saw new vulnerabilities actually dropped a little bit. There weren't quite as many new vulnerabilities reported this year as we saw last year. Now, one year does not a trend make by any stretch of the imagination, but it's certainly something we're going to keep an eye on because, boy, that would be a really encouraging thing to see. Conversely, we expanded beyond just open source packages and looked at the container world as well, at some of the top official images on Docker Hub, and it's a little bit different story there. Some particular outliers are really notable in terms of the sheer number of vulnerabilities that exist in those official images. So it's kind of a mixed bag. We'll talk more about it, I'm sure, but we're definitely seeing security practices being more widely adopted, more of a DevSecOps culture being looked at. But I know Gareth and Alanna have a lot to add on those particular subjects as well.
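To make the container finding concrete: scans of this kind can be reproduced against any public image with Snyk's CLI. A minimal sketch, assuming the CLI is installed and authenticated; the image tags here are illustrative, not the specific images from the report:

```sh
# Scan an official base image for known vulnerabilities in its OS packages
snyk container test node:14

# Slimmer bases ship fewer packages, so they typically carry fewer vulnerabilities
snyk container test node:14-slim
snyk container test alpine:3.12
```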

[00:05:56] Guy Podjarny: Definitely. We'd like some light of hope, though especially on the container side it sounds like we haven't quite made as much progress as possible. Gareth, Alanna, what have you seen in this space? Does it match the data in your own research?

[00:06:12] Alanna Brown: Well, I'll jump in and say that every year we do look at specific performance outcomes, and some are relevant to this conversation. Last year, we looked at the time to remediate critical security vulnerabilities, and what we found is that the people who have integrated security at the highest level within their organizations are actually able to remediate their critical vulnerabilities much faster than any of the other cohorts.

We did look at specific bands, and only 7% of our total respondents were actually able to remediate a critical vulnerability in less than one hour, and one hour is a very short time to do so. The majority of the respondents, however, were able to remediate in less than a week. Seeing all these bands, the main takeaway for me is that it is actually really hard to reduce the time it takes to remediate vulnerabilities, simply because there are so many factors involved in that process. There are too many stakeholders, too many handoffs, too many approvals. So, to do that on the backend is pretty hard.

Although the differences between each of those levels are statistically significant, they’re not as dramatic as we would typically like to see between those who have integrated security pretty deeply and consistently and those that haven’t. But still, that said, I think any reduction is a good thing in general because it still does help reduce your company’s risk and exposure.

[00:07:52] Gareth Rushgrove: To jump on the point Alyssa made about container images being an area that needs attention: container images are just a component of something, and most applications consist of multiple services, or you're running many things. A lot of the development effort has moved to higher-level tools composing multiple images together, and that security foundation isn't there. Some of the research we did looked into Helm charts, at the Helm Stable Repository. There's loads of stuff in there. When we looked, there were nearly 300 stable charts; there are more now. But 68% of them contained an image with a high-severity vulnerability, and roughly the same sort of amount [inaudible 00:08:42]. 65% of them had outdated images. There's this constant problem of keeping everything up to date, and it has a knock-on effect.

So, if your software is not up to date, your applications aren't secure. If your packaging isn't up to date, you're insecure. If how you're orchestrating those things together isn't up to date, you're insecure. There are multiple ways. You miss one, and it has a knock-on effect. I think that's sort of interesting. We need to build on secure foundations in order to build secure applications.
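One practical way to keep that knock-on effect visible is to pin image versions explicitly in a chart's values, so updates arrive as reviewable changes rather than riding a mutable tag. A minimal sketch, using the conventional image block found in most Helm charts; the repository and tag are illustrative:

```yaml
# values.yaml – pin the image so "is this up to date?" has a checkable answer
image:
  repository: nginx
  tag: "1.19.6"          # bump via pull request when a patched image ships
  pullPolicy: IfNotPresent
```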

[00:09:17] Guy Podjarny: It's interesting. We're seeing a picture here. There's the total number of vulnerabilities in the ecosystem, where maybe we have some inkling of hope with a reduction in the total. But then we have these complex systems that increasingly have more opportunities to draw vulnerabilities in. You're composing multiple containers, so there's a multiplier effect: each component might be slightly less likely to have a vulnerability, but you're going to have more of those components, so combined maybe you get to that sort of 68% stat.

I'm interested in just – I think there’s a little bit of a discrepancy in the different data perspectives. I know, Alanna, you described when people are asked, I think it’s survey-based, right? They are asked how quickly do you address a vulnerability, at least there is an echelon there, sort of a top echelon that is doing this in one hour, the others in sort of a day or in a week. Alyssa, I think that doesn't quite match some of the data that you’ve seen.

[00:10:16] Alyssa Miller: Yeah, and that's true. That's what's really kind of interesting. Within the State of Open Source Security Report, we had the opportunity to ask about vulnerability remediation timelines as a survey question. But then, because we have access to aggregated data from the scans that people are performing on their software, we're also able to look at what some of those remediation timelines are in reality. In our survey, when we asked about vulnerability remediation, we saw 26% say they would expect it in a day or less, and well over 65% said definitely in a week or less.

But when we go and look at the actual data of what these organizations are achieving in terms of remediation, we see less. Just about 1% are actually fixing vulnerabilities in the same day, and these are vulnerabilities that have fixes available. 34% took 20 days or less. What's alarming is the remaining 65% that take 20 days or longer, and the 36% of those that take 70 days or longer. The mean we saw was about 62 days to fix a vulnerability, and the max we saw was measured in years. So it's interesting to see that disconnect, and it's one of the things we drew out in the report: there's a lot of expectation and a lot of belief that we're better at fixing vulnerabilities than we really are when we look at the data that's available.

So, I almost hate to say it, because it feels like a little 'gotcha' scenario we pulled out there, but it's an interesting story when you look at it, and you can see how expectations don't always jibe with reality. Indeed, one of the fascinating pieces we're just starting to dig into now is who answered how, as far as roles within the organization. You'll definitely see some upcoming updates from us on that as well. The hypothesis, maybe, is that it's people at higher levels of management who expect these short time frames, while the reality down at the core developer level is that maybe the expectations are different. Definitely, it's something we're still digging into. We want to see what the data tells us there.

[00:12:43] Gareth Rushgrove: We all want to rapidly speculate on this, but it's a data show, so we need to stick to the data. All of us wanted to go, "Oh, I think it's this and this and this." I can tell.

[00:12:54] Alanna Brown: Well, I will say, and this is data based, people do tend to have a rosier view of what's going on when they take a survey, especially executives. So I think it's wise that you would segment by level, because the higher up in the org, the greater the tendency to paint a rosier picture of everything. I think that's fascinating. The other thing that is slightly nuanced is that we asked about critical vulnerabilities versus all vulnerabilities, so I don't know if you're able to split out your data based on severity.

[00:13:28] Alyssa Miller: Unfortunately, for the report we didn't, but that is another piece. That's one of the great things about the way we work with this data: it's kind of evergreen. We can keep dissecting it and pulling it apart. I don't mean to speculate, but obviously part of any study is starting from a hypothesis and then seeing whether the data actually matches it, because it guides your data analysis. That's what we're trying to do: is that maybe the case, or is there something else at play? That's a very good question too, one we want to dig further into: understanding how those remediation timelines shift if I'm talking about something critical, with a high CVSS score, versus something very low severity. Perhaps the low-severity issues are the ones being left to take longer, the way you would hope, with the more critical vulnerabilities prioritized.

[00:14:20] Guy Podjarny: Yeah, for sure. Well, data is hard; there are a lot of nuances, and it's complicated. But it's interesting. A couple of interesting theories which, as you pointed out, are easy to theorize about, and then you have to collect the data: around the difference between survey and reality, between seniority levels, and definitely around how we approach different severities and what those different timelines are.

Maybe let’s sort of shift. This is a little bit about where we are and maybe how quickly we respond to these vulnerabilities. Maybe let’s talk a little bit about the security practices and different security controls, which ones are deployed and which ones are effective or not. Alanna, you had some great data about sort of these practices and their effectiveness in the State of DevOps. Do you want to share a bit more?

[00:15:08] Alanna Brown: Yeah, absolutely. We always try to take a very pragmatic approach with the State of DevOps Report, and we’re really interested in seeing and understanding what practices are actually helping to move the needle. One of the questions we asked was do your org security processes and policies actually improve your security posture? Because we’re trying to get at this idea of like is this all just security theater or do you actually have confidence in your security practices?

Security posture, I just want to say, is really hard to measure directly for a firm, and so you have to measure it indirectly. We did that by asking about the sentiment around security and confidence in security posture. But if anyone knows of any studies around measuring security posture, please let me know, because I would love to see what kinds of measures people use.

But I think this is probably one of my favorite findings in the whole report last year. We found that at the highest level of security integration, teams are actually more than twice as confident in their security posture. 81% of the respondents at high integration, who had fully integrated security into their software delivery lifecycle, felt that their security policies and practices improved their security posture. At firms with no security integration, where it's completely ad hoc, just 38% had that same level of confidence.

Now, I did mention this earlier, but it doesn't mean that they are more secure. What's important to note here is that a shift in mindset has happened: the mindset that security considerations are not separate from software design and software creation, and that security is not the sole responsibility of one team. We see that movement as people integrate security more deeply.

We also looked at a bunch of practices and ranked all those different security practices by their effect on security posture. It was so fascinating, because when we stack ranked them, the top five were things like: security and development teams collaborate on threat models. Alyssa, I know that's going to make you happy. Security tools are integrated into the development pipeline, so developers can fix issues immediately. Guy, I know you're pretty passionate about that yourself. Security requirements, both the functional and nonfunctional ones, are prioritized as part of the product backlog. Infrastructure-related security policies are reviewed before deployment. And security experts evaluate automated tests rather than manually performing the tests themselves; they only use manual review for changes in some of the high-risk areas of the code.

What is stunning to me, actually, is that all of these practices rely on deep collaboration across functional boundaries, and they happen really early in the development lifecycle. Technology, of course, is helpful, but ultimately it should facilitate that level of collaboration.

[00:18:37] Guy Podjarny: Yeah, absolutely. First of all, it's great to hear validation that what we've been preaching is indeed data-driven. But collaboration is hard, I know. Alyssa, what does the State of Open Source Security show in terms of what types of practices are embraced by that audience?

[00:19:00] Alyssa Miller: Yes. I mean, it was kind of impressive to see. Honestly, talking with the organizations in the survey, over 57% said they've got some form of source code analysis in place or are doing some type of static analysis. That was really impressive. And when we look at how many of those have automations integrated into their delivery pipelines, almost 50% said they've got some type of static analysis, and even software composition analysis, integrated into those pipelines. Of course, when we start talking about developer enablement, about creating a frictionless environment where developers can quickly and easily respond with short feedback loops to vulnerabilities that have been introduced, getting that type of automation into the pipeline that early on is a crucial element. So we saw a lot of that.
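As a rough illustration of the pipeline integration described above, here is a hedged sketch of a CI job that runs a dependency scan on every push. The workflow file name and the severity threshold are assumptions to adapt, not a prescription:

```yaml
# .github/workflows/security.yml – hypothetical CI job running composition analysis
name: security
on: [push, pull_request]
jobs:
  sca:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm ci
      # Fail the build only on high-severity issues, keeping feedback loops short
      - run: npx snyk test --severity-threshold=high
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```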

On the less encouraging side, we did ask about some very specific practices for driving that sort of culture of shared responsibility, and we were hoping to see more collaboration. These were very specific practices, things like having security champions programs; 15% of organizations were doing that. Also setting up daily, weekly, or functional integrations between the various areas, security, developers, and ops, in terms of, say, do your security people go to your standups and so forth. That was, again, only 16%, so there's definitely more room to continue creating that collaboration. But hearing about key security practices – Alanna, you mentioned threat modeling. You know my love for threat modeling, and this is why: it creates that collaboration between the business and the development teams and the security teams and the operational teams, all coming together to really talk about the system and how to make it secure. So it is promising to hear you say that there's growth in that space, and that organizations seem to see the value in terms of their confidence being boosted when they're doing those types of activities.

[00:21:20] Alanna Brown: Well, just to finish that thought, Alyssa, we looked at all the practices you mentioned, like static code analysis, dependency checkers, pen testing, dynamic application testing, all of that stuff too. While they're used at very high frequency, they had less of an effect on security posture than the lower-frequency practices, like threat modeling, that really do require a lot of collaboration.

[00:21:51] Guy Podjarny: I guess we're, again, comparing slightly different measuring sticks, in the sense that we're measuring based on perceived security posture, which is probably as good a barometer as any given how murky measuring security is. But it's interesting to also consider how much people get desensitized once a security practice becomes part of the core of how they build software, and whether they no longer appreciate its contribution to their security posture.

[00:22:24] Gareth Rushgrove: Some of the things that came up in the survey we did around the application of infrastructure as code tooling were interesting, because in that space it's still early days for lots of organizations adopting those tools. But it's also early for them to be considering the security implications of what they're doing there.

One of the things that came out of ours was that more than 50% of respondents rely on manual code reviews and late-stage penetration testing and auditing. Basically, it's all manual, or it's very much batched in later, which is interesting given what you were saying about the benefits of shifting things earlier in the process. That perhaps makes sense in the context of early adoption, but the part I found slightly more frightening on top of it was that 75% of respondents said they were only somewhat confident, or not confident at all, in spotting issues through manual reviews. Even accounting for those being separate groups.

A large chunk of people are saying, "Yup, we're doing good reviews," and, "Nope, we're not at all confident in finding anything." That, I felt, was interesting in the light of thinking about evolving practices for the newer parts of the stack that people are adopting. They're obviously copying, paraphrasing, parroting things that lived there before, and it'll be interesting to see how that evolves over the next few years.
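To make the manual-review problem concrete: misconfigurations like the ones sketched below read innocently in a pull request and are exactly what automated scanning of infrastructure as code (for example, `snyk iac test`) is designed to flag. This is a deliberately insecure illustration, not anyone's real configuration:

```hcl
# main.tf – two classic misconfigurations that are easy to wave through in review
resource "aws_s3_bucket" "logs" {
  bucket = "example-team-logs"
  acl    = "public-read"            # world-readable bucket
}

resource "aws_security_group_rule" "ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # SSH open to the whole internet
  security_group_id = "sg-12345678" # placeholder ID
}
```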

[00:23:59] Guy Podjarny: Yeah, it's fascinating. It's a new practice, and people just revert to a methodology that seems right, but they haven't actually built any confidence that the methodology is the right one.

There's actually an interesting data bit from a previous guest, Larry Maccherone from Comcast; that's a whole data episode on its own. He's taken a very data-driven approach to driving security transformation within Comcast. He measures security through vulnerability counts and pace of remediation – somewhat technical but still very measurable means of saying, "Okay, are people trending up? Trending down? How many vulnerabilities do they have proportional to the amount of work that they have?" And they've mapped out many, I think 20 or so if memory serves me right, different security practices, and they're tackling a subset of them as they evolve.

But there were all sorts of tidbits he pointed out about ineffective practices. Two stayed with me. The first: he compared different groups within Comcast that have or have not embraced a certain security practice, and those that had embraced the practice of finding vulnerabilities but had no structured program for fixing those vulnerabilities were actually worse off, in terms of their eventual security posture, than ones that didn't embrace any vulnerability discovery at all.

So you're better off not looking for vulnerabilities than having a vulnerability program that doesn't have a remediation plan built into it, which is interesting. It slightly makes sense when you stop to think about it, whether it's the cycles involved or a diminished sense of ownership. And he has pretty clear data to show that correlation, which I thought was fascinating.

The second one, which is also not intuitive: they had basic security training that all the developers had to go through, and a more advanced training where the manager had to pick someone on the team and send them to this more rigorous security course. That more advanced training ended up reducing the overall security posture of the team, while the basic training improved it.

So they saw that data, and the theory, and these are kind of his words, is basically that when managers had to pick an individual and take them away from daily work to make that investment, they didn't necessarily pick the most capable individual on the team. They might have sent whoever it was least damaging to lose cycles from to the deep security training. But once they did, too much of the security responsibility fell to that one person, and the rest of the team stepped back from it. Now, granted, that last bit is theory; the result itself is data. It's fascinating, just counterintuitive findings on measuring security and the effectiveness of these different controls.

So we've talked about the current state. We've talked about some practices and whether they're good or not, and we've touched on people, because collaboration was a key component of the practices that seem most effective, and maybe people have made some decisions around embracing a security control they don't necessarily think is effective. But another big question that comes up in the world of DevSecOps and developers and security is one of ownership: who should tackle, lead, or take the different responsibilities for these security concerns? Each one of the data sources we have here has touched on this element of ownership. Gareth, maybe we can start with you this time. Did you ask any questions around infrastructure as code security and who should be responsible for it?

[00:28:03] Gareth Rushgrove: Yes. It's interesting you said should, because we actually asked about both the hypothetical, perfect-world case and the reality, given that things are evolving. Frankly, there was a desire to shift left: 77% of respondents said, "Yeah, this should be moving to the application teams." The reality was much messier. Roughly speaking, organizations split pretty much into thirds: in some, it was mainly the preserve of developers; others had a central platform or DevOps team or structure in place in the middle; and for others, this was very much the infrastructure team's domain. It was someone else, somewhere else. But in lots of cases, the responsibilities also crossed boundaries. For most respondents, it was the responsibility of more than one party, so organizations with security teams often had both involved, and others had more spread-out organizational structures.

I think the difference between the desire and the reality was probably the most interesting thing that came out of the people angle there, looking at infrastructure as code. Again, it makes sense in that all of these organizations are at an early stage of adoption, and it's often been incubated in one place. But it's also interesting that the desire is to head in the direction of making these things addressable by developers, which sounds like a good thing given some of the data points we've talked about already.

[00:29:53] Guy Podjarny: Yeah, indeed. Just earlier in the process.

[00:29:56] Alyssa Miller: Yeah. I mean, it's fascinating, because I almost look at it differently, right? I think when we talk DevSecOps, really what we're looking for is that culture. I always refer to it as a shared responsibility culture, where you've got these three disciplines that we officially lump into DevSecOps, and I'm also a really big proponent of bringing the business into that as well. We've seen, especially as cloud native technologies, infrastructure as code, containers, all these things come up, more and more developers being charged with creating that, and somewhat rightfully so, we put the responsibility on those application developers to secure those things. You're the one creating it. We know, from two decades of talking about push left, that we want to get in early, and to do that, we need the developers to take care of this.

But I also think back to a really great tweet I saw from Kelsey Hightower a number of months ago, where he says, "If you want to roll your own application platform, here you go. All you need to know is Linux, and Kubernetes and Docker," and he just rattles off this laundry list of technologies. It gets you thinking: we're putting an awful lot of onus on one group of individuals in that pipeline. The reality is we want to see that spread. I mean, that's the point of DevSecOps: that we've got all of these groups of individuals who've broken down the silos and come together to work together. So we asked in the State of Open Source Security, "Who should be responsible for the security of your software, and the security of your infrastructure?"

Now, last year, we asked that question only about software; we didn't ask the infrastructure side. It was actually a little alarming. We saw exactly that: I think it was 85% said developers should be responsible for the security of the software. Yeah, that's great. But then only 23% thought security had a role in it, and only 21% thought infrastructure had a role in it.

I'm like, "Wait, shouldn't those all be approaching 100%? We want everybody involved." This year, when we asked it again about software in particular, it was much more encouraging. We saw that same 85% for developers. Okay, great. I'm kind of curious what the other 15% see as the developer's role in security. But we did see security come up quite a bit, at 55%. Operations still kind of gets forgotten about on the software side; they're down there at 35%. I guess to some degree I can accept that, because how much of a role do they play in the actual application itself? Maybe not as much. But when we think about things like threat modeling and some of those collaborative pieces we'd like to see, that's where we'd like to see that number rise.

On the infrastructure side, it was a little bit better: 63% said developers, 56% included the security team, and another 56% operations. I'm a little surprised operations didn't go way up. It's telling that we are seeing attitudes change a bit, that the idea of breaking down the barriers of those silos and bringing everyone into a common focus is at least starting to grow. After 12 years of doing DevOps, it's good to see that happening. But I'm kind of curious about Alanna's perspective, because I think you guys had some data on this too.

[00:33:26] Alanna Brown: Yeah, we did, and we actually asked a question about security and shared responsibility. I don't love the way we asked it, because I think it can be misinterpreted, but I will share the data because I think it's interesting. We asked, "Do you agree that security is a shared responsibility across your delivery and security teams?" A positive outcome is that as security integration goes up, as it's more deeply integrated, that level of agreement goes up as well. At no integration, only 58% felt that security was a shared responsibility, versus 89% at full integration. That's a 31-point difference, and I think that's a positive trend that totally makes sense. The more integrated security is, the more likely everyone is to feel that it's a shared responsibility.

Now, I do want to note, though, that you go to these DevOps conferences and everything seems so rosy, and the path seems so straightforward, but it's really not. Integrating security is actually really messy work, especially in those middle stages after you've already tackled all of the low-hanging fruit.

This is called the J curve. Imagine the Nike Swoosh symbol: things start out pretty high, you're getting some quick wins, and then they quickly dip down because you're starting to unpack all of that complexity, and it gets really messy. You have to muddle through things in order for them to get better again. That's the J curve. We asked teams specifically whether they encounter friction when collaborating between security and delivery teams, and the friction actually tends to get higher in the middle, as people are collaborating more. It does start to decline at the higher levels of integration, but it never fully goes away.

Nigel Kersten, one of the co-authors of the report, and I debated this, and we believe it's productive friction at that point, but we don't know; it's just a theory. Then, when we looked at people in security roles versus non-security roles, the friction is off the charts for security folks during those middle levels. I think we all have to understand that DevOps is fundamentally a cultural change, and different teams are going to experience that change differently. If you're an IT leader or a security leader, you have to be really cognizant of that to help your teams move through that change.

[00:36:14] Guy Podjarny: Yeah, that's super insightful. Culture change is hard, and I think that as we say, "Hey, developers should embrace security; developers should pick this up," we don't always recognize that. In this specific podcast, we have a lot of security leaders, so there's maybe some empathy and some recognition of the difficulty of the change on the security side as well. But still, I think there's sometimes a tendency to glorify the developer part, how they're embracing security, and not appreciate the challenge that the people on the governance and security side face in the process.

[00:36:53] Gareth Rushgrove: I think that's especially true given the evolution of some of the words we're using, talking about operations and talking about infrastructure. You can get quite meta about what is infrastructure, what is operations, what do we mean, what is the application. All these words you can take for granted. You can have a conversation without even remotely thinking about, or agreeing on, the definitions.

But actually, if you step back, especially from a cutting-edge perspective, today versus 10 or 15 years ago, the words have very different meanings, and one of the things we see in surveys and conversations is that you can ask a question and it means different things to different people. There's not really a way around that. English isn't the ideal language; maybe you could avoid it in languages that are more precise, but we just reuse the words, and I think that's interesting. From a data perspective, it makes the job harder, but it matters from that empathy perspective too. What is an application? I'm an application developer. Are you doing the same thing as you were 10 years ago? Yes, you're building an application. But what's the scope of that application? That's changed.

We often talk about application developers, about application teams. Those teams have changed. Those teams now have people who would traditionally have been in a security org, separately, somewhere else in the building, and people who would traditionally have been down the road in IT. I think that's interesting. It's somewhat harder to get into the data with these questions, but I think it shines a light on some of it in an interesting way.

[00:38:37] Guy Podjarny: Spot on, and infrastructure as code is a case in point. It's, "Hold on. Should you secure it like infrastructure? Or should you secure it like code? Which of these practices apply?" So there's been a lot of data floating around here and different opinions, and I really like how the data often points in similar directions, with a lot of cross-verification, though not always in the same direction. Sometimes that's because we can ask a question in different ways, and sometimes because data is messy and complicated, and the same question can have two different answers if different people are answering it, or depending on the nuance of the question.

Maybe just to close off here, since probably most listeners are at the edge of their data capacity: I like to ask every guest coming on the show for one bit of advice. If you want to be set on a path to improve, what would that one bit of advice be? Ideally data-driven, but this is a place where we can slightly deviate into opinions or interpretations of the data. So maybe we'll do some quick advice from each of you. Alanna, why don't we start with you?

[00:39:48] Alanna Brown: Well, I think what the data actually shows us is that we need a more holistic approach to security, one that integrates security throughout the software delivery lifecycle: identifying security requirements during the requirements phase, following secure coding practices and standards, continuously testing for vulnerabilities and automating those tests, and comprehensive logging and monitoring in production. It's a whole bunch of things, but in totality, all of this relies on good collaboration, sharing, automation, and measurement – basically all the foundations of good DevOps practices.

[00:40:31] Guy Podjarny: Definitely. Basically, if you do DevOps well, you’d be on a much better path to sort of doing DevSecOps well in your process. Alyssa, what do you think? What would be your bit here?

[00:40:45] Alyssa Miller: Yeah. I'll actually build on what Alanna was saying, because this is something I've been trying to stress, especially in the security community, where DevOps kind of ran off and did its thing and we were caught on our heels. How do we get involved? How do we keep ourselves a part of this? It really comes down to the integration we're talking about. You'll notice when Alanna said that, she didn't say put security between those phases. Still, in security, we tend to think about implementing security in the pipeline as gates between phases of the pipeline. I actually just read a blog where they used the word gates. It's like, "No. Those are the things we need to break down." That's where the automation comes into play, for sure.

But it's also just that collaboration of individuals, having people functionally working together inside each of those phases. So, security is no longer this separate thing that occurs at the end of a phase. It's integrated into it, and now we're enabling the people who are responsible for those phases to complete those security practices, whether they're security practitioners, developers, business people, whomever they are. That's what I think, from a security perspective in particular, we really need to be focused on, because that's what's going to get our practices adopted as part of the pipeline.

[00:41:59] Guy Podjarny: Love it. You have to rethink it, change the way you think about it: not as security in between, but rather woven in, a natural part of the fabric. Gareth, how about yourself?

[00:42:12] Gareth Rushgrove: Yeah. I would drill down on one part of all that. One of the things we saw with the infrastructure as code work was that some of that low adoption and reticence around security [inaudible 00:42:24] was an expertise problem: people had low confidence. They talked about a lack of training, a lack of mentoring, a lack of support and guidance. I think that's a big opportunity, probably not just in that area but in anything new. If you're doing something new, appreciate early that it has security implications. Nearly everything does. And then fill that gap.

How do you fill the expertise gap? You go from having one person, who's going to be super busy as you scale something up, to scaling that expertise across a number of people. I think for new areas, focusing on expertise and really making it available matters. Your story about finding what doesn't work, finding what works, finding how to measure it, I think is very interesting. But, yeah, how do you get expertise broadly applied in a way that makes the rest of it easier later?

To our point about collaboration: if you can come up with good mentoring programs and training programs that bring people together and put them in the same room, that connective tissue can be super useful later. They'll learn the same things, and they'll learn them together. Hopefully then, they'll apply the same things, and apply them together.

[00:43:50] Guy Podjarny: Yeah. Perfect. You basically lend each other's expertise and work together. So, this has been excellent. Thanks to all of you for collecting this data and sharing it, and for coming on the podcast. To all of you tuning in, I hope you found this – the data itself and the format – interesting, and I hope you join us for the next one.
