
Season 5, Episode 70

Transforming Comcast Using DevSecOps Practices With Larry Maccherone

Guests:

Larry Maccherone


Security teams often adopt an untrusting and policing approach to development, creating confrontational relationships that only increase risk. For many companies, this culture of gatekeeping prevents the adoption of DevSecOps practices. But now the data is out! Having used agile practices to integrate DevSecOps into Comcast's development cycle, Larry Maccherone has shown that DevSecOps significantly reduces risk. On today's episode, our conversation with Larry focuses on his experience transforming Comcast's development teams. We open by talking about Larry's career and how he learned the importance of visualizing data in order to explain his research. Larry shares the pushback he experienced from security teams when implementing DevOps practices and how getting this approach to work involved a gradual onboarding process. We discuss the challenges that arise when you follow some DevOps practices but not others before diving into Larry's research. Despite having results that prove the value of DevSecOps, Larry talks about the unique problem that "you're never a prophet in your own town," meaning that people often fail to recognize innovation when it is developed in-house. Near the end of the episode, Larry talks about cloud tech before giving advice on taking your security to the next level. Tune in to this insight-filled episode and learn how you can transform your dev team.


[00:01:23] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer and thanks for tuning back in. Today, we have a really forward-thinking guest in this world of DevSecOps, Larry Maccherone from Comcast. We'll dig into a lot of his talks, though we don't have time to dig into all of them, unfortunately. Larry, thanks for coming on to the show.

[00:01:39] Larry Maccherone: Oh, thanks for having me, Guy.

[00:01:41] Guy Podjarny: Larry, you're a distinguished engineer focusing on DevSecOps transformation at Comcast. Can you explain to us a little bit about, first of all, what does that mean? And maybe take us a little bit back in history: how did you even get into security?

[00:01:54] Larry Maccherone: I’ll take the second part of that first, then —

[00:01:56] Guy Podjarny: — Sure. Go for it.

[00:01:58] Larry Maccherone: I'm an electrical engineer, computer science graduate from Virginia Tech. Started my first business while I was still an undergrad at Tech, actually. Grew it to 20 million a year in sales, 80 employees. We were in factory floor automation process controls. Our biggest client was GE Power Generation. At one point, 60% of the world's power generation was actually being controlled by software we had written.

If the bad guys could exploit a vulnerability in that, it could be really bad. You could bring down a power grid. We got really good at writing essentially vulnerability-free software. I spun out a second company, my second startup as a matter of fact, basically packaging this framework for other people to use. Carnegie Mellon got wind of it and invited me to be a founding director of their CyLab, the cybersecurity research institute at Carnegie Mellon. That's how I got started.

[00:02:47] Guy Podjarny: Yeah, that's very cool. In those cases, that first role wasn't security-minded. It was about functionality, building that technology, and security was just an important piece of it, which lured you into actually honing in on security as the profession.

[00:03:02] Larry Maccherone: Correct. I'm a developer. I was a developer back then, and I'm a developer now. I write code every day. I've got a dozen open source projects, one of which gets half a million downloads a month, and I still actively process all the pull requests and add features to those things. I stay very current with my technical skills, which I think is a big help in trying to get developers to do things differently. You have to relate to them and that helps.

[00:03:28] Guy Podjarny: Build that empathy. Yeah, for sure. Just before we leave that path, your history does point you out as a data analyst and focusing on that. Was that in between, was that part of the security journey, or was that a deviation off to the side?

[00:03:41] Larry Maccherone: People think of me as an expert in various different fields: process controls, cybersecurity, agile transformation. The real underlying theme for all of that is that my particular angle is always: what is the most effective measurement and visualization you can use to further those causes? I've been a specialist in that. That's the deep knowledge I have that I have used over and over again.

The new meta capability that I've grown in the last decade or so is really around studying the psychology of developers and the sociology of development teams and how you leverage those things to get people to change. It ties in really closely with the data side, because people respond to correlations that are visually very impactful to them. You have to have the visualization to get them to move to the new place you want them to move to.

[00:04:38] Guy Podjarny: Yeah, absolutely. I mean, it's amazing how properly describing something, visualizing it, choosing the right taxonomy, creates the mental model that people can work with. You can take the exact same actions, and if the mental model is flawed, or if it's not as compelling to the listener, or if it's just not clear, you're not going to get anywhere near the same results. I think perception is reality, and figuring out how you present it, how you guide it, what it is that you measure, is a core component of driving results.

[00:05:09] Larry Maccherone: I think that's been a fundamental problem in the security world, when security folks use a term or an acronym that they assume everyone knows in front of a developer. Best case, the developer says, "I don't know what you mean by that," and you get a chance to explain it. There are some cases where the words actually have different meanings to a security specialist than they do to a developer. They frequently go down the wrong path when you aren't very careful about the language you choose to use.

[00:05:37] Guy Podjarny: Yeah, absolutely. Let's dig into this transformation and this change. You've given a great talk at RSA, and I'm actually going to dig into some aspects of it. I found the talk fascinating as a whole and we'll put links to it in the show notes for people to watch the whole thing. The talk revolves a lot around how you measure, visualize, and transform Comcast, which I imagine also reflects your perspective on the market in this journey to embrace DevSecOps, or get developers to embrace security.

If you don't mind, let's maybe dig into that. Can you tell us, just at a high level, what is this project like? What is the initiative, and a little bit of context inside Comcast? What's the driver? What's the core philosophy before we dig into the details?

[00:06:17] Larry Maccherone: Yeah. Security tends to think of vulnerabilities as the security group's domain. They tend to ding the team, or police the team, or gatekeep the team with this list of things. It's very expensive, because there's a lot of follow-up. It's very ineffective and it's very confrontational. People on the security side get burned out fast and cycle through. The teams never really feel like security is helping; they always feel like security is the enemy.

The whole idea of agile transformation, at least the way I conducted it when I did that as a consulting job, and of DevOps transformation, is that you really have to build trust with the team, assume that they want to do the right thing, and recognize that they just need help understanding exactly what that is, plus a gradual learning curve. You don't want to give them all of security on day one. You want to give them three practices that they can focus on in the next few sprints and implement in their process. Then you want to come back and coach them on the next one.

The whole program I developed around that included a way of measuring it, a way of visualizing it for a single team, and a way of aggregating it for a whole org. That whole concept I developed for agile transformation. Then Noopur Davis, who's the CISO at Comcast, saw me give a presentation on that and essentially said, "Come work with me at Comcast. This is the right approach to getting development teams to change the way they behave. The way we're doing it right now is just policing them and beating them up and it's no fun."

[00:07:47] Guy Podjarny: How receptive was the team? It sounds like the CISO was on board, or in fact driving that change. Did it require much convincing within the security team itself? Was there a lot of jostling, and how did that go?

[00:07:59] Larry Maccherone: Yeah. I got the senior executive leadership of the company, Comcast, pretty onboard really fast. I always connect easily with the lowest-level development teams. Frequently, they have engineering managers, maybe two or three levels above them, that get it and really say, this is the right way to do it.

The middle-level management, especially the ones that didn't come from an engineering background, have a tough time with this approach. They really want to take a more rigid policing, or governance, approach. The security people absolutely rebel. It just doesn't match their mental model at all. They don't trust the developers. They start with the assumption that developers are going to put stuff out there that's going to get us hacked. As soon as you think that way, it comes out in all the language you use and the way you interact with folks, and they sense it immediately. You have to start with this alternative approach.

I think the thing that helped the most is the pledge that we came up with. I didn't actually have that before I got to Comcast, unlike a lot of the concepts. I found that I was having trouble getting the rest of the security group to essentially get on board with this mission, and so we developed this thing called the pledge.

Noopur at some point said, "You've tweaked it enough. This is the way we want to do business. Everybody must get on board, essentially," and so that's what happened. The pledge essentially starts with what I just said. We trust you. We trust that you want to do the right thing. We understand that you may need help understanding what that is. Then more importantly, we've got to make it easy for you to do the right thing. We've got to give you the easy path; the right thing to do should actually be the easiest thing to do. Our role is not one of policing and gatekeeping. It's toolsmithing and it's coaching. The tools we provide had better be really easy and consumable, work with your mental model, fit with your pipeline, etc.

[00:09:48] Guy Podjarny: Yeah. I think that's a powerful shift in perception, so defining the pledge sounds very valuable. Once it was defined and you told people, "Listen, you need to get with the program," did people then accept it? I mean, how much transformation do you need to do on just the mindset side of the security piece for this change to work?

[00:10:07] Larry Maccherone: Yeah. On the security side, it's a constant battle still today, but it's more and more people over time. This isn't the first time I've done this. If you remember back 15 years ago or so when the agile movement came about, the dedicated QA organization, the testing specialists who reported up differently and didn't report up through the product people, resisted the agile movement, because the agile movement essentially moved a lot of the responsibility for quality onto the development team.

That's actually how I define DevSecOps. It's empowered engineering teams taking ownership of their products all the way to production, which includes security. I have the same definition for DevOps as I do for DevSecOps. It is a constant struggle to get the security folks to either support it, in an ideal world, or just let me go, and then we'll see the results in the end.

Results are really coming in. I mean, just the last few months, I can point to data that shows an 85% reduction in risk for teams that are onboard to this. That's really hard for people to resist that. It's a very high bang for the buck. I only have 16 people in my team and we have a 400-person security organization. It's incredibly highly leveraged.

[00:11:24] Guy Podjarny: That sounds amazing and we're going to dig in. I do love the QA team analogy. I draw a lot of analogies to the DevOps transformation and talk about it to an extent. Agile is similar but not the same. Overlaps, but it isn't exactly the same.

[00:11:38] Larry Maccherone: I don't think you can do DevOps without having already accepted most of the concepts of agile itself.

[00:11:43] Guy Podjarny: Correct. But DevOps probably takes it further. It also loops in, or ropes in, not just a methodology, but also maybe a scope of responsibility. You could maybe say that QA, or agile, pulled in more responsibility for the quality of what you build, and then maybe DevOps pulled in a little bit more around operating it and running it.

[00:12:03] Larry Maccherone: Exactly. Exactly. I agree.

[00:12:05] Guy Podjarny: I also love that analogy, because of the same type of resistance and changes. I mean, once you draw these different analogies, it's easy to say, "Okay. Now let's follow in the same footsteps, because it worked in the past." With that, let's dig into that data, because I think you're quite unique in having these elements. Tell us a bit, how are you measuring the success here?

[00:12:25] Larry Maccherone: I described a little bit of how we engage with the teams. We have a list of about 45 practices, things that various different security experts think are good things to do. We have prioritized those. Essentially, we've organized them: here are the ones that are the best bang for the buck. Those are the ones we talk to the teams about first. We have a temporal history of teams that weren't doing practices and then adopted a practice, or two, or three, or four. Then we can also correlate incident and network-originated scan data to their moment in time.

That's how we did the research. We basically looked at teams that were doing this practice versus teams that were not. Then, to the degree we had temporal data, we looked at teams that switched from not doing the practice to doing the practice. We measured the impact of individual practice adoption on the lowering of those indicators of risk: incidents and network-originated scans.
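
A minimal sketch of that kind of adopters-versus-non-adopters comparison, in Python with pandas, assuming hypothetical per-team records of practice adoption and scan findings (the actual Comcast data set and analysis pipeline aren't described in the episode):

```python
# Hypothetical data: one row per team, whether it adopted a given practice,
# and a risk indicator such as network-originated scan findings per quarter.
import pandas as pd

teams = pd.DataFrame({
    "team": ["a", "b", "c", "d", "e", "f"],
    "adopted_practice": [True, True, True, False, False, False],
    "findings_per_quarter": [2, 1, 3, 14, 9, 11],
})

# Compare the mean risk indicator for adopters versus non-adopters.
by_adoption = teams.groupby("adopted_practice")["findings_per_quarter"].mean()
reduction = 1 - by_adoption.loc[True] / by_adoption.loc[False]
print(by_adoption)
print(f"Risk-indicator reduction for adopters: {reduction:.0%}")
```

With real data you would also look at the same team before and after adoption, as Larry describes, rather than only comparing across teams.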

[00:13:26] Guy Podjarny: Is this across all teams, or are you talking about a subset of the teams that are practicing DevOps already, or something like that?

[00:13:35] Larry Maccherone: Our goal is to onboard every development team inside of Comcast, about 500 development teams, about 10,000 developers total, including contractors. We don't have them all yet today. We have about 240 today that are fully onboard. We have another 130 or so that are in the queue. By the end of the year, we'll be at close to 80% saturation. It might take a long time to get that last 20%, because the thing that's mostly missing with those last 20% is that they aren't DevOps first.

I've been doing this for four years now. With a lot of the teams that I started engaging with, I was like, "You're not quite ready for this concept. Here's what you need: you need a CI/CD pipeline. You need automated testing in the pipeline. Then come back to us and we'll get you going."

[00:14:22] Guy Podjarny: Maybe another step on that maturation that we talked about before in terms of that agile and then DevOps and then DevSecOps. DevOps is somewhat of a prerequisite for most DevSecOps activities.

[00:14:33] Larry Maccherone: It is. It is. Yeah. Agile is somewhat a prerequisite for most DevOps.

[00:14:37] Guy Podjarny: For DevOps. Okay, cool. You have these 45 metrics. First of all, maybe you can give us four or five examples of the types of metrics, and then maybe we can talk a little bit about how you measure those.

[00:14:47] Larry Maccherone: We have 11 that we consider the essential 11 for 2020. Last year, 2019, it was the necessary nine. This is a moving target intentionally. We declared victory on one of the nine from last year, meaning the whole organization had adopted it, and we added three others, so that's how we got to 11. Nine minus one plus three.

[00:15:10] Guy Podjarny: You've mapped out the whole 45 and you just chose. Did I get that number correct?

[00:15:14] Larry Maccherone: Yeah, that's correct. It is 45.

[00:15:16] Guy Podjarny: You've chosen, of those, nine at first, and then three others now too. Okay.

[00:15:21] Larry Maccherone: We're pushing the envelope of adoption out into the organization, so we’re spreading that way, but we're also spreading the scope of the good practices that we're really expecting everyone to onboard to.

Let me describe a couple of them. The ones that are probably going to ring the most true to you guys are things like doing analysis for code imported in the pipeline. Guy, I know you've got a background in static analysis, SAST tooling. We consider that analysis for code written, because it's basically trying to find flaws in the code you wrote. I actually believe it's a 20X better bang for the buck to focus on analysis for code imported, which is more what Snyk is doing, and I know you're associated with them.

For analysis for code imported, you have to put it in your pipeline to run automatically, and that gets you one of the practices. Then you have to get the initial set of findings to zero, and that gets you another one of the practices. Then you have to turn on branch protection status checks in your configuration for your pull requests, and that gets you the third practice associated with analysis for code imported. Then it's the same stack for analysis for code written. We take a little bit of a different approach on that one, but yeah, it's the same basic idea. That's two plus two; that's four of the 11 practices I just talked about right there.
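
As a rough illustration of that kind of pull request gate (a sketch only, not Comcast's actual tooling; the report format, file name, and severity threshold here are assumptions), a required status check can be as simple as a script that fails the build when the scan report contains blocking findings:

```python
#!/usr/bin/env python3
"""Hypothetical CI status check: fail the build if the scan report
contains any critical- or high-severity findings."""
import json
import sys

BLOCKING_SEVERITIES = {"critical", "high"}

def main(report_path: str) -> int:
    # Assumed report format: a JSON list of {"id": ..., "severity": ...} objects.
    with open(report_path) as report_file:
        findings = json.load(report_file)
    blocking = [f for f in findings
                if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for item in blocking:
        print(f"BLOCKING: {item.get('id', 'unknown')} ({item['severity']})")
    if blocking:
        print(f"{len(blocking)} blocking finding(s); failing the status check.")
        return 1
    print("No critical or high findings; status check passes.")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```

Wired into the pull request as a required branch protection status check, a gate like this is what turns "run the scan" into "only merge secure code."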

[00:16:48] Guy Podjarny: Okay. They're very concrete, and you ran with it. I guess, how important was it to make them black and white: you've measured, you've achieved or you've not achieved?

[00:16:58] Larry Maccherone: Yeah. Well, we found when we started this that we weren't measuring it this discretely. We weren't coaching this discretely. People were taking credit for just running the scan, but never resolving the critical and high findings from the scan. They were spending more and more millions of dollars on licenses and spreading this further and further in their part of the organization. I was like, "Stop. We get zero value." In fact, it's negative value, and this is what the research showed, actually. Teams that just run the scans, but don't actually do the two other practices, high-severity clean and only-merge secure code, actually have a greater risk of something happening in production than folks that either don't do anything, or ones that do all three of those things. That was an interesting finding.

The mental model for project managers is, "Okay, we have 50 teams in our part of the org. Let's go get those 50 teams scanning. We'll worry about resolving the results of those scans later," and that later never happens.

[00:17:56] Guy Podjarny: Yeah, because it's hard enough to get those 50 scanning. I'm a firm believer in it. I like to say that it's a part of that switch from auditor to developer, because arguably an auditor's job is to find an issue, but the developer's job is to fix the issue. If you want developers to embrace this, you can't just create problems; you need a solution that actually aligns with it. I also fully agree with the value to the business, which is, until you've fixed it, you've created very little value. There's some value in visibility, so you can choose which issues to fix, but some issues must eventually be fixed for you to actually improve this.

[00:18:28] Larry Maccherone: I argue there's actually net negative value in even knowing. I know people like to say there's some value in knowing, but I think that's an excuse that gets used. I don't think it's actually true. From a legal perspective, if you know you have all these problems but you have a history of never resolving them, you actually have a greater risk from a compliance perspective than if you didn't know, if you didn't have a record of them.

That's a legalistic thing. The part that really makes it a net negative for me, for the most part, is the energy, the investment, the time. It's what people focus on. It's the money we spend buying these licenses. You run out of time and money and effort and enthusiasm and budget to actually get people to work on resolving. You want to take one small part of the organization, one or two teams maybe even, and get them all the way to high-severity clean and only merge secure code before you pick up another part of the organization.

It's not that discrete. We have a very continuous funnel going now. Our goal is to get every team that enters the starting point of that funnel to that really mature level as rapidly as possible. It's not to do this one half of a practice that's net negative value across the whole organization and then come back and do the thing that actually starts to produce value, because you run out of budget and time and energy before you get there.

[00:19:52] Guy Podjarny: Yeah, it makes a lot of sense. We're probably going to drill in a little bit on these measures, but before we do that let's complete the picture on the data. You talked a little bit about what are the types of things that you measure. How did you assess risk in this model?

[00:20:02] Larry Maccherone: These are the things where, if you did this, your risk would go up or down. The Y-axis is some proxy for risk. We had an okay proxy for risk for the first round of research we published at RSA. We're working on much richer indicators of risk and will have them when we update this, hopefully for the next RSA. The ones that we had for the RSA research were network-originated scans.

This is essentially stuff that's in production, that's on the wire, on IP addresses associated with Comcast, where if you just inspect that IP address with a port scan, you can actually find some vulnerabilities. This is the way a lot of attackers start their attack. If you give them a lot of attack surface there, they have a much better chance of succeeding in their exploit. If you don't give them any attack surface there, then you greatly reduce the risk. That was the primary proxy, and we had a lot of data from a lot of teams for it.

We had 158 teams' worth of data we could correlate between the practice adoption over time and actual network-originated scans. Then we had sparser incident data: there was an actual exploit, we captured it in this log, and we had to shut something down or do something differently to recover from it. There was a whole incident management process that got executed for that. We should have much better incident correlation next year than we did in the past.

[00:21:33] Guy Podjarny: Although, I do love the iterative approach, just like you can iterate on the measures that you're coming to, you can also iterate on measuring risk. You measure something, as long as it's directionally correct, you've made an improvement on it. Ending the suspense a little bit, tell us a little bit about the insights from the data. You're measuring it. You've got this big data, big budget data here. You're measuring it to risk. What did you learn?

[00:21:53] Larry Maccherone: A lot of what we learned confirmed what the experts have been saying. That's not that interesting, but I'll list them as quickly as I can off the top of my head. Only merge secure code and high-severity clean: these are two of the practices that are dependent upon scanning, but scanning alone doesn't get you them. Those were very high. Conducting pen testing activities on a regular cadence, getting threat modeling done on a regular cadence, doing secrets management effectively, having a strong process for dealing with vulnerabilities reported from outside the team. For example, the group that runs the network-originated scanning will lob findings over to the team, and some teams just have a good process for dealing with that, while other teams have a history of taking months to respond to those findings.

These are all practices that turned out to be highly correlated with risk reduction. If you did more than a handful of those, the overall risk reduction from the teams that adopted most of those practices was 85% compared to the teams that had adopted none of those practices. It's pretty dramatic.

[00:22:59] Guy Podjarny: Are you measuring all 45, or you're just measuring the nine and then the 11?

[00:23:04] Larry Maccherone: Yeah. We don't have enough data on all 45. We have data on the nine from last year and the additional three from this year, and a handful of others where enough teams have adopted them that the results reach statistical significance, but we do not have data that we could call academically publishable on all 45 at this point.

[00:23:25] Guy Podjarny: Yeah. Did you use this data then to go to those teams that are not applying those practices and tell them, "Hey, you're going to be high-risk"? I mean, there are the high-level, established executive mandates behind it, but in the trenches, when you go off and talk to those teams, was this convincing to them? Do you feel minds have changed?

[00:23:43] Larry Maccherone: Yeah. I think so, but I think that the reason for that is subtle and nuanced. Have you ever heard the phrase ‘you're never a prophet in your own town’?

[00:23:52] Guy Podjarny: You're always the hero in your own story.

[00:23:53] Larry Maccherone: You're always the hero in your own story. I never had that in real life. The 'never a prophet in your own town' thing, I thought was really interesting when I had my first startup. When I would travel 30 miles down the road to GE Power Generation, they're like, "Oh, that guy is just from Blacksburg. He's right here. He can't be that much of an expert." If I would travel to Houston, where Exxon was a client, they're like, "Oh, well, he came halfway across the country. He must be a real expert." Then if I went to Europe, they're like, "Wow, he came all the way from America. He must be a super expert."

You literally have to overcome this. The worst-case scenario is that you work for the same company as the folks you're trying to influence, because they're like, "Oh, that's just a guy down the hall. I don't have to listen to him. He doesn't really know what he's talking about." That's sort of a fact. This is why I do podcasts like this and why I give talks at RSA: I actually get a lot of folks who I've never met at Comcast come up to me, or send me a message on the Comcast internal email saying, "Hey, I saw your talk at blankety-blank."

When I get introduced by those folks to the rest of their part of the organization, it's instant credibility. The data is a big part of that, because they're usually inviting me in to redo the RSA talk, essentially, as a lunch and learn for their part of the organization. I don't ever beat up individual teams with individual data. We do celebrate very healthy resolution curves; for instance, we broadcast them widely and say, "Hey, in three months they resolved 40,000 things and they've kept it at zero for the last two and a half months already. Look at this cumulative flow diagram. Look how pretty it is," etc. We never really ding people with data. We just —

[00:25:33] Guy Podjarny: Yeah, let's celebrate success. So you're still using it to encourage action, just in a positive fashion. Another DevOps mindset versus the common security practice that is a bit more stick than carrot. Can you tell us a little bit about some findings that surprised you, that were not what you expected? You talked about the obvious ones; what gems were in the data that changed your view, or that you weren't expecting?

[00:25:55] Larry Maccherone: This is the really interesting part. I don't know if you saw the opening skit I had in the RSA talk. I pretended to be a Dick Tracy character and it's a whole investigation. When what you thought was true gets confirmed, that's okay. That's good. When you actually are surprised is when the story gets really interesting.

"My gal, the truth, she ain't always kind," is the tagline from that skit. The one I've already talked about is probably the most valuable one, and it makes complete intuitive sense to people. It's not that surprising from an academic perspective, but looking at the way organizations actually tend to do it, they don't follow what their intuition would tell them is the right thing to do: you can't just run scans. You have to actually resolve the findings.

The best way to get into a habit of resolving them is to put it in the pull request and never merge code that has any findings that violate your policy settings, and then you slowly change the dial on the policy setting. Yeah, that's probably the most valuable finding we had. Another thing we discovered, and this one was really disappointing, relates to training. The secure coding training was in that list of seven or so really highly valued practices: getting every developer on the team through this two- to three-hour war game, man against machine, learning what the OWASP Top 10 vulnerabilities are for your particular tech stack.

We use a product from Checkmarx called Codebashing for that, and it's just wonderful. We love it. That correlated highly with risk reduction. Pretty much the rest of our training program did not. We have a ninja program; we call it green belt and brown belt and black belt. We found that the folks that were going into that were not necessarily doing it because they wanted to improve the security risk of their team. We had a huge selection bias for the folks going into the green belt and ninja program.

When we asked managers, “Why did you pick Joe to go get the green belt for your team?” They're like, “Well, Joe's not really very good at anything else, so that's why we're going to send him over to the security people and maybe he'll be good at that.” That's not a good indicator it's going to be highly effective at changing the team's behavior.

The other thing is that maybe there's a selection bias in the positive sense, the other way: the teams that already have what I would call intermediate-level security skills are already doing these things, and they don't see the value in 40 hours of additional training to get more of that. Those teams essentially opt out, because they're beyond what that training would provide them. Yeah, that was disappointing, and we're revamping that program right now to try to address some of those concerns. We're putting the emphasis almost completely on the secure coding training.

[00:28:40] Guy Podjarny: Those sound like super valuable learnings. It might be disappointing, but as you point out, those are the most interesting findings because they can actually change your behavior; they're not the obvious thing that you thought you were going to do. It's super valuable.

I have a question about the specific practices that you named, and maybe I'm just not tapping into one of the 45. You talked about stateful things: you're introducing a test into your pipeline, you've got your backlog down to zero. What are your thoughts about more of a gradual approach? There's a set of conversations that are more around the leaky bucket: fix the holes in the bucket first. So focus more on not introducing new problems, focus on the gradual improvement. Do you have any thoughts on that versus measuring the state? Is it more important to adopt a practice where you're not making things worse day by day? Or is it more important to use our energy to eat away at the backlog, or historical risk?

[00:29:39] Larry Maccherone: I used to have practices in the framework around your security policy, your team security policy. The first setting of that policy dial was to stop the bleeding, meaning, let's just focus on not introducing new things. Then let's move on to getting to high-severity clean. It's still fundamentally in the system, in the framework. The theory behind it was pretty sound: the habit you get into by installing it into the pull request pipeline to block merges is what matters, so let's set the dial really low and get that automatic feedback mechanism going first.

In reality, we found it very difficult to do that. We found that teams wanted to see all of their high-severity findings, and their management wanted to see all of those resolved. It was very hard to get them to even think about things like turning on a branch protection status check with a very low policy dial. Why would we have a very low policy dial? We want all the high-severity things done. It was very hard to get over that hump.

We literally switched gears and we now do this progression. Start the scans, automated feedback, get it to zero for the first time, turn on the branch protection status checks. We found that to be much more effective actually than starting with the branch protection status checks with a really low policy dial.
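
To make the policy dial idea concrete (an illustrative sketch only, not Comcast's tooling; the severity levels and example findings are assumptions), the dial is essentially a configurable severity threshold on that same pull request gate, which a team tightens as it matures:

```python
# Illustrative "policy dial": a configurable severity threshold on the PR gate.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def blocking_findings(findings, dial="critical"):
    """Return the findings at or above the current dial setting.

    A permissive dial ("critical") only stops the bleeding for the worst issues;
    tightening it to "high" or "medium" is how a team ratchets up over time.
    """
    threshold = SEVERITY_RANK[dial]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]

findings = [
    {"id": "VULN-1", "severity": "critical"},
    {"id": "VULN-2", "severity": "high"},
    {"id": "VULN-3", "severity": "medium"},
]

print(len(blocking_findings(findings, dial="critical")))  # 1: permissive starting point
print(len(blocking_findings(findings, dial="high")))      # 2: dial tightened
```

In practice, as Larry describes, the progression that landed better was: get the scan running with automated feedback, drive the findings to zero once, and only then turn on the required status check, rather than shipping the gate first with a loose dial.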

[00:31:03] Guy Podjarny: That's interesting. It sounds like the pushback was from the dev teams, I presume? You said "the teams" there; was it the security teams or the dev teams that were pushing back?

[00:31:12] Larry Maccherone: It was really more when you go to present the outcomes to their management, the dev teams' management. They were like, "Well, that's great. You have stopped the bleeding for these two or three OWASP Top 10 things, but what about the rest of them?" That conversation was just stressful and unhappy most of the time, and so we switched to this model.

[00:31:34] Guy Podjarny: Yeah, interesting. Maybe that goes alongside the same notion of wanting to scan everything; there's still some bias in favor of breadth: I need to know all of my risk, or apply this to everything, before I apply —

[00:31:47] Larry Maccherone: We've been successful in basically saying, these are all your critical and high-severity risks and we're going to ignore the mediums and lows for now, but we were not successful in actually rolling out a stop-the-bleeding practice.

[00:31:59] Guy Podjarny: This is a great practice. You're clearly very deep here. You have a dedicated team, you have executive support for this. I think you've built some software to help apply this. If someone listening wants to apply, wants to start down this path of using these models, using those practices, what would you say are the best first steps? What's the minimum for them to get going to start instilling those practices, other resources you would point them to?

[00:32:24] Larry Maccherone: I get this question all the time. The thing I start with is: decide to do a DevSecOps transformation. That is the key thing. You have to get enough of the high-level view, read the blog posts, go to conferences, listen to Gartner and Forrester, to be confident that you want to do this.

Once you decide to do that, the most critical thing you can do, and this is where it falls down most of the time, is this: do not hire a security-specialist-only person to lead this effort. You have to hire a distinguished-engineer type of person, or better yet, and this is how I've been successful most of the time, put the 10 most respected engineers in a room and say, "Here's what we want to do. Here's the vision. Here's the concept. We realize that as security people, we won't have the credibility to pull this off. Are one or several of you willing to take the lead and do this for us, or with us, in partnership with somebody on the security side?"

That key into the engineering organization is really where most folks fail. They basically send somebody out there who's used to policing, and they never get to cultural transformation. It never happens.

[00:33:42] Guy Podjarny: Yeah, that's a great observation. I think a lot of it comes down to that empathy element, and the question is how much empathy can you really achieve, and also how much empathy do you get on the other side. That element of "the most respected," your choice of words there: somebody who is already respected by the technology teams, by the engineering teams, taking this on is just far better set up to achieve a transformation in how those teams organize. They might be challenged on the security side. They might need a peer over there to help change minds.

[00:34:12] Larry Maccherone: Yeah. You know what? I found, and maybe I'm biased, that it's a heck of a lot easier to teach security skills and practices to an engineer than it is to get a non-engineer to have credibility with engineers. That credibility on the engineer's part is the key element. I pretty much only hire developers, and I make sure they continue to be developers even after they come on my team.

We build a lot of tools that these teams consume, and your job one day might be helping someone consume a tool by helping them integrate it into a pipeline. The next day, you might pull a feature off the backlog to add to that same tool you coached them on. This idea that we still do development is really key to our concept.

[00:34:59] Guy Podjarny: Yeah. I mean, fully agree. Before I get to my typical final question, one more question for you. We talked about the practices. We talked about DevOps and how DevOps moves more of those responsibilities into these autonomous teams, and they need to embrace some of this responsibility. The other thing that I, at least, talk about a fair bit is this notion of cloud and how cloud changes the scope of the app, tied in with the change in DevOps around independence. Now that independence includes making decisions around infrastructure, like containers, like configuration, those elements. What's your view over there? Is it the same practice? Is it the same people? Is there another portion coming not out of the appsec team, maybe, but rather the cloud sec team, that needs to follow this? What's your perspective here, either personal or within Comcast?

[00:35:45] Larry Maccherone: Yeah. Within Comcast, and from what I've seen in the industry, there are very effective dev-first thinkers in this "let's go cloud" leadership, even at large organizations that have been around a while. At Comcast, I think ours is one of the best. I think the group that is leading the push to the cloud really gets it. They are fully supportive of this concept of DevSecOps.

If you're doing DevOps right, you're doing security anyway. It's a part of what we expect you to be doing, etc. I think the interesting thing about all of this, and this is one you hinted at and I'm pretty sure you think about a lot but didn't really dig into, is that when you get empowered engineering teams taking ownership of something like ops, or quality, or security, they do it fundamentally differently, and they almost always do it with code.

Tests go from being manual testing suites to being automated testing suites. Ops goes from, "Here's a checklist of things you use to stand up a new server," to, "Here is the code we execute to automatically provision a new VM, or a new container," etc. The same thing happens with security. We want it to be more like that: security as code. We really want automation. We really want direct feedback to the engineering organization. I think those are the keys there.
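
A tiny sketch of that security-as-code idea (purely illustrative; the rule set and config shape are hypothetical, not anything Comcast-specific): the kind of guardrail that used to live on a deployment checklist becomes an automated check that gives the team direct feedback in the pipeline.

```python
# Hypothetical policy-as-code check: a rule that used to live on a deployment
# checklist ("containers must not run as root") expressed as code that runs
# automatically and fails fast in the pipeline.
def check_container_spec(spec: dict) -> list:
    problems = []
    if spec.get("run_as_root", False):
        problems.append("container must not run as root")
    if not spec.get("read_only_root_fs", False):
        problems.append("root filesystem should be read-only")
    for port in spec.get("ports", []):
        if port < 1024:
            problems.append(f"privileged port {port} requires explicit approval")
    return problems

spec = {"run_as_root": True, "read_only_root_fs": False, "ports": [80, 8443]}
violations = check_container_spec(spec)
for problem in violations:
    print(f"POLICY VIOLATION: {problem}")
raise SystemExit(1 if violations else 0)
```

The specific rules matter less than the feedback loop: the check runs on every change, and the team, not a gatekeeper, sees the result.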

[00:37:10] Guy Podjarny: No, that's very well said. We've already gone way longer than we should and there's just so much more still to ask you, Larry. Really appreciate all the insights. I'll ask you for one more. If you have one bit of advice for a team looking to level up their security-fu, something they should start doing, they should stop doing, what would that be above and beyond all the great advice you've already shared? Maybe they need to look for that a little bit.

[00:37:32] Larry Maccherone: Well, let me take the meta question, and I'm going to give the answer for two different groups. If the answer is being directed at a security group, a security team, the first thing they need to do is to figure out how they're going to, A, trust their engineering teams and, B, build trust with their engineering teams. I've got this blog series called the Trust Algorithm for DevSecOps, and it's really targeted at security specialists and security specialist groups. That's the number one thing I think they should do: figure out, A, how they convince themselves to trust the engineering teams, and B, how they build that trust back in the other direction.

On the team side of things, let me go a little meta with you as well, because we've drilled down pretty low level into specific practices. I would say, learning by doing is a fundamental concept of agile. It is really a fundamental concept of DevOps and DevSecOps in my opinion as well. You have to put stuff out there and you have to see how it behaves in the real world and you have to respond to that feedback and you have to be somewhat deliberate about designing those feedback loops and those metric feedback systems.

The folks that are doing DevOps really well, let's say the SRE-oriented folks, are completely metrics-driven in how they decide what to work on next and what they're going to do. I think that is fundamentally the key: learning by doing and having tight feedback loops.

[00:39:03] Guy Podjarny: Yeah. Both sound pieces of advice. Thanks a lot. Larry, this has been great. Thanks a lot for coming on to the show.

[00:39:08] Larry Maccherone: Thanks, Guy. Glad to be here. Thanks for having me.

[00:39:11] Guy Podjarny: Thanks everybody for tuning in. I hope you join us for the next one.

[END OF INTERVIEW]
