A secure organization requires a large amount of buy-in from beyond those immediately concerned with security. This can prove a challenge at certain companies, and security leads should understand the importance of facilitating a shared vision of priorities. Joining us on the show to talk about his role and team at Pearson is DevSecOps Lead, Nick Vinson. Currently heading up a team of engineers focused on security, Nick has been a driving force in getting the company up to speed on the security front for the last couple of years. We hear from Nick about his history in DevSecOps and how he landed in his present role. From there, we dive into the ins and outs of security in general, as well as aspects specific to Pearson. Nick shares his philosophy on team involvement and embedding security-focused engineers, unpacks Pearson's approach to security champions, and emphasizes the importance of this work. We talk about the primary goals for Nick and his team, the importance of adoption and investment in this area, and Nick's perspective on the most effective ways to achieve both. Our guest also illuminates some specific practices around testing, challenges, and expectations, and listeners can expect to come away with some great insider knowledge on running a forward-looking security program. For all this and a whole lot more from Nick and Guy, be sure to listen in!
Season 6, Episode 84
The Future Of Security Teams And Champions With Nick Vinson
Nick Vinson
[00:00:17] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.
This podcast is sponsored by Snyk. Snyk is a dev-first security company, helping companies fix vulnerabilities in their open source components and containers, without slowing down development. To learn more, visit snyk.io. S-N-Y-K.io.
Happy New Year and welcome back to The Secure Developer. To kick off the first episode of 2021, Guy Podjarny, President and Co-Founder of Snyk, is joined by Nick Vinson. Nick is currently leading a team of highly skilled engineers in defining and implementing a security engineering function within Pearson and has been driving a DevSecOps transformation for the past two and a half years. Over the last decade, he has worked as a DevSecOps consultant, penetration tester, devops engineer and sysadmin in a variety of companies and sectors.
We hope you enjoy the conversation and don't forget to leave us a review on iTunes if you enjoyed today's episode.
[INTERVIEW]
[00:01:38] Guy Podjarny: Hello, everybody. Welcome back to The Secure Developer. Thanks for tuning back in. I’m really excited for today's guest, who's definitely been making an impact in terms of DevSecOps in a large organization. That's Nick Vinson, who is the DevSecOps Lead at Pearson. Nick, thanks for coming onto the show.
[00:01:54] Nick Vinson: Thanks, Guy. Good to be here.
[00:01:56] Guy Podjarny: Nick, before we dig into transforming big organizations and what we do there, can you tell us a little bit about yourself? How did you get into security and maybe, all the way to what is it that you indeed do at Pearson today?
[00:02:08] Nick Vinson: Yeah, sure. Starting at the beginning, I got into security from an early age, when I was a teenager, watching The Matrix and Neo scanning the Internet with Nmap. I got playing around with lots of different security tools, which led me into Linux. Yeah, with my experience there, I went into working as a Linux sysadmin and network admin in my first few roles, then transitioned through to devops engineer, as devops was emerging in the industry. Still with an interest in security, I eventually went on to be a DevSecOps consultant.
Then I went into Pearson, where I am now. I went in as the security lead for their Kubernetes next-gen platform as a service, something called the global learning platform. From there, I became responsible for all of the security engineering within something called the product security office at Pearson. That deals with all of the applications we make, as opposed to enterprise security, which deals with the in-house supporting IT tools. Also, we have a secops team too.
[00:03:12] Guy Podjarny: That's really interesting, the path there. I guess, let me dig into a couple of words that you threw out in terms of team or title. One is you said you started in cloud security, and then you went into, I guess, your current title, which is DevSecOps Lead. Is cloud security a part of that product security office? Is it perceived the same? How do you define those two roles in the context of Pearson?
[00:03:38] Nick Vinson: Yeah. I mean, I suppose the boundary between application and cloud infrastructure is obviously looser than it was before, because with infrastructure defined as code, application teams are often managing and deploying that themselves. Really, it's all treated the same in terms of security engineering. We're a flat team, and we have areas of ownership and particular responsibility.

One of my colleagues, Owen, is responsible for platform security. We still have a demarcation there, but it tends to be a very, very collaborative effort, obviously using very similar techniques.
[00:04:14] Guy Podjarny: Got it. Okay, so there's a product security office. Within that, it includes the infrastructure security, cloud security, platform security. It also includes the security of the applications themselves, maybe the more SaaS custom code, and those are all under that product security office. Is that –
[00:04:31] Nick Vinson: Yeah. Yeah, that's correct.
[00:04:33] Guy Podjarny: I’m curious. Does product security and enterprise security, is that the other angle to it? Do they report to the same person?
[00:04:40] Nick Vinson: The senior vice president is responsible for what we call a tech assurance division. That also includes QA. Yeah, within enterprise security, that's all the business-supporting functions, so things like Office 365 and all the HR functions and that sort of thing. The enterprise security department has its own director and vice president, and the same goes for product security and also secops.
[00:05:04] Guy Podjarny: Got it. Okay, so ops is still called out a bit differently. You've got secops, you've got product security, you've got enterprise security, and then you've got quality assurance, a QA function, still.
[00:05:14] Nick Vinson: Yeah, they're still split out as vertical functions, but there's lots of cross collaboration between the different pillars.
[00:05:21] Guy Podjarny: Yeah, it's an interesting model. I’m a big fan of that initial core split that you mentioned, between product security and enterprise security. I find that to be a healthier division than necessarily infrastructure and code in today's world.
So Nick, within this team you are the DevSecOps Lead. What is the primary motion, or effort, that you and maybe your peers are aiming for? What are the key responsibilities and goals?
[00:05:48] Nick Vinson: Our mission statement is to ensure our products are secure by design and we're providing the teams with the tools and the knowledge they need to be able to become self-sufficient for their own security, in a nutshell.
[00:06:00] Guy Podjarny: Yeah, it's a good approach. It's about making the default secure. I like the core mission statement. Let's talk journey a bit. I know you've been talking and investing and implementing a fair bit around rallying the org and changing culture to have more of a security mindset. Tell us maybe a little bit of that journey. How did that happen within Pearson? What were the steps in mobilizing the org, what was the state at the beginning, and how did it evolve to where it is today?
[00:06:33] Nick Vinson: Yeah. In terms of the journey, I think, well, to get going, there was executive buy-in that they needed to care about security and that it was going to be something they would invest resources into solving. The challenge was knowing how to do that. That was what me and my team were brought in to do. We set about that by defining an engagement model, which was the set of processes we were going to go through with the teams, with the aim of training them up and allowing them to become self-sufficient for their own security.
Part of that journey was initially partnering with the first set of business-critical teams, which had been identified to invest resource in because their products were going to be direct to consumer, and the trust of the users in their security was paramount. We partnered with those teams and embedded our engineers into them as fully contributing members. We carried out our processes, which were threat modeling to identify what the security risks and vulnerabilities were, and then identifying what security controls we required to mitigate those risks.
In the meantime, implementing our automated security testing capabilities into their development life cycle and ensuring that the teams knew what to do with those, rather than just box ticking to say, “Yeah. We're doing static analysis.” We wanted to make sure that we're getting that feedback as quickly as possible to developers and that it had enough context for them to understand what they needed to do with it.
[00:08:04] Guy Podjarny: What did that embedding look like? On one hand, you're talking about providing a service to the teams, running it for them, using words like getting it to the teams. On the other side, you're talking about embedding in the team. What do you mean when you say embed into the team?
[00:08:18] Nick Vinson: Yeah. It would be as if they were getting a new team member, a new developer, a new ops engineer. You go through all of their onboarding processes. As a security engineer, you're still able to make an application change, test that in the dev environment, and then push that through the CI/CD pipeline and get it all the way to production. You would be a fully contributing member of that team. Your responsibility there would just be on the security side of things: helping implement some of those identified security controls and acting as the subject matter expert for the rest of the team to ask questions of. For example, say they wanted some advice about a particular issue, they'd go to you as the SME and you'd be able to add something, which would then feed back into an acceptance criterion (AC) for their user story.
[00:09:05] Guy Podjarny: Is this assignment temporary? Is it permanent? Is this person basically now assigned to the team and that's their manager? How do you think about when does the embedding end?
[00:09:14] Nick Vinson: It really depends on the team, to be honest, and how they're resourced. There are some projects where, yes, until that changes, there is a permanent member, or members, of our security engineering team embedded within them. For others, especially the smaller teams, it wouldn't be an efficient use of resources, so we have an engineer acting as a liaison between multiple teams. They might not be attending all of the stand-ups and planning sessions, but they'll be going to them when they're needed.
[00:09:39] Guy Podjarny: What's the rough ratio that you aim for, or that you eventually landed on? When you think about small teams, large teams, how many developers are around one of these embedded security engineers, ballpark?
[00:09:52] Nick Vinson: To be honest with you, for the embedded engineers, there's no hard and fast number. We manage this with a Kanban board, trying to make sure that we're limiting the work in progress of each engineer. The real strategy for deploying this at scale is the security champions program we have, where we have a nominated developer in each team who's responsible for security, and then we train them up. Our main function as a security engineering team is the training, the knowledge transfer, to the security champion, who can then spread that throughout the rest of their team.
[00:10:23] Guy Podjarny: Got it. I love the approach. It's almost like a bootstrapping approach. Instead of just finding and warming up, or getting up to speed, a security champion within the team and building up their security knowledge, you're accelerating that process by embedding a security engineer. In parallel, for future autonomy, you're building up security champions who can reduce the dependence on, or the need for, that security engineer. Is that the right way to think about it?
[00:10:46] Nick Vinson: Yeah, it's pretty accurate. I mean, in terms of the resource embedding as well, that would really just come down to the importance of the project and the time constraints upon it. By being able to put people already up to speed with these things directly into those projects, you're able to achieve the same thing quicker.
[00:11:02] Guy Podjarny: Got it. This embedding sounds like a powerful way to get the DevSecOps motion, or maybe get more security embedded. What are some other steps needed as you're spreading security through the org and the dev team?
[00:11:16] Nick Vinson: I think one of the biggest challenges we've got is getting both key stakeholders in the tech org and the developers to really appreciate the importance and value of security. I think in an ideal world, there'd already be that understanding. As there isn't, we've had to try and improve that. We try to do that by improving understanding and empathy through education: carrying out training exercises and doing offensive security attack demos, so that everyone involved can get an understanding of why we're identifying specific security risks, why we're assigning a specific severity to them and how we're trying to prioritize the mitigation work.

I think improving the understanding of the importance of security is really key, because a developer isn't really going to have much motivation to do something if they don't see the value in it. That's the same for a product owner, or a dev manager, or even a software engineering director. If they don't empathize with the value of security, they don't really have any impetus to prioritize it over what they would consider their feature work.
[00:12:17] Guy Podjarny: Yeah. I fully relate to that. Are there specific – you threw out a few names of techniques that you've used to improve security. Do you have a favorite? Which technique proved to be the most effective, in your experience, in raising that type of empathy or interest in security among developers?
[00:12:34] Nick Vinson: I think, really, you've got to tie everything together for it to be most effective. You can chuck a secure coding practices doc, a 36-page document, at teams, but that's not really going to have much of an impact. I think what you need to do is put it into bite-sized chunks and have regular sessions where you're going over a particular thing. Just as an example, you could have a session focusing on protecting against XSS. Or you could have some sessions where you're talking about why, if you're using a JavaScript framework which has got some good secure defaults, it's not a good idea to go off track from that and start doing things like creating your own cryptographic functions.
You focus on a specific area. Then like I said, as we're identifying what these security controls are, we're populating those as Jira stories from our threat model. We want to put as much information and context into those stories as possible. We'd link to our training material and also link to any attack demos we have, so they get it from both sides. They can see the actual training material and a demo of it in action really.
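As a flavor of what one of those bite-sized sessions might cover, here is a minimal Python sketch of the XSS point (the episode itself doesn't include code, and these function names are hypothetical): the same untrusted input interpolated straight into HTML, versus passed through the standard library's escaping.

```python
from html import escape

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable: untrusted input is interpolated straight into markup,
    # so a payload like <script>alert(1)</script> runs in the browser.
    return f"<p>{comment}</p>"

def render_comment_safe(comment: str) -> str:
    # The framework-style secure default: escape by default, so the same
    # payload renders as inert text.
    return f"<p>{escape(comment)}</p>"

payload = "<script>alert(1)</script>"
print(render_comment_unsafe(payload))  # <p><script>alert(1)</script></p>
print(render_comment_safe(payload))    # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```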
[00:13:38] Guy Podjarny: That's a good investment. I'm tempted to wonder whether at some point you could publish some of those elements for others to consume. When developers get offered all this great content that you've created, how have you seen the pickup by developers? There's oftentimes a lot of cynicism: yes, you can create great insights and views, but developers won't care; they will just seek the minimum, versus the maximum, understanding of the issue. What has your experience been?
[00:14:09] Nick Vinson: I think, before we tried to link it all together, it was more challenging to get the enthusiasm, because it was just seen as us creating a backlog of security items. Although we tried to make that as easy as possible, in terms of having them all already prioritized, with enough context and clear ACs in there, and we would also provide assistance via our security engineers, it still felt like we were injecting a body of work, and the enthusiasm to implement it wasn't that high.
Since we've been delivering regular training and attack demos through our security champions program, we've seen the enthusiasm for that increase dramatically, along with the willingness of developers to nominate themselves to be security champions. I think in particular, the offensive security attack demos have gone down really well.
[00:14:57] Guy Podjarny: Got it. Yeah, it makes sense. It's about having that visceral feeling of what it means to not do it correctly. I want to drill a little bit into this security champions program. It's definitely a recurring theme that's growing in adoption. I'm a fan. What are some principles? You've already alluded to a few. You're using them, it seems, as a destination, a group that can get these types of trainings or sessions. What are some other aspects of the program and how does it manifest? What does it mean to be a security champion?
[00:15:29] Nick Vinson: The first thing, really, the baseline requirement, is that a security champion needs to be a contributing member of that development team. It's not something where you can bring in someone external and just put them in there; they'd initially need time to onboard as a new developer. Ideally, it's someone who's already an existing developer on that team, first off.
Otherwise, we have a regular cadence of team meetings where the security champions can get together and discuss any particular issues they have and, like I said, a regular forum where we can do ad hoc demos. We also encourage anyone who has anything, whether it's a security champion or one of my security engineering team, to go and demo anything they feel is interesting. That collaboration is definitely something we try to foster and encourage as much as possible, because it helps organically raise knowledge across the board.
[00:16:19] Guy Podjarny: Very cool. Well, I think at the end of the day, it sounds like a recurring theme of involvement. On one hand, you talk about involving security in development, making them contributing members of the teams, and then you take people who are already contributing members of the team and make them security champions. That's basically trying to get people coming from opposite starting points, maybe, to the same destination: being a contributing member of the dev team while having a security mindset and security awareness.
[00:16:47] Nick Vinson: Also, the more security champions we have, the more developer feedback we're getting. That's helping improve what we're doing in terms of the security capabilities and testing functions that we're providing. Examples of that have been streamlining the onboarding process, so that it can just be a Microsoft form. Also, identifying if there are any particular issues with any tooling, or the way it's being used, so we can take that back and keep incrementally improving it.
[00:17:15] Guy Podjarny: Just one more question maybe on the security champions. Do people that get defined as security champions, are their managers aware of it? Is there an expectation that some X percent of their time will be spent more on security? Or is it just a competency?
[00:17:29] Nick Vinson: It's quite flexible, to be honest. It will depend on each team. Yes, normally the expectation is that the majority of their time will be focused on security. There's no hard and fast rule there, to be honest. It can often be contextual on the size and scope of the project too.
[00:17:44] Guy Podjarny: Got it. I’d like to maybe move up to the start. You started this story, saying that you already had executive buy-in to get going and really, the focus was how do you do it, not if you do it. How significant was this? Can you tell us a little bit about what it meant to have executive buy-in? Who was bought in? How did that manifest and was that even terribly significant, or is it really all about the trenches?
[00:18:10] Nick Vinson: In terms of executive buy-in, it came from the CISO, who had the CTO's ear. Consequently, it was really top-down in the technology org. Right from the top, there was the understanding of the need for security. In practical terms, what does that mean for implementation? It's tricky, because really, you're dependent upon the software engineering org and the leaders there, as you get closer to the development, also buying into that value, which doesn't necessarily filter down automatically.
I mean, basically, it enables us to be able to do what we need to do in terms of setting up this function, but our ability to deploy across the teams is also dependent on the collaboration with a lot of the other tech leaders.
[00:18:58] Guy Podjarny: It's always a combination of the two, right? You need the mandate to have these conversations with the managers, but if the practitioners don't care, then nothing will actually happen. Thanks for the story on changing culture and processes in the org. Maybe we shift gears a little bit and spend a few minutes in the technical realm. You go through this process, you're adding technology bits to the pipeline. Give us a bit of a picture here. What do you think are the key types of tests that you run within the pipeline today? Maybe from there we go a little bit into what are some of the bigger challenges with what's already in place?
[00:19:35] Nick Vinson: Yeah. I mean, obviously it will have some degree of influence depending on what the tech stack of the team is, whether we're using container scanning or not. Across the board, mostly, you'll be doing some form of secret scanning, some form of static analysis of the actual code base, dependency vulnerability scanning, then, like I was saying, if a container is the build artifact, running container vulnerability scanning. Then down through to DAST running in the test environment.

Again, sometimes in each environment, we might be putting a RASP agent in there. Yeah, that's generally for the application build and deployment. For the infrastructure, we'll also be running some static analysis on the infrastructure as code. Potentially, there might be smoke tests for the environment provisioning, which might include some security. Again, it would probably depend on the team.
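To make the shape of that concrete, here is a minimal sketch (not Pearson's actual pipeline) of a gate script that runs those stages in order and fails the build if any of them flag issues. Every command name here is a placeholder for whichever secret-scanning, SAST, SCA, container-scanning and DAST tools your organization has adopted.

```python
import subprocess
import sys

# Stage names mirror the checks described above; the commands are
# placeholders, to be replaced with your organization's actual tooling.
STAGES = [
    ("secret scanning",         ["run-secret-scan", "."]),
    ("static analysis (SAST)",  ["run-sast", "--src", "."]),
    ("dependency scan (SCA)",   ["run-sca", "--manifest", "requirements.txt"]),
    ("container scan",          ["run-container-scan", "app:latest"]),
    ("dynamic analysis (DAST)", ["run-dast", "--target", "https://test.example.invalid"]),
]

def main() -> int:
    failed = []
    for label, cmd in STAGES:
        print(f"--- {label}: {' '.join(cmd)}")
        # Convention assumed here: a scanner exits non-zero when it finds
        # issues above the team's agreed severity threshold.
        if subprocess.run(cmd).returncode != 0:
            failed.append(label)
    if failed:
        print("Security gates failed: " + ", ".join(failed), file=sys.stderr)
        return 1
    print("All security gates passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```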
[00:20:24] Guy Podjarny: Some of these choices are pretty obvious. If you're using containers, you need container scanning. If not, you don't. Some of the others, like RASP for instance, are more of a choice. Can you share what drives the decision to use, or not use, for instance, RASP?
[00:20:40] Nick Vinson: There's a couple of reasons for RASP, actually, and they're wildly different depending on the environment. One potential use of RASP is, say you've got a legacy environment and a legacy app which is difficult for you to patch, because there might not be a software development team there anymore. That's when RASP can potentially help you. The way that we predominantly use it with our projects is, because it's got instrumentation built into the actual application, it's much more likely to have better accuracy, in terms of not blocking necessary functionality.

Sometimes, we treat it as almost an improved WAF. We'll still have a WAF in there at the front-end, but it just means that we can be less granular on the front-end and not risk blocking actual required functionality.
[00:21:27] Guy Podjarny: RASP has indeed held the promise of a better WAF for a good while. I guess there's always been some promise to it, and some challenges involved.
[00:21:35] Nick Vinson: There's definitely still challenges. I mean, to be honest, a lot of security tooling in general, I think, still has a long way to go in terms of maturity. Especially as there are lots of big companies now with lots of resources, yet stability and performance still leave quite a lot to be desired in some cases.
[00:21:52] Guy Podjarny: Yeah. What are some examples of challenges? I'm sure this work never ends and you're dealing with many different variants of it. Give us an example of maybe a top-of-mind challenge, one where you think, we need to figure out a better way to handle this.
[00:22:08] Nick Vinson: I think a common one is the triage and review of the scan results, and being able to understand what to do with them. That was often a traditional failing of static analysis, where you'd require a very knowledgeable, experienced application security engineer to actually go through and determine what was a false positive and what was genuinely exploitable, then turn that into a body of work for the developers to address. That's an inefficient way of doing it. You really want the results to be as easily consumable and as accurate as possible, which is something we've been working on as we've been looking at our SAST tooling and migrating to different offerings which are more developer friendly.
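As a small illustration of that triage point, the sketch below applies a crude severity-and-confidence filter to a scanner's JSON output before anything reaches a developer's backlog. The file name and result schema are hypothetical; real tools each emit their own formats, SARIF being a common one.

```python
import json

# Hypothetical findings file and schema; fields here are illustrative only.
with open("scan-results.json") as fh:
    findings = json.load(fh)

# Surface only what a developer should act on first: high-severity findings
# the tool has reasonable confidence in. Everything else goes to a review
# queue instead of straight onto the team's board.
actionable = [
    f for f in findings
    if f["severity"] in ("critical", "high") and f.get("confidence") != "low"
]

for finding in actionable:
    print(f'{finding["severity"].upper()}: {finding["rule"]} '
          f'at {finding["file"]}:{finding["line"]}')
```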
[00:22:50] Guy Podjarny: Yeah. I mean, definitely, this notion of separating the signal from the noise has been one of the key banes of existence for SAST. Alongside performance, it has maybe been the top ask.
Maybe flipping it around a little bit, what's your favorite trick, or learning, in terms of actual application? Something that, now that you know it, you think everybody should be doing in how they apply tools, how they choose tools?
[00:23:15] Nick Vinson: I think, for how they choose tools, it really just requires a good-quality evaluation, where you're able to get a good means of comparison and actually test each one. Because ultimately, until you've actually implemented it and got some data, you really don't know what the potential pitfalls might be. I think the general best tip you could have is: keep it simple. Adding complexity always adds problems. Just looking to simplify wherever possible is the best bet for security.
[00:23:42] Guy Podjarny: Yeah, that's a strong principle. I think we're going to try to squeeze in two questions, if we manage to fit them in. The first is on recognition and incentives. We talked a lot about building empathy. We talked about the tooling and empowering developers to do it. I'm curious, what do you do for celebrating success? If somebody on the team is investing, going above and beyond, what have you seen work best in terms of giving them the right positive reinforcement to keep doing that?
[00:24:15] Nick Vinson: I think the thing which has been appreciated the most, especially for job satisfaction, is developers and engineers feeling that they're making meaningful contributions and that their decisions have an impact. Giving ownership of a particular area which someone's working in and knowledgeable about, the feedback from that has been really positive. I mean, we're always looking to make sure people receive the credit and recognition for the good work they've been doing. In our weekly team meetings, we'll often have demos and we'll get people to talk about the stuff that they're really pleased with and happy with.
[00:24:50] Guy Podjarny: I love that answer. It basically comes back to something that simple, which is: you own it, and you get satisfaction from showing that you've actually produced value, just like you do in any other field. We don't incentivize building a piece of functionality; you just know you own it, you've built something good and you've created value. I love that answer. It doesn't have a lot of fanfare to it, and it's just concrete.
[00:25:14] Nick Vinson: I think something else which is also really appreciated is clearly defining what it is we're asking of our engineers. That was something which frustrated me in the past with managers: you deliver something, but it wasn't clearly defined, and you get, 'I was expecting this.' Yeah, always make sure you've got clear ACs and requirements before assigning any tasks.
[00:25:36] Guy Podjarny: Yeah, super great advice. Nick, I'd like to start a new tradition with you. I'd like to change the closing question that I've been asking in these episodes and ask you: if you think ahead five years out and imagine someone doing the job that you're doing today, what do you think would be most different about that future person's job, compared to what you're doing today?
[00:26:02] Nick Vinson: I think, or I hope at least, there'll actually be less need for dedicated security engineers. It'll be something where developers take on a lot more of the responsibility themselves and it becomes more of their day-to-day activities. Similar to the way it went for QA, with developers writing their own tests and TDD, and also for ops activities, where developers are responsible for building and maintaining their own CI/CD pipelines. I hope that's going to be a similar case for security activities. Developers and teams are going to be much more able to do their own threat modeling and their own security testing, to easily interpret those results, and to carry out those security processes, creating merge requests for security updates and doing a lot of those activities themselves.
[00:26:49] Guy Podjarny: Yeah. Well, that's awesome. I'd love to agree and see that. I do think that's the trajectory. What do you think, then, the person wearing your hat would do? Does that role go away, or what would that future clone of yours be doing?
[00:27:03] Nick Vinson: I think it might look similar to how it is with QA, where you'll still have a lead, you'll still have someone who runs that department. Instead of having manual testers integrated into every team, you'll just have a QA automation lead who's coordinating activities, making sure everything's working within a good framework. I think it'll be similar for security. You'll still have security teams, but I think the security engineers will be able to spend their time researching and also facilitating training. Just less hands-on, day-to-day work specifically in projects, making security changes.
[00:27:39] Guy Podjarny: Yeah, perfect. Well, Nick. Thanks a lot. Thanks for coming onto the show, for sharing this great insight about the journey, the learnings for it. Thanks for coming on. Thanks, everybody, for tuning in. I hope you join us for the next one.
[00:27:53] Nick Vinson: Thanks for having me, Guy.
[END OF INTERVIEW]
[00:27:58] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode.
Bye for now.
[END]