
Season 6, Episode 95

Security In Public Service With Robert Wood

Guests:
Robert Wood

How do you protect sensitive healthcare information for millions of people while at the same time keeping up with fast-paced development demands? On today’s episode of The Secure Developer, we speak with Robert Wood, who has been grappling with this question over the past year. Robert has an established career in the private cybersecurity sector, having worked at a range of startups of varying sizes, from teams as small as six people to well over a hundred. He has since been drawn to public service, and for the past six months he has been working at the Centers for Medicare & Medicaid Services (CMS) as their chief information security officer.

In our discussion, we look at the intersection between government and security to interrogate how to make modern security approaches thrive in an environment that poses unique challenges but fundamentally operates from a place of integrity and good intentions. Robert shares how he’s had to adjust to working at a government agency after his history of working in startups, like becoming accustomed to the decentralized goals in government versus the singular focus of product development at a startup. He explains how risk aversion can cause stagnation, which in turn creates its own vulnerabilities and risks, and how he would like to see this issue addressed in the future. Tuning in, you’ll hear why Robert is a big proponent of the security champions model and how CMS has been able to utilize the information system security officer (ISSO) role. Join us today for a fascinating peek behind the curtain of how CMS is run and how it has the potential to innovate!


ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community. Join us June 23rd for DevSecCon 24. It’s a free, global, vendor-neutral, community-driven conference that connects developers, security, and operations teams to learn about and enable integration of security into their development practices. Find out more at devseccon.com.

This podcast is sponsored by Snyk. Snyk is a dev-first security company, helping companies fix vulnerabilities in their open source components and containers, without slowing down development. To learn more, visit snyk.io.

On today’s episode, Guy Podjarny, founder of Snyk, talks to Robert Wood. Coming from an established career in the private cybersecurity sector across various startups, Robert felt a calling to public service last year when he joined the Centers for Medicare & Medicaid Services as their chief information security officer. He is working to make security more accessible for all end users, who include more than just security professionals. One way CMS is doing this is by piloting batCAVE to create a more unified approach to software development and cybersecurity.

[INTRODUCTION]

[0:01:46.0] Guy Podjarny: Hello everyone, welcome back to The Secure Developer. Today, we are going to dig into the intersection between government and security, and how we make agile security, or modern security approaches, work in maybe slightly more challenging surroundings, but ones with all the good intentions.

To really dig into that, we’ve got with us Rob Wood, who is the chief information security officer for The Centers for Medicare and Medicaid Services. Rob, thanks for coming on to the show.

[0:02:12.9] Robert Wood: Thanks for having me.

[0:02:14.3] Guy Podjarny: Rob, before we dig in, tell us a little bit about yourself and we’re going to dig in to into The Center for Medicare & Medicaid in a bit. What is it that you do and what was your journey into and through the world of security?

[0:02:25.2] Robert Wood: Yeah, right now I am running the security and privacy compliance organization at CMS, or the Centers for Medicare & Medicaid Services. That encompasses security, privacy, and compliance, of course, agency-wide. That includes everything from authorizing systems that are going from the development process into production, to managing data privacy sharing agreements, to of course just enterprise security operations, or more generally helping foster a culture around security resiliency, to monitoring, to reporting to other agencies, to the White House, to DHS, to managing public-private information sharing. There’s a lot of stuff that goes into it.

[0:03:09.1] Guy Podjarny: Yeah, sounds like it.

[0:03:10.2] Robert Wood: Prior to all of this, I started my career in a super small boutique consultancy where I think it was probably six people in total, myself included. I started there as an intern and then converted into a full-time employee and it was one of those companies that because it had no HR, no legal, no nothing, there is just a lot of crazy stuff that happened and the owner of the organization, effectively just said yes to anything that was cool and interesting.

We got to work on a lot of stuff that I don’t think I would have had the opportunity to work on in other places, at least not as freely. I got exposure really early on to everything from forensics, to full-on full-scope red-teaming, to some application security work; it was very heavy in network security at that particular company. I mean, we did everything from your bread-and-butter social engineering to pen testing. At one point, we were doing a physical pen test and we delivered a person inside of a Pelican case into a building, dressed up in FedEx garb, the whole nine yards, literally shipped inside.

It was a lot of fun. From there, I moved over to Cigital, which is now part of Synopsys, and that was a much bigger, more established consultancy focused on application security. There, I got a lot more exposure to, I guess, the trade of AppSec: everything from pen testing, to static code analysis, to threat modeling, to what it means to actually build an AppSec program, to developer engagement and training, to helping CISOs or AppSec directors think about the kind of framework selection they wanted to champion at their organizations and how they built security controls into developer tools, things along those lines.

[0:05:01.4] Guy Podjarny: Yeah, this is a consultancy, not audit services so much, it’s not about finding the issues but rather –

[0:05:05.1] Robert Wood: Yes, exactly.

[0:05:06.9] Guy Podjarny: Working with the leaders to establish programs.

[0:05:08.9] Robert Wood: Exactly. There was a good bit of finding issues, but it wasn’t finding issues in an audit sense, where it would be going to some regulator. It wasn’t “I need to do a pen test, so I’m going to engage some firm”; it was “I want to make my app, my bank.com app, more resilient, so I’m going to engage somebody to help.”

From there, well, I started my career at the lower tier of things, just saying yes to everything, and that’s very much been my mantra: saying yes to things so I could get more exposure and learn more. I left as a principal and had the opportunity to join a startup in the healthcare space called Nuna. They were actually working with CMS, building a data warehouse for the Medicaid program at the time, and so I was the first security engineer there and eventually got asked to build out an entire security program. That encompassed both their private sector work, their public sector work, and then of course the corporate security posture: phishing, security monitoring, things like that.

[0:06:06.2] Guy Podjarny: What was the size of their organization? Roughly?

[0:06:11.0] Robert Wood: When I started, I want to say there were around 60 to 70 people, when I left it was probably close to 150.

[0:06:18.2] Guy Podjarny: Okay, just giving us a sense of scale. Yeah.

[0:06:20.5] Robert Wood: Yup, totally. It was interesting in that case because they were supporting such a big program and working with such big enterprises just by virtue of the thing that they were building. It was interesting because I got sort of thrust into the deep end of enterprise audit and third-party risk reviews and of course, the government audit process, which I am now sitting on the other side of.

Then from there, I moved over to a security startup called SourceClear, which was purchased by Veracode during my time there. They were building a software composition analysis tool, so I got a chance to both run security and be involved in the research part of that product development, which was really interesting for me. Then I did one more startup stop along my journey and ended up here.

[0:07:08.7] Guy Podjarny: Yeah, quite a journey. You went consulting, you’ve done small and now big, you’ve gone through growth, so I guess you’ve got a variety of experiences to bring to the table now that you’re in big government. This is the first time actually working for a – is it fair to call it a government organization, or –

[0:07:30.3] Robert Wood: Yeah, cms.gov, we are feds through and through.

[0:07:35.2] Guy Podjarny: Still the first experience in this type of surrounding.

[0:07:38.8] Robert Wood: Yup.

[0:07:40.4] Guy Podjarny: Let’s dig into this, again, you've sort of seen big and small, I think kind of the topic we were going to unravel a bit is this potential conflict in on how do we overcome it between maybe a natural risk averse surrounding as well as one dealing with very sensitive information, healthcare information and maybe a desire for agility and kind of fast-paced development, how do you keep things secure within a process without bogging everything to a halt.

I think to start exploring this, set us up. You already alluded to it a little bit, but at CMS, what’s the rough structure you work with from a security lens, especially with respect to the product or engineering organizations you might be supporting?

[0:08:25.9] Robert Wood: Yeah, the organizational layout at CMS is unlike anything I am personally used to. It’s a series of centers, as in the title. You have all of these standalone, almost business-unit-like components around the agency, and they’re running various programs: you’ve got a Center for Medicare, a Center for Medicaid. There’s another part of the organization that handles enterprise operations and IT, things like that. Another part might handle all the finances, and another part might handle the children’s health insurance program.

[0:09:00.5] Guy Podjarny: These are dozens of them? Hundreds of them? What’s the –

[0:09:03.5] Robert Wood: Dozens of them. It’s a decentralized agency that all rolls up, of course, into the individual who is nominated to run the agency. My group sits inside the Office of Information Technology. While we sit there, organizationally speaking, we’re responsible for security across the agency, and in some cases some of these teams have their own security teams themselves.

We would coordinate with them and help them in whatever ways they need, whether tooling or support or guidance or information sharing, whatever it is. Some teams or components, or product teams, have no security team, no IT team of their own. They rely on the centralized Office of Information Technology and my group to help them with that particular part of their day-to-day work.

Some of these parts of the agency are very operational; others are effectively product development organizations in and of themselves, which is really interesting. I think the thing that I’ve been adjusting to most is that in the startup world, you’re at a company and everyone’s focus at that company is towards building this one product.

You're all centered on the same set of goals: “I want to get product A out in the market, it’s got to have a good user experience, it’s got to solve problems, it’s got to scale, it’s got to be secure,” all of these things. Here, everyone is focused on the mission of CMS, but the means to the end is so different for all of these different programs. They all have different goals, different levels of technical maturity, different levels of resourcing, and so there are a lot of big enterprise dynamics at play.

[0:10:47.9] Guy Podjarny: Yeah, I can imagine. You provide services to these, I guess, effectively business units, right? These different centers. How do you interact with them? Do you offer services that are internally charged? How do you work with them?

[0:11:05.3] Robert Wood: Yeah, it’s a mix. There are a lot of services, everything from penetration testing to what’s referred to as the ATO, or authority to operate, process. That’s the final review against the NIST 800-53 framework that happens in all government agencies before a system goes out the door and is effectively promoted into production.

We have a division of strategic information that’s handling more national security related matters; they’re doing things like supply chain risk management reviews for providers around the agency. Then we also do a lot of things internally. We’re building more, I guess you could call them processes or products, but they’re really a mix of tooling and process together, aggregating vulnerability scan information, risk insights, things like that, and either providing risk reports back out to business owners or system owners, or providing reports up to DHS or HHS, which is our parent agency, things along those lines.

[0:12:12.7] Guy Podjarny: How’s the accountability split? You’re providing all these different services; is every center in charge of their own security and you’re just there to help, or are you actually in charge of that, I guess?

[0:12:25.2] Robert Wood: Yeah, that’s a really interesting dynamic. There are multiple prevailing opinions on that front. Some believe that the CISO and the CIO own all agency risk, or that they have the means of signing off on and accepting any agency risk pertinent to technology. In a way that makes sense, because we are what’s referred to as the authorizing officials for these ATO packages. But if you think about it, people engage in, I’ll call it risky behavior, every day. When you're interacting with your email, there could be risk. When you’re signing up for a new SaaS tool, there could be risk. When you’re writing code, there could be risk. Every decision that one might make has the potential to introduce or manage risk in some way, shape, or form.

So risk is much more decentralized than the big formal ATO process suggests. There’s a cultural dynamic of risk being this formal thing: only certain things rise to the level of getting sign-off and tracking, and basically coming into my office and being signed by myself and the CIO. There’s an interesting dynamic around what risk is, and it’s typically viewed as a very negative thing, at least in this environment and, from what I understand, at other government agencies. I think that’s very justifiable, given we’re in the business of government and do lots of important things on behalf of the American people. Given the kinds of things CMS is responsible for, we have claims data on almost half the American population, which is a huge number of people, over a hundred million.

There’s big responsibility, and so the risk aversion, I think, is justifiable. But the interesting thing is that we’ve almost created an environment where, because we are so risk averse, we’re not changing, and because we’re not changing, we’re actually taking on more risk, if that makes sense.

[0:14:25.7] Guy Podjarny: Yeah, I think it actually makes a lot of sense; you can’t respond to things quickly enough. I know you’ve only been there a very short while, right?

[0:14:33.8] Robert Wood: Six months now, yeah.

[0:14:36.1] Guy Podjarny: Maybe let’s talk about things a little bit more in a future stance, versus necessarily the present. Coming in now and seeing this risk aversion, how do you approach this challenge, and what changes are you enacting or thinking of? Even before that, what do you think is the right way to approach the balance between this justifiable risk aversion and the desire not to freeze?

[0:14:58.2] Robert Wood: Yeah, well, I think part of it means that you need to have a security process that is not so painful, not painful to the point where you don’t want to change because you have to deal with the security process. For example, if I wanted to shift to a container-based deployment strategy, some might constitute that as a big, major architectural change, and so in some organizations security might kind of swoop in, black-helicopter style, and want to pick everything apart: they want to talk to everyone, they want to look at code, they want to test things, they want to scan things.

From a developer standpoint, I can understand that sounds painful. Even though in this hypothetical circumstance moving to a container-based deployment strategy might be the optimal technical solution for what I’m building, having to deal with all of that sounds really painful, so I don’t want to deal with it, and so I’m just not going to do it. I’m going to avoid improving because of the pain that’s going to follow.

I think the first big part is lowering the burden, and then maybe measuring performance, or publishing metrics around people’s ability to change or react to security or performance telemetry, things along those lines. You're effectively trying to invert the decision calculus that people are going through.

[0:16:21.9] Guy Podjarny: What are some concrete tricks, I guess, to achieve that, to invert it? Maybe describe the two scenarios, right? I have an app, I want to use containers; what happens to me in the pre and post worlds?

[0:16:37.3] Robert Wood: Yeah, a couple of things come to mind for me. We need to find ways to cut down on security process, to outright remove it where it no longer makes sense. In the world of change management, we typically focus very heavily on what things we can add when we think about security as a system. We can always add, add, add, add, and everyone who’s ever driven in LA knows that having more highways, adding more capacity to a system, does not necessarily make it better.

You can actually decrease the throughput of a system by adding more capacity to it, which sounds counterintuitive but has been proven time and time again. I think security teams looking introspectively at the things that they don’t need to be doing, the things that are not actively moving the needle, and cutting away those things or consolidating them, making them simpler, whatever that looks like, is a really important first step.

Another big step is trying to find a way to force more accountability and ownership over risk. What I mean by that is, if I’m working on a new product and I want to get it out the door, my mindset might be that somebody else is basically going to have egg on their face if this goes wrong, not me, not my team.

Then I may be a little bit more flippant. But if I’m taking more active ownership over the fact that, “Hey, I want to move from traditional servers to containers and have everything bundled and running on Kubernetes,” or whatever my shiny new idea is for improving my product, then I’ve done the research, I can lay it all out, I have data to back up the fact that this is a well-reasoned, well-thought-out plan, and I put it out there.

I’m a big proponent of making sure that decisions are localized to where they’re going to be most effectively made. Who is going to have the best data to make the decision? Effectively, you're almost transferring the risk down to the individuals or the teams that are going to be best suited to make it.

That, I think, has to come from some level of leadership, where they are comfortable decentralizing or delegating the authority to accept risk, or to take more chances and embrace opportunity, opportunity being the inverse of risk. It’s a combination; you almost need both sides to meet in the middle: teams being willing to embrace that role and leaders being willing to give up that role. That is something that we are actively trying to do right now. For example, there is one of the NIST controls centered around doing something called a security impact analysis.

Traditionally, a security impact analysis has been thought of as a lightweight controls assessment. If I want to make “a major change” to a system, I might have to basically do a mini-audit every time. If you think about that in the context of a continuous deployment operating model, where you are trying to deploy even once a day, doing a mini controls assessment every single time you make a change, and making sure it’s all documented and signed off on, is painful.

Ideally, if I have already combined these two concepts as they apply to the SIA, the security impact analysis, then it is worth thinking about whether that is actually moving the needle on risk. Can we boil it down to what its essence really is? Whether it’s doing a very quick, time-boxed threat model on what the change is going to be, making sure that you are de-risking the change through good unit test coverage, and maybe security linting or software composition analysis, you have a certain set of security activities that are adequately de-risking the changes you’re making to your product.

Then separately, I don’t need to look at your SIA, me being the CISO. I have no idea what it is you’re changing. I am in no way equipped to sign off on or make a good decision about the change that you’re making to your product. You, being the product owners and developers, and maybe a security person who’s working with you, should all be making the decision on whether this change you're making, or proposing to make, is the right move.

You should have the backing, the resources, the data to support you making that decision and then I am just going to back you up.

[0:21:00.6] Guy Podjarny: Yeah, I think that makes a lot of sense, and you are right to say that it requires both sides. This is the central entity, your group in this context, relinquishing some control, but you can’t just drop it on the ground. Somebody else needs to pick it up, so the centers or the business units pick it up and say, “Okay, the ball is in our court,” but then they can choose whether they actually apply that process or apply some reduced variant of it.

Who within the centers? You mentioned “or a security person.” In your view of how this gets rolled out, does it rely on local security experts residing within the groups? Is this more of a security champion style, or is it people with an actual security title? How do you see the expertise required to take on such responsibility?

[0:21:49.6] Robert Wood: Yeah, I am a big proponent of the security champions model. Right now, there is a role referred to as an ISSO, an information system security officer. It is a pretty standard thing across the federal space, and it’s effectively the individual within a product team who has been named to have security responsibilities for a product area or a system. “System” can get used pretty broadly: sometimes a data center environment could be classified as a system, or an AWS account might be classified as a system.

A SaaS account might be classified as a system or product. In some cases here, those ISSOs are bread-and-butter security people. They know their stuff inside and out. If you want to talk threat modeling with them, you can; if you want to talk static analysis with them, you can. They can interpret pen test findings, they can roll up their sleeves. But in some cases, those roles are almost like “other duties as assigned” for an individual, and in those cases it might be a nurse or an admin of sorts, and they may not have any security experience whatsoever.

In those cases, we need to rely on other individuals with technical expertise close to the product, or support them with technical resources of our own, either through contractor support or a team, and effectively get them the help that they need. If they’re operating effectively flying blind, without any kind of security expertise, that makes their job way harder, and understandably they’re going to be nervous about making changes, or about putting their name on a change that says “I’m okay with this” as the named ISSO, even though they may not understand anything that is going into it.

They may feel like they’re putting their neck on the line for something they don’t understand, which I deeply empathize with. I think in the ideal case, you have a combination of the ISSO acting more like a product security architect, and then you have more of a security champion relationship with engineers who actually reside on the product teams, those who are actually contributing code and can be advocates for security.

You have a tight relationship between the ISSO and a champion inside of a product team, and, when and where it makes sense, they coordinate back with the central office, with the mothership so to speak.

[0:24:10.7] Guy Podjarny: To get support, yeah.

[0:24:11.8] Robert Wood: Receiving new guidance or support or funding or tooling, things like that.

[0:24:16.8] Guy Podjarny: Yeah, I really like this concept. I hadn’t heard of an information system security officer, this ISSO, but I like it. The thought that comes to mind is the GM model. In most organizations, a GM is a very big thing; you’re a general manager of something big. But in the AWS model, for example, it can be a very small thing.

You can be a GM of something very small, and you grow in seniority, I guess, or demonstrated competence, as the role evolves and the scope of your responsibility grows. It is very clear ownership, even if it’s not a clear skillset, to say it is your responsibility to be the security officer of whatever system that is, big or small. Then, I guess, the rest of the organization needs to be set up to help you actually accomplish this successfully, but it is very clear ownership, which has a lot of appeal to it, and a flavor of very modern methodologies. If that were multiplied within the centers, into smaller and smaller units, then those units could actually be quite empowered in this model.

[0:25:19.1] Robert Wood: Right, and typically, or historically, that role has effectively been almost like the compliance officer for any given system. But in some of our more mature centers, or technically mature products, where the ISSO maybe has more security experience and the product teams are in a continuous deployment sort of operating model, they’re truly building product. In those teams, I am going to use a cheesy corporate buzzword here, but you really see this cool synergy between development and operations and security.

Not to drop a buzzword on it, but it really is the embodiment of DevSecOps, in a way, where you’ve got everyone in that product bubble taking ownership over the quality of the thing that they’re bringing into the world, and then we’re just engaging with them, feeding them new stuff as it comes out, or listening to them as they deal with their own challenges. We have a couple of outlets for collecting information from the ISSO community and the security community at CMS at large.

We try to be really introspective and listen to what’s bothering people, what’s getting in their way, what’s not working for them, things like that, so we can find out what we need to streamline, what we need to cut away, and what we need to add.

[0:26:42.7] Guy Podjarny: Yeah, I think it is an interesting concept. You look at high-accountability surroundings, and hopefully we can control the amount of blame that might naturally come with them, but still, high accountability. It’s interesting to think about that model as a bit of an intermediate entity between the security champion and the potential BISO, which is by definition quite large, it could be a whole business unit, versus maybe something a bit smaller.

[0:27:09.9] Robert Wood: Yeah.

[0:27:10.3] Guy Podjarny: How do you find your allies in this context? You gave an example of [inaudible 0:27:13.2], but as you embark on a change in the organization to apply more local accountability, to relinquish control, how do you find who to work with? Do you try to instill this broadly, sweeping, or do you find pilots and practice there? What’s your approach?

[0:27:31.3] Robert Wood: I think this is very much a matter of personal preference. My approach is very much centered on a combination of the last two approaches that you brought up: experimentation with different ideas. By no stretch do I think I have all the good ideas, and I don’t think anyone has all the good ideas, so we try to create a culture of openness such that people can bring forward ideas. We can collectively sort out which of those stand out as good candidates to run, and then try them out.

You know, try them out in a very localized way, collect data, see what works; if it doesn’t work, that’s okay. But then separately, having worked in product companies myself during my time in startups, this was a big mindset change for me. When I was consulting, you kind of have this mindset of, “I’m just going to come in and find things.” It is going to get shipped over to somebody else, and it’s their problem.

You can almost recommend anything as a consultant and I’ve observed this now being out of the consulting world where –

[0:28:32.8] Guy Podjarny: No accountability. It comes back to that.

[0:28:34.6] Robert Wood: Yeah, there’s no accountability. You don’t own that pain and suffering. If you were to recommend changing the parent-site scope of your session cookie on some app, the consultant may not know that that particular setting is tied to some big enterprise architecture thing, and you’re basically putting people between a rock and a hard place, where they’re left with this report and they’re sitting there trying to argue with other people across the organization to make it happen.

When I came into my first product company role, I really had to change the way that I thought, such that if I am going to make a recommendation, I’d better be willing to roll up my sleeves and help make it happen alongside everybody else. Bringing that into CMS, to round out the question: anybody who is willing to step up, willing to share their thoughts and feelings, and wants to make security better, I seek them out as an ally in making this happen.

The whole cliché of “security is everyone’s job” is, I think, true to an extent, but security culture, or culture in general, is definitely everybody’s job. Anyone who wants to be a positive contributor to making that better, I carve out the time to listen to and work with them, hear their suggestions, offer my own back, and basically collaborate with them at a localized level. Then I bring it back into the office, with the reach that we have in our group, to try to take those recommendations and funnel them into these experiments that we might run across the agency.

It’s a combination of just having a good ground game, relationship building and listening and experimentation to figure out what’s going to work.

[0:30:21.5] Guy Podjarny: Yeah, so maybe let’s take a moment to talk about the other side – the compliance side, the auditors, or even sometimes it’s yourself with the authority to operate. How do you get the auditors you talk to to wrap their heads around, and bless, a setting in which you’ve reduced controls and given the responsibility for them to people who might not have fully practiced it, fully documented it, or be quite as mature in handling it?

[0:30:50.0] Robert Wood: That is a nut that we have not yet cracked, admittedly. My hypothesis at the moment is that historically, everything has centered around controls assessments, and so if you have gaps in your controls, that is a bad thing, and ergo you have risks, and ergo it might not make sense to give you an ATO and let your product out into the world to do what it is meant to do.

My hypothesis is that we need to shift away from focusing so much on controls, because controls cover everything from session timeouts, to paperwork, to the way you manage personnel. Anyone who has been through any kind of SOC 2 audit or ISO 27001 audit, or PCI or HIPAA, knows that controls span the whole organization. So not all controls are the same. Encrypting your data at rest is not the same as having antivirus installed.

What if my organization is running on Chromebooks? Does antivirus make sense? Probably not. If I have chosen to adopt a continuous, passwordless, biometric authentication scheme – where I am using Face ID or YubiKeys or something like that – then a strong-passwords control doesn’t make sense. I think instead, we need to shift towards measuring and reporting on positive security activities, with the understanding that positive security activities result in resilient security outcomes.

It might not just be security activities. It might be the development culture, it might be good change management culture – things along those lines intersect with security in useful ways. It’s about blending which controls actually matter most with whether your team is exhibiting positive security activities, such that the more of those you do – you’re doing software composition analysis, you’re doing static analysis, you run pen tests every so often, you engage with your developers –

You have developer-led threat modeling, you have a good change management process, you’re practicing infrastructure as code – wow, your product is probably not going to be all that bad, versus somebody who might have all of their controls squared away but is still manually changing things in production. They’ve got humans in production accounts messing around with configuration, and they’re racking and stacking more servers to scale things. That latter system probably has a lot more risk, and so I would like to change things such that we are authorizing based on risk, where risk is derived from some measure of which of the controls that matter most are present and which security activities you are doing, such that we can infer positive outcomes, if that makes sense.
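Robert doesn’t describe a concrete scoring mechanism, but the idea of deriving risk from observed activities rather than control checklists can be sketched roughly. Everything below – the activity names, weights, and penalties – is an illustrative assumption, not anything CMS actually uses.

```python
# Hypothetical sketch of activity-based risk scoring. The activities,
# weights, and penalties are illustrative assumptions only.
from dataclasses import dataclass, field

# Weight each positive security activity by how strongly it suggests
# resilient security outcomes.
ACTIVITY_WEIGHTS = {
    "software_composition_analysis": 2.0,
    "static_analysis": 2.0,
    "periodic_pen_tests": 1.5,
    "developer_led_threat_modeling": 2.5,
    "change_management_process": 1.5,
    "infrastructure_as_code": 2.5,
}

# Negative signals, e.g. humans hand-editing production configuration.
PENALTY_WEIGHTS = {
    "manual_production_changes": 3.0,
    "manual_server_scaling": 1.5,
}

@dataclass
class SystemProfile:
    activities: set = field(default_factory=set)
    penalties: set = field(default_factory=set)

def risk_score(profile: SystemProfile) -> float:
    """Lower is better: start from the total possible activity credit,
    subtract credit earned, then add penalties for risky manual practices."""
    max_credit = sum(ACTIVITY_WEIGHTS.values())
    earned = sum(w for a, w in ACTIVITY_WEIGHTS.items() if a in profile.activities)
    penalty = sum(w for p, w in PENALTY_WEIGHTS.items() if p in profile.penalties)
    return (max_credit - earned) + penalty

# A team doing modern practices vs. one with "controls squared away"
# but still doing manual production operations.
modern = SystemProfile(activities=set(ACTIVITY_WEIGHTS))
manual = SystemProfile(penalties={"manual_production_changes", "manual_server_scaling"})

print(risk_score(modern))  # 0.0 – full credit, no penalties
print(risk_score(manual))  # 16.5 – no credit earned, plus penalties
```

The point of the sketch is the inversion Robert describes: the authorization signal comes from continuously observable practices, not from a once-a-year controls checklist.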

[0:33:41.9] Guy Podjarny: It makes a lot of sense and I like the approach. I guess, is it now a work in progress to get...

[0:33:47.7] Robert Wood: Total work in progress.

[0:33:48.7] Guy Podjarny: ...auditors on board to accept it? Yeah, so it’s that direction, to go from black and white to how we’re practicing. It is almost like auditing your progress versus the local outcome.

[0:34:00.2] Robert Wood: Exactly, because right now the ATO process – and this is per NIST – is that you go through an initial ATO where you audit against 100% of your controls, then every year you do a third of those controls, and every three years you do the full ATO again. So for teams that are changing a lot more rapidly than once a year, that are practicing more agile development as opposed to traditional waterfall development –

An annual review just doesn’t really cut it. It doesn’t really help; it gets in the way more than it helps. So we need to find a way, I think, to measure the things that are going to protect the system year-round, continuously – infrastructure as code or other good security practices. Those are the things that give us a sense of good risk management year-round and continuously, as opposed to having an auditor look at a set of controls and paperwork once a year.

[0:34:56.2] Guy Podjarny: Yeah, that makes sense. Rob, thanks for sharing lots of good nuggets here, all revolving around this notion of distributing security responsibility and all the adapting that needs to happen around it. Before I let you go, one last question. All of this is the past and the present; let’s cast our eyes to the future, a little bit further out.

If you took out your crystal ball and thought about someone doing your job in five years’ time, what would you say would be most different about their reality?

[0:35:30.5] Robert Wood: Yeah, I think it’s more and more likely that the job of a CISO in the future is more akin to managing a data platform. Security telemetry is only going to increase; we’re going to get it more frequently, more like streaming data as opposed to these point-in-time snapshots. It’s almost like a chief data officer crafting intelligence out of this mass of data that we’re getting on an ongoing basis.

If I think about security as a big data platform, with all of these things dumping into a data lake or data warehouse or whatever your setup is, then I – or the CISO of the future – am able to craft intelligence out of all of that and share it back out: producing intelligence products for the people who need to be accountable, closer to the point where they need to make a decision. That, I think, is a game changer for instilling a sense of ownership and accountability in the way one might manage risk.

I think that brings in the need for data science talent in this field – and not just for building machine learning models to triage SOC alerts or find malware, things like that, but for really making sense of this mass of data that we sit on. Typically, all of these security activities stovepipe themselves. They operate as standalone things, or they just generate a report. They generate insights, but they don’t generate data that can then be connected to something else to generate even more valuable insights.

It’s almost like thinking about security as a platform, or security in a network-effect sort of way. That’s my hunch. I hope that’s where we’re going, or at least some flavor of it, because I really believe there is a lot of power in taking that approach. Our field hasn’t really embraced the big data revolution, so to speak, like other fields have – marketing has got it on lockdown – but we really haven’t yet, and if and when we do, I think there is a lot of opportunity in there for us.

[0:37:45.1] Guy Podjarny: Yeah, I fully agree – security is naturally invisible and you don’t really know what risk you’re taking on, so shining a spotlight there through smart data management makes a lot of sense. We’re out of time here. Rob, thanks a lot for coming onto the show and for sharing your insights.

[0:38:02.8] Robert Wood: Of course, yeah it was my pleasure and thank you for having me.

[0:38:05.8] Guy Podjarny: Thanks everybody for tuning in and I hope you join us for the next one.

[END OF INTERVIEW]

[0:38:13.9] ANNOUNCER: Thanks for listening to The Secure Developer. That is all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you’d like to be a guest on the show or get involved with the community, you can also find us on Twitter at @devseccon. Don’t forget to leave us a review on iTunes if you enjoyed today’s episode. Bye for now.

[END]