
Season 5, Episode 63

Container Security, Microservices, And Chaos Engineering With Kelly Shortridge

Guests:

Kelly Shortridge


On today’s episode, Guy Podjarny talks to Kelly Shortridge about security, microservices, and chaos engineering. Kelly is currently VP of Product Strategy at Capsule8, following product roles at SecurityScorecard and BAE Systems Applied Intelligence, as well as co-founding IperLane, a security startup which was acquired. Kelly is also known for presenting at international technology conferences, on topics ranging from behavioral economics in infosec to chaos security engineering. In this episode, Kelly explains what product strategy and management actually mean, and goes into the relationships and tensions between dev, ops, and security and how they have changed. We also discuss container security and how it differs from other endpoint security, as well as the difference between container security and microservices security. Kelly believes that we are overlooking a lot of the benefits of microservices, as well as the applications of chaos engineering in security. Tune in to find out what changes Kelly sees happening in the industry, and hear what advice she has for teams looking to level up their security!


[0:01:19.1] Guy Podjarny: Hello everyone, welcome back to The Secure Developer. Today we have with us a guest that I’ve been sort of chasing for a while now, and I’m really excited to have her on the show. We have Kelly Shortridge with us. She’s the VP of Product Management and Strategy at Capsule8, and also a really kind of brilliant analyst, I think, of the security industry. So welcome to the show, Kelly. Thanks for coming on.

[0:01:37.6] Kelly Shortridge: Thank you so much for having me, and the very kind praise.

[0:01:40.7] Guy Podjarny: I just sort of say it like I see it, you know? I really love your writing. As we dig into it, we’re going to explore a bunch of these things you’ve written about and want to dig into. But first, tell us a bit about what it is you do and how you got into security in the first place?

[0:01:54.7] Kelly Shortridge: Yeah, my background is a little strange for the security industry. I actually started out my career in investment banking, so I was an M&A banker, covering information security as well as data analytics. With the overconfidence of youth, I decided to start my own startup straight after being an investment banking analyst, and founded the startup that was later acquired by one of the very big security companies today. Then from there, I pursued product roles at BAE Systems Applied Intelligence, SecurityScorecard, and now I’m VP at Capsule8.

[0:02:25.8] Guy Podjarny: Very cool, what does it mean, kind of running product management and strategy?

[0:02:30.6] Kelly Shortridge: Yeah, nobody knows what it means, but it gets people going, right? I think product strategy is especially nebulous. But I think it’s ultimately about understanding the market and understanding user and customer problems in a way that helps you prioritize how you need to build the product – like what features are going to matter – and also how to communicate the problem to the market, and how you can kind of level up the community’s understanding of tangential problem areas.

[0:02:54.8] Guy Podjarny: Again, a little bit of context maybe for the listeners – can you tell us a little bit about Capsule8, and I know before that you were at SecurityScorecard, just to understand a bit of context about where you were working more deeply on one security problem or the other.

[0:03:08.5] Kelly Shortridge: Yeah, they’re very disparate problems. SecurityScorecard deals with third-party risk management, so producing the famous letter grades based on open source analysis of different security metrics. Then Capsule8 does enterprise and [inaudible 0:03:23] structure protection, detection, and resilience, including things like automated response. Very different ends of the security spectrum. One’s more like compliance, the other is, you know, more of this modern sort of protection, looking ahead into the future. It’s been really interesting to see the difference between the different user and buyer personas, and how people think about those problem areas differently. Both have slightly played into the research that I do, somewhat accidentally, in my spare time.

Looking at things like the behavioral economics of Infosec, looking at just how we communicate different Infosec concepts. You can see how some of my day job is driven by what I pursue on nights and weekends.

[0:04:06.1] Guy Podjarny: Yeah. I guess that’s a good sort of tee-up, maybe, for starting a conversation a bit about DevOps. You know, you’re talking about working on sort of advanced protection of Linux systems, and I have to imagine that really traverses the worlds of ops and sort of infrastructure.

You’ve written a fair bit about that intersection of DevOps and security. We’re going to explore how those two worlds interact, but at a high level, how do you see them? Are they getting closer or are they getting further apart? How do you see the interaction between those two evolving?

[0:04:41.2] Kelly Shortridge: The way I like to characterize them today is they’re frenemies. There’s kind of a deep-seated dislike between the two groups, but they know that they have to deal with each other, which is not a particularly healthy dynamic. I think it’s inevitable that they’re going to have to partner more, and ideally it would not be an antagonistic relationship, which is some of what I’ve discussed. I think we’re going to see that, especially now with the acceleration of the fabled digital transformation timelines, because a lot of the goals are similar. Frankly, Infosec should have some of the same goals as DevOps, which are largely to ensure that operations stay stable and performant in order to enable the business and help it make money.

It’s kind of a basic thing, right? If Infosec’s against that, the question is, what is Infosec for? And can that survive as technology is increasingly the driver, or the most direct driver, of revenue for companies?

[0:05:37.7] Guy Podjarny: Yeah, for sure, it’s interesting. I love the definition of frenemies for it. Do you think that changed – like, was it at any point sort of security versus dev, and now it’s security versus dev and ops?

[0:05:48.8] Kelly Shortridge: That’s a good question. I feel like security and dev, it’s been an increasingly fraught relationship. I think if we go way back, both were covered by the sysadmin label back in the 90s, and it’s just interesting how they deviated from each other. I think probably what exacerbated things is it used to almost be like this horrible frenemy triangle of ops, dev, and security, where they all kind of disliked each other. But now that dev and ops have more of an alliance and they’re more collaborative, maybe now security is feeling extra threatened. I’m not totally sure of the reason, but it does seem like tensions are very high.

[0:06:24.8] Guy Podjarny: Yeah, well, I guess I relate to this being a challenge, and also of course this kind of must be solved, right? For us to progress. Let’s walk down that lane a little bit of sort of the DevOps and security themes. You’re a bit of a fan of controversial titles, and I know one of your writings, not long ago, was around container security. The article’s title said that it’s cool, but nobody knows what it means. That really begs the question: what does container security mean?

[0:06:51.6] Kelly Shortridge: Yeah, that’s a fantastic question. I think it means different things to different people, which is part of my struggle with the label, but I think ultimately it’s about ensuring containerized environments are resilient to security failure. I think that can take a few different forms. It can obviously take the form of, we need to make sure that the images being used don’t have any particularly bad vulnerabilities in them, or it can be more like what Capsule8 does, where we’re trying to make sure that as containers are running, they’re not being compromised while they’re providing services in production.

But I think ultimately, if you anchor it to that – okay, these containers need to be resilient to security failure – I think that’s a great grounding principle, and I think it also eliminates a lot of the very strange definitions or products people try to tie to it.

[0:07:35.5] Guy Podjarny: I understand the high-level statement, but is there a risk of it being too broad? Basically, “the resilience of X to security failures” is a pretty broad statement, no? You’d be able to say mobile device, you’d be able to say VM, you’d be able to say desktop – is that not the case? Or is there something that you’d still say is different about container security versus just any endpoint security?

[0:08:00.4] Kelly Shortridge: Yeah, I think there are definitely differences. One is that there’s a much lower tolerance by, again, more of the ops team for what kind of failure can manifest. So I think they’re, in some ways, more tolerant of security failure than they are of performance failure. Certainly on the endpoint side, there are things that can work on laptops – I think everyone, including myself, has experienced antivirus programs mandated by your employer freezing up your laptop, and it’s a huge pain. But the ops team is maybe going to strangle the security team if one of the security tools does that in production, right?

I think it’s kind of similar even if we look – and this is not a paid pitch, just for your company – at the vulnerability scanning side of containers as well. I think it’s important to acknowledge the key difference, which is that you’re not having to scan across a bunch of really disparate monoliths. Ideally, you have some sort of base image, and an image repository for those base images, that have been very well-vetted. I think that’s maybe a nuanced difference from the old paradigm in application security, but I think it is an important one. I think, crucially, we need to start having people think about containers and VMs in different ways, because that maybe is where we need more specificity. I get tired of people conflating the two and thinking their security needs are the same.

I definitely think, like – resilient to security failure, right, that’s a very broad thing. I also think, if anything, the container security label has been too narrow, mostly around vulnerability scanning – specifically scanning, like, a code repository, sometimes CI/CD pipelines. But there’s definitely a risk of it being too broad, where people think they can just copy/paste what’s worked, like you said, on mobile or laptops over to containers, and ops will kill them.
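To make the vetted base image idea mentioned above a little more concrete, here is a minimal, hypothetical sketch: a check that a Dockerfile only builds from images on an approved internal list. The registry name and approved tags are illustrative assumptions, not anything from the conversation.

```python
# Hypothetical sketch: flag Dockerfile FROM lines that don't use an approved,
# well-vetted base image. Registry and image names are made up for illustration.

APPROVED_BASE_IMAGES = {
    "registry.internal/base/python:3.11-slim",
    "registry.internal/base/debian:12",
}

def unapproved_bases(dockerfile_text: str) -> list[str]:
    """Return any FROM images that are not on the approved list."""
    bad = []
    for line in dockerfile_text.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("FROM "):
            image = stripped.split()[1]
            if image not in APPROVED_BASE_IMAGES:
                bad.append(image)
    return bad

example = """
FROM registry.internal/base/python:3.11-slim
RUN pip install flask
"""
print(unapproved_bases(example))   # [] — this Dockerfile uses an approved base
```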

[0:09:45.7] Guy Podjarny: Yeah. Interesting. Makes a lot of sense. Basically, one aspect of it – which I imagine containers share with other servers – is, you know, you have to think about them in a different way than, sort of, the patch management of a desktop, because of tolerance and also maybe the willingness. You’re saying that the performance overhead you’re allowed, or sort of whatever error tolerance, in containers is lower – do you perceive that it’s lower than VMs as well? Do you think the sensitivity to performance – I get the delta between your desktop or your laptop and the server, but do you think there’s also a delta between, like, container performance overhead tolerance versus a VM or a bare metal machine?

[0:10:27.2] Kelly Shortridge: I think there’s a slightly bigger one. For one, going back to your point about the general kind of performance tolerance of containers: they make money, and that’s a key difference between them and just standard endpoints. As far as VMs and vanilla sort of bare metal servers, the key difference with containers, I think, is that it’s almost like a conceptual shift in the way that companies operate, and I don’t think you necessarily have to have containers to do this.

You can potentially use VMs. There’s just a much higher expectation for uptime at this point, and I do think, realistically, that containers enable a lot of that scalability, and, frankly, the modularity they allow just enables a lot more speed. So you have this crazy, intense speed from the 2002 perspective. Again, no ops team is going to be willing to sacrifice uptime at this point with some sort of security tool. I think, more than changing that dynamic, containers just solidified it. I think it was still there before, but now the tolerance is even lower.

There are also slightly more technical things, where, like, containers are sharing host resources; with VMs, you still have [inaudible 0:11:32] stack controls, and there are nuances there as well.

[0:11:35.0] Guy Podjarny: Yeah, of course, the technology stack itself. Okay, it makes sense. I mean, we’re sort of unravelling all sorts of challenges with container security. First of all, more or less tactically, there are technical differences; then there’s probably the sensitivity, and – I love that insight into DevOps practices. If you’re using containers, you’re probably much more mindful of sort of uptime, boot time, all of those elements. The bar for a security tool is higher. And then you have the development practices of, like, who is dictating what’s inside the container and all of that, which, again, requires slightly different controls than the VMs.

If all of that wasn’t hard enough, you know, containers oftentimes kind of live in a microservices environment. Is it the same to say container security and to say microservices security? What are your thoughts about microservices security?

[0:12:23.0] Kelly Shortridge: I actually published an article very recently about this. As far as what security needs to understand, I think thinking about microservices as containers and APIs is a good enough simplification. I don’t think microservices directly equals containers, though obviously containers are a key component for the most part. I would say in general, between the two, as we discussed, the industry’s level of understanding on containers is still pretty poor, especially threat-modeling them. I actually think there’s an opportunity for microservices architecture in general to be better understood, particularly by the veterans of the Infosec industry, because they were probably around for the service-oriented architecture days, and frankly microservices and SOA are conceptually similar.

I don’t think it’s too much of a leap. It’s just, what I found – like I was talking about in the blog post – is there’s still a huge fear around it. It’s kind of strange, because SOA didn’t cause – I mean, I wasn’t there, to be perfectly honest, and maybe it did, but from what I can glean, SOA didn’t cause quite the same level of panic, and I’m not totally sure why. What I’ve found is it seems like a lot of the Infosec industry, and obviously not everyone, is still thinking in that kind of on-prem monolith model, and they see microservices as just copying/pasting monoliths, with all of the equivalent challenges, over a thousand instances and then making them public.

Through that lens, it makes sense why you would panic, but that’s not really the reality, and that’s not really the threat model or how it works. That’s one of the challenges, and I certainly think there are also overlooked benefits to microservices, too.

[0:13:55.5] Guy Podjarny: Overlooked benefits to security?

[0:13:58.1] Kelly Shortridge: Yeah, definitely. Standardization, for example. Security could be partnering with their engineering colleagues to ensure any new deployments adhere to a standard and use the standardized APIs, images, whatever else, instead of having to conduct a bunch of different security reviews on a ton of different resources and components.

You can actually, as a security team, streamline your work to really just ensure the base image or design is solid. You would think that security would love the idea of less work, but again, I don’t know if it’s a lack of awareness or what. A lack of control. I’m not sure what it is, but I think standardization is one of the key things in microservices that, if anything, makes security’s job easier. And I would argue that a lot of the complexity that’s added with a microservices architecture – because obviously it does have downsides – is much worse on the engineering and operations side than it is on the security side.

[0:14:48.8] Guy Podjarny: Interesting. I guess maybe let’s dig a little bit into these benefits – I like to find those kinds of opportunities, the positive gems in there. One is standardization: these services run in a consistent way because they’re deployed in a consistent way, managed in a consistent way, so we have that standardized. I guess, is it fair to say that also, they communicate via better-defined APIs? You have a more defined scope.

One of the challenges that I’m often puzzled by, when I think about microservices, is really, what’s the whole? What have you seen as best practices around – well, I’ve got these 20 microservices, not to say 2,000, and a single request or a single kind of action in my system might traverse some number of those until it sort of completes and gets back to the client, or not.

What’s the threat-model approach, or how would you tackle understanding the whole, you know? What’s the complete view, to be able to ascertain whether it’s secure, right? So that we don’t just let three systems that work well independently expose something in between.

[0:15:48.9] Kelly Shortridge: Yes, I think that system-level model is super important. I would argue also, it’s not that different from the prior paradigm of what we had to do. Where I’d argue it’s better is that, instead of having to tease apart, like, a monolith and figure out – this is where we identify this business logic, how does it connect to this – generally with microservices, you’re going to have much better and very tightly defined business logic that you can threat model very directly. You can even start to say, okay, we have this pattern and standard for, let’s say, how you authenticate whatever, and the developers can basically just copy/paste that instead of building their own each time.

Again, that kind of modularization is a key component of it. I think also, being able to threat model and see very distinctly – it’s like a bite-sized chunk. Okay, we know that this microservice is just talking to this other microservice, and that it uses this exact data, versus trying to tease apart this kind of giant chunk and figure out all the different components. It’s actually much easier for the security team than it used to be –

Again, I don’t think the industry is really thinking of it that way, and it can be somewhat complex if you have to threat model literally every microservice. But you can start – I’m actually writing about this in one of the upcoming books I’m writing – to create much better kinds of automated security workflows, where you can, as a developer, go through a checklist of, like, okay, will I be consuming sensitive customer data, yes or no? And basically work down the list and figure out, does this microservice actually need a security review?

Obviously, you have to have good feedback loops to figure out whether that process was successful; if they end up having an incident, then those privileges of going through the checklist themselves get revoked. You can actually offload a lot of the work that security does today onto the development team using microservices, because of the narrow scope each microservice has. It becomes much easier just to understand what the security implications are.
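A minimal, hypothetical sketch of the checklist-driven triage Kelly describes: a developer answers a few yes/no questions about a new microservice, and the answers decide whether it gets routed to a full security review. The questions, weights, and threshold below are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical checklist triage: "yes" answers to risk questions add weight;
# at or above the threshold, the microservice is routed to a security review.

QUESTIONS = {
    "Will this service consume sensitive customer data?": 3,
    "Is this service exposed outside the cluster?": 3,
    "Does it deviate from the vetted base image or auth pattern?": 2,
    "Does it talk to more than one downstream service?": 1,
}

def needs_security_review(answers: dict[str, bool], threshold: int = 3) -> bool:
    """Sum the weights of 'yes' answers; at or above the threshold, route to security."""
    score = sum(QUESTIONS[q] for q, yes in answers.items() if yes)
    return score >= threshold

# Example: sensitive data plus external exposure clearly crosses the threshold.
answers = {
    "Will this service consume sensitive customer data?": True,
    "Is this service exposed outside the cluster?": True,
    "Does it deviate from the vetted base image or auth pattern?": False,
    "Does it talk to more than one downstream service?": False,
}
print("Security review required:", needs_security_review(answers))   # True
```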

[0:17:40.2] Guy Podjarny: I love that approach. I think you really neutralized the concern I raised – for starters, it was there all along, just in code instead of in microservices, except before you didn’t have the visibility to test each of these different components, and now you do. I guess there’s probably one scenario, way down the list, of what if one of the microservices was compromised, and then that goes off and is able to reach others. But, again, you would have had one part of the code that is compromised, and before you didn’t have any isolation. Now you might have at least some.

[0:18:10.8] Kelly Shortridge: Definitely, and I would argue, again, with the container-plus-API model, if you know exactly what the business logic should be, we can much more tightly allow-list the APIs, to know it can literally only talk to this one other service. Again, if you compromise the container in some way, you can laterally move and stuff like that, but ideally that is where we have at least some sort of monitoring, and then it gets into the D.I.E. model I talked about in my security chaos engineering talk from Black Hat, where ideally you have the ability to just shut down the container if you have any sort of modification to the container itself. So if you use immutable containers, it becomes harder for the attackers to laterally move anyway. So again, it opens up these cool and frankly easier opportunities for security. It doesn’t mean that there aren’t downsides, honestly, but I think, again, if anything, security benefits more, in a weird way, than operations sometimes.
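A minimal sketch of the API allow-listing idea: each service declares exactly which peers may call it, and anything else is rejected. The service names and the shape of the check are hypothetical; in practice this is usually enforced by a service mesh or network policy rather than application code.

```python
# Hypothetical allow list: only explicitly named callers may reach a service.
ALLOWED_CALLERS = {
    "payments": {"checkout"},       # only checkout may call payments
    "checkout": {"storefront"},     # only storefront may call checkout
}

def is_call_allowed(callee: str, caller: str) -> bool:
    """Return True only if the caller is on the callee's explicit allow list."""
    return caller in ALLOWED_CALLERS.get(callee, set())

assert is_call_allowed("payments", "checkout") is True
assert is_call_allowed("payments", "storefront") is False   # lateral movement blocked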

[0:19:02.6] Guy Podjarny: I love the argument. I think you are making a really strong case for how microservices – they seem scary because they change things, but fundamentally they set you up better for security. So I want to move on a little bit to that comment on chaos engineering. Definitely one of my favorite practices, I think, in the world of DevOps: chaos engineering, shutting down systems yourself rather than waiting for them to be shut down on their own.

In the ops world, I think it is one of the primary tools to be able to deal with the unknown. It introduces risk – they’re even called attacks, oftentimes, within these tools when they run – and you talk about how to use chaos engineering for security. You just gave us a bit of a taste of it, but can you walk us through how we security test, or apply chaos engineering to security?

[0:19:43.6] Kelly Shortridge: I think there are a ton of ways, and as I alluded to earlier – please do check it out when it’s available this year – I am literally writing the book on this, including a part on security chaos engineering. So obviously I think it’s an important topic. I think the key thing is, if anything, it is almost a philosophical shift that is necessary for security, which is just embracing that failure is inevitable and that it can really be powerful.

I think one of the problems we see in security is that feedback loops aren't embraced. In fact a lot of times we shy away from feedback because, if we know about an incident, then it’s an incident, right? And if we view – and I think this is a huge problem in security in general like – if we view everything through the lens of, “Oh a human caused this, a human did something wrong,” then of course we are going to try to avoid feedback, and also other people won’t try to give us feedback. 

It is this horrible kind of paradigm we have today. Security chaos engineering, I view it as really freeing us from that paradigm. It says, “You know what? Failure is inevitable. It is okay, we just have to be prepared for it and we have to be able to be resilient through it.” That doesn’t just mean we can withstand attacks. It means we can withstand at least some of the attack and be able to minimize the impact; that’s crucial.

We can also adapt, though. We can reorganize systems around the threat. We can actually transform our systems over time, based on that feedback, to make sure that we’re evolving at the same pace as attackers – probably most of the people who are listening would agree that defense is often a lot slower than attack in being able to evolve your methods and systems and technologies.

Security chaos engineering basically says, “No, that is not how the world works. You need to ingest this feedback and learn from it, and have this really intellectually honest model where you embrace any information that failure gives you, and just view it as a learning opportunity.” There is no shame in it. Like it happens to everyone right? 

[0:21:28.8] Guy Podjarny: Yeah, absolutely, I think that is the principle there. Still, I’d love to go – maybe can you give us one or two examples of the types of attacks that you would do? The chaos engineering examples on the DevOps side are: you shut down a service, or you reduce the memory, or you sever a network connection. What are the types of attacks, or what chaos would you inflict, under the mantle of chaos security engineering, or security chaos engineering?

[0:21:53.6] Kelly Shortridge: Definitely. This is definitely the fun stuff, and there is a ton you can do here at a variety of levels. Some of the examples we talked about before, and some of my favorite ones – for instance, going back to containers: you make your container immutable. All you have to do is inject attempts to SSH into the container and then write to disk, and ideally it should crash the container. That is a good example of testing how a container reacts.
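A minimal sketch of that immutable-container experiment: attempt a write inside a running container that is supposed to be read-only, and treat a successful write as a failed control. The container name is hypothetical, and a real setup would alert on or kill the container rather than just report.

```python
# Hypothetical chaos experiment: inject a disk write into a supposedly
# read-only container and check whether the control rejects it.
import subprocess

def attempt_disk_write(container: str) -> bool:
    """Return True if the write unexpectedly succeeded (i.e., the control failed)."""
    result = subprocess.run(
        ["docker", "exec", container, "sh", "-c", "touch /tampered"],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    if attempt_disk_write("payments-svc"):
        print("FAIL: container filesystem is writable; immutability control not working")
    else:
        print("PASS: write was rejected, as expected for a read-only container")
```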

You can also do things with the APIs, like you can revoke tokens and then try to reuse the old tokens to see where they’re accepted throughout your service or your general system. You can also adopt a lot of the tests that you mentioned on the availability side, because that is where a lot of the operations focus is, since uptime is paramount on that side. So you can do lots of cool things. I would argue that some of the stuff Netflix has done makes them incredibly resilient to DDoS attacks, which is obviously a security thing. If you are testing to what extent shutting down a bunch of your systems and injecting failure into a bunch of your nodes at random affects your service delivery, that makes you incredibly resilient against any sort of DDoS. I think that is a great way to start. On the network side, creating things where, if you have an automation server like Jenkins, sometimes you can –

They generally have some sort of console script – like, can you access that with an anonymous user? Can you create an anonymous user in the first place? There are a lot of things where you can just inject various failures, depending obviously on your threat model and your priorities, and just see how the system responds, which obviously requires a certain level of visibility and telemetry collection. You should probably already be collecting that anyway.
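A minimal sketch of the token-revocation experiment Kelly mentions: revoke a token, then replay it against internal services and confirm every one rejects it. The service URLs, revocation endpoint, and token value are all hypothetical assumptions for illustration.

```python
# Hypothetical chaos experiment: after revoking a token, replay it everywhere
# and report any service that still accepts it (the list should be empty).
import requests

SERVICES = [
    "https://checkout.internal/api/orders",
    "https://payments.internal/api/charges",
]

def revoke(token: str) -> None:
    # Hypothetical revocation endpoint for whatever identity provider is in use.
    requests.post("https://auth.internal/oauth/revoke", data={"token": token}, timeout=5)

def replay_revoked_token(token: str) -> list[str]:
    """Return the services that still accept the revoked token."""
    revoke(token)
    accepted = []
    for url in SERVICES:
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=5)
        if resp.status_code != 401:
            accepted.append(url)
    return accepted

if __name__ == "__main__":
    leaks = replay_revoked_token("example-token")
    print("Services still accepting revoked token:", leaks or "none")
```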

[0:23:25.5] Guy Podjarny: Yeah, and I think it comes back to the commentary on microservices and their APIs, right? Because you can basically build those types of chaos tests for any given microservice and then see how the overall system responds, right? And when one of them fails, see how the rest of the system reacts. Yeah, I definitely love the concept, and like I said, I have always been a fan of chaos engineering. So it would be even more so when security starts embracing it, and I’m definitely looking forward to that book coming out.

[0:23:49.9] Kelly Shortridge: Yeah and it’ll make security fun right? 

[0:23:51.8] Guy Podjarny: Yeah, indeed. So I guess we are traversing down this DevOps narrative, and I guess we naturally ended up talking about technology, and then we talked about practices like microservices and chaos engineering. Great observations there. Probably a good place to land now is people. In the world of ops, many would say that DevOps is really, first and foremost, a cultural revolution, a people revolution, and then that drives these processes and technology.

One of the primary roles that’s been discussed, or that came to be, is the SRE, right? This sort of Site Reliability Engineer, an engineer that replaces maybe that past sysadmin. It is helping developers keep the system up, as opposed to someone keeping those systems up themselves. What do you think? Do you think that is something that will happen in the world of security as well? Like, if you think about a security SRE or the likes, is that a role that makes sense?

What do you think will happen to job descriptions or job scopes in security as this type of security DevOps approach gets embraced?

[0:24:52.1] Kelly Shortridge: That is a fantastic question, and I do not have a crystal ball, so your listeners should take this with a grain of salt, but I think ultimately what we’ll see is that the security role takes on a form more akin to advisory and subject matter expertise. Assume that, just like dev and ops ended up merging because you can’t have accountability separate from responsibility – otherwise you’ll have all these [inaudible 0:25:11] issues –

we need to see the same thing on the security side, which means that security has to lend their expertise to the product and development teams, but they shouldn’t be the ones fixing things and they shouldn’t be the ones actually implementing things. They should be helping threat model and thinking, from the very beginning, starting with design, about how we can build things more securely. I think it is actually really positive that we would move away from the security engineer who just manages the blinky boxes, toward someone who is frankly more strategic in nature and happens to be an expert on a variety of things.

For some people that’s really exciting. For others, obviously, that’s not necessarily their strong suit. I think we are going to see more of the advisory role. I also think SRE could end up merging a little bit with security, just because if you look at incident response processes and how they’re conducted, they largely overlap between what SREs will do, or just generally the ops side, and what security does. A lot of times, the security team has to go tag the SRE anyway during an incident, because the SRE is the one managing the piece of infrastructure that may have been compromised. So we are going to start to see a lot more blending of that role, just frankly out of efficiency, and because it is hard for a security person to properly investigate and respond to an incident in technology from which they’re really removed.

So this all comes full circle at a certain point; maybe we just go back to the sysadmin label, and we’ll be back in the late 90s.

[0:26:37.7] Guy Podjarny: Yeah, I guess to an extent I like the full circle, because it does imply more of a merging. There are a lot of comments that DevSecOps is really just DevOps – it’s DevOps done right, in sort of embedding security into it – while, as you point out, the security industry itself, and the role that people there take, need to change.

If I can maybe take you up a level a little bit: I know a lot of your work, whether it is at Capsule8 or in past roles, is maybe a little bit closer to a bunch of these activities, but the other thing that you do is survey the industry as a whole, and you have this great newsletter. A bunch of these practices or changes happen – I guess you have mentioned some that are industry-wide and some that are inside the organization. How do you think we’re adapting to this change as an industry?

Are we doing a good job, are we doing a bad job, what changes would you foresee happening in the industry? 

[0:27:33.2] Kelly Shortridge: I think we are doing a bad job. I think, in general, vendors in security do quite a bad job serving the practitioners. I would argue that a lot of the practitioners are also not thinking quite as intellectually honestly as they could be. I think part of that is we don’t benefit particularly much from this merging, in the sense that right now we are our own fiefdom, and that gets eroded.

We’d still be equally as important as far as priority – in fact, potentially more important, because if we are seen as enabling the business, we might receive a bigger budget. The thing is, we won’t be in that silo anymore, so we’ll be more in a republic rather than that fiefdom, and I can understand why a few people would be upset about that.

The other thing is, in DevOps everything is measured. You can see tangible outcomes, and you get that through metrics and telemetry. That doesn’t happen in security. And I guarantee you a lot of vendors would be terrified by the idea that they have to be measured based on their outcomes, so they come up with their own metrics that are very fancy and basically vanity metrics, to try to justify spending on them. If you look at tangible outcomes, a lot of times the evidence is pretty scarce. So I think it’s a bit of a ‘come to Jesus’ moment, in a certain respect.

If you start to go into this world where, in DevOps, it is more outcome-driven, and you have to enable the business, and you should be able to measure outcomes, and you’re trying to reduce things like time to response – security is just not there, and it is an existential threat. I think it is more of an existential threat to the vendors, obviously, than to the practitioners, and I think that is why we see a lot of pushback. I think that is why the term DevSecOps has been embraced, rather than just trying to make sure that you fit within DevOps – or are at least equally as important in this republic. Like, our name is still on there, so that is what matters, which I think is kind of silly.

That is my hot take for the podcast: I think it is a very silly thing to do and very ego-driven. It is going to be a tough road to get there. I think, honestly, we’re going to continue to see some of the best improvements on the security side come from infrastructure tools. I think we have seen that – like I mentioned, CDNs pretty much eradicated DDoS attacks, and that wasn’t strictly a security tool. If anything, that was more of a performance tool. I think we’re just going to see more of that, and that is my worry for Infosec: they’re going to be so resistant that they’re going to end up being left behind, because they are going to dig in their heels, which I don’t think anyone wants. As far as what they could be doing more of: listening. Honestly, a lot of it is listening, recognizing the commonalities, recognizing that we can’t just keep existing without measuring our progress and our efficacy. That we have to wise up, and that DevOps does want to hear expertise; they just don’t want to hear, “No, you can’t do that,” without justification – because you say, “Well, because security,” and don’t give alternatives that still enable the business.

Ultimately, businesses, for better or worse, are about making money, and if security gets in the way of that, guess who’s going to win that fight? It is going to be the group that is making money.

[0:30:27.2] Guy Podjarny: Yeah, definitely very well phrased. How would you respond – you mentioned a whole bunch of things about causes and what vendors should anticipate – if you are a consumer of security solutions? What would you change in your approach if you’re a security team and the reality is that the industry offers you a certain set of solutions? How do you adapt or change your perspective as you evaluate solutions?

[0:30:50.4] Kelly Shortridge: Honestly, I think the biggest thing is focusing more on process rather than tech. I feel like in that people-process-technology triad, people – at least in Infosec – ignore the process part the most. Ultimately, DevOps is not a set of technologies, it is a set of practices that leads to processes. I think Infosec needs to look at those processes a lot closer and start adopting them. I think we have tried to ‘technology’ our way out of bad processes. We have tried to ‘people’ our way out of bad processes, which is why you hear about the skill shortage all the time, and it hasn’t worked.

I think if we are looking to move forward as an industry, we need to take a really close look at how we are actually conducting our security processes, and start trying to build those feedback loops. Frankly, a lot of it is a cultural change. I know culture is kind of seen as ho-hum in Infosec because it’s not technical or sexy, but there are really damaging things that come from, for instance, feeling human error is the cause of breaches, and trying to resist failure rather than embrace it, because it means that your process is not going to be failure-tolerant.

If failure is the reality, then ultimately you are not tolerant of security failure, and to me that is a job very poorly done. With those kinds of assumptions, you are going to focus on technology that tries to eradicate human error, which is just an impossibility, as evidenced by basically the whole course of human history. You are going to try to actually stop attacks and things like that, and I think we all know there is no tool that can actually stop all attacks.

There was interesting research out of IDC recently, regarding the whole COVID situation, showing that companies had focused so much on investing in technology that they had never actually thought about processes – just, what if everyone has to work from home? How do we make sure we have business continuity?

I think that sort of stuff just won’t fly in a DevOps sort of world. You have to be able to be flexible and adaptable to evolving situations, and Infosec just isn’t today. 

[0:32:47.5] Guy Podjarny: Yeah, well, I think every word is gold there. Very well said. I feel like I could keep asking you questions here for a while, but I think we are running out of time. You have already given a whole bunch of advice, but I still like to ask every guest on the show before we close off: if you have a team that is looking to level up their security fu, what is the one bit of advice that you would give them? What should they start doing, or maybe stop doing?

[0:33:12.4] Kelly Shortridge: Yeah, that is a great question. It’s hard to boil it down to just one, but I would say my perennial advice is always: raise the cost of attack, and be realistic about that. So don’t focus on the Mossad-level threat. Focus on what is realistic, which is phishing and really basic sorts of attacks, and figure out what simple interventions you can use to just cut off that low-hanging fruit for attackers. Two-factor would be a great example.

Start with the basics, so think of it always as: how can we make this harder for the attacker? In some ways it is as simple as that, and if you anchor yourself to that and you start there, you won’t be over-optimizing for things like blocking 0-days or all of these side-channel attacks or whatever else – because until you’re really elite, you shouldn’t bother with that stuff for the most part.

[0:34:00.1] Guy Podjarny: Yeah, most of the time there aren’t countries after you, so.

[0:34:02.5] Kelly Shortridge: Yeah, exactly. 

[0:34:03.9] Guy Podjarny: Great advice. Kelly this has been a pleasure. Thanks a lot for coming onto the show. 

[0:34:08.1] Kelly Shortridge: Thank you so much for having me on. 

[0:34:10.4] Guy Podjarny: And thanks everybody for tuning in, and I hope you join us for the next one. 

[END OF INTERVIEW]
