Season 5, Episode 58

Advocating For The Securability Measure With Shannon Lietz

Guests:
Shannon Lietz

In episode 58 of The Secure Developer, Guy Podjarny talks to Shannon Lietz, DevSecOps leader and Director at Intuit. Shannon is a multi-award-winning leader and security innovation visionary with 20 years of experience motivating high-performance teams. Her accolades include the Scott Cook Innovation Award in 2014 for developing a new cloud security program to protect sensitive data in AWS. With a background spanning development, security, and operations at several Fortune 500 companies, she now leads a team of DevSecOps engineers at Intuit. In this episode, she talks about the future of security and the progress the industry has made in closing vulnerability gaps by, inter alia, maintaining continuous testing in production and building enough capability within teams to know a good test from a bad one. But the problem is a long way from solved, and she shares her enthusiasm for a new buzzword, "securability", and how this measure can be standardized to uplift the security industry as a whole.

Transcript

[0:01:27.9] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning in. Today, we have maybe one of the originators, the pioneers of DevSecOps with us, and really a bright security mind, in Shannon Lietz from Intuit. Thanks for coming on the show, Shannon.

[0:01:42.2] Shannon Lietz: Super excited to be here. I love this show.

[0:01:46.4] Guy Podjarny: Shannon, we have a whole bunch of topics to cover. Before we dig in, tell us a little bit about yourself. What is it you do? How did you get into security?

[0:01:53.5] Shannon Lietz: Awesome. Yeah, I've been in this industry for over 30 years and that makes me a dinosaur, as I always say. I feel the place I'm at in my journey is to really try and help the industry, take some of the lessons I've learned over that long career, and really try to make a change. My goal at this point is really to make a dent in the security problem, as a goal for my life and my career.

As part of it, I got into this basically with lots of curiosity and didn't even realize it was a mostly male journey. Nobody told me when I decided that computers were fun. I learned through lots of hard knocks, but basically this wasn't a path carved out for women. I thought, “You know what? The heck with it. I always do things that people tell me I shouldn't be doing.” I started out with computers at a really young age and eventually, learned how to do some really neat things that again, shouldn't have been done.

At the time, they called it hacking. I thought, “Well, you know what? I want to be a hacker, so cool.” Then eventually, it became illegal and I was like, “Okay, that's not a job.” My dad was horrified by the fact that this could be a problem. Eventually, it turned into actually it was a job. You just had to do it a certain way. That was the beginning. I mean, when I started in computers, nothing was really illegal per se. The Computer Fraud and Abuse Act was interesting and that shaped some of this industry.

Along the way, there were lots of trials and tribulations. Yeah, I started there and I've been a developer, so I've written code. I'm so sorry to anybody who's still maintaining my code, God forbid. Then as you look back on 30 years, you're like, "Wow, I could have done a lot of things better."

Then I got into security and I've even done ops. I always said that if I needed to make money and pay my bills that I would ops for food, and so I ops'd for food. Then eventually, I smooshed it all together and created a term that some love and some hate and whether – here we are.

[0:03:50.9] Guy Podjarny: Yeah. It definitely has become the terminology of choice – we had Rugged DevOps, we also had some variants, but it's very clear that DevSecOps is the term that emerged.

[0:04:02.0] Shannon Lietz: That's cool, because I've got a new one coming.

[0:04:06.0] Guy Podjarny: We've got some great further pioneering to air here on the show. Let's get a little bit of your company and industry experience first, so we don't completely jump around a whole bunch of things. I think right now, you are at Intuit, right? Before that, you were at ServiceNow?

[0:04:23.9] Shannon Lietz: I was. I was at that wonderful other cloud company. I like cloud companies, as they seem to be fun. I was also at Sony before that. I mean, my track record is pretty much financial. I did telco work. I mean, I've worked for about 22 companies in this period. I've been at Intuit now for almost eight years, which is the longest job I've ever had.

[0:04:44.3] Guy Podjarny: Yeah. Well, definitely changing the streak here. What is it you do at Intuit?

[0:04:47.9] Shannon Lietz: I run the red team here at Intuit. It's relatively large. I would say it's an adversary management practice. A lot of people think of red team as something that's relatively surprising. We put a lot of science behind our red team capabilities. We've really been working on moving it forward to adversary management and trying to make it so that we make this more scientific. I'm into a lot of math and science around trying to artfully measure the things that we all want to do, which is to make software more rugged and to make things more resilient, so that we can do the things we love, which is solve human problems.

[0:05:21.6] Guy Podjarny: When you talk about red team, what's the – I like to geek out a little bit about org structure.

[0:05:25.6] Shannon Lietz: Totally.

[0:05:26.2] Guy Podjarny: What does that red team get divided into?

[0:05:28.6] Shannon Lietz: I got to measure my red team recently to find out how many headcount I had. I was pretty surprised. We have about 53 people and we also just started a part of the red team in Israel. I've got four more people there that are doing red team. Actually, we've been pushing the bounds. We're applying more to application security and also, business logic issues. That's neat. I think that we're always the willing participants to emerge and try to innovate in a lot of different security spaces. I'm excited to see how that really advances us.

In my org structure, I have mixed threat intel with our red teamers. Also, we have this other group that basically runs a continuous exploit system. Essentially, we built containers to exploit all the things that we worry about, so people can feel pretty comfortable that things are going 24 by 7. Internally, yeah.

[0:06:27.5] Guy Podjarny: Are these a known set of attacks? Is it more regression-minded, or is it more almost bug bounty? Like something that –

[0:06:34.3] Shannon Lietz: Yes. I say it that way, because it's a mix of a lot of things. Anything that could cause us to have a security escape is something that we put into this engine. The way that I tell it is, if you can conceive of it and it could be automated, it should go onto our platform, and that platform basically runs it across our attack surface – our production attack surface, our internal attack surface.

It doesn't do everything yet that I'd love to see it do, but eventually my feeling is that that platform becomes really the way for us to continually level up against some of the exploitable surface. I think it's the way in which most companies are going to need to go, and I think it's the path forward: to really figure out how to do full resilience and regression testing against your attack surface, for both the things that you know and the things that you learn, and pull that information in to essentially get there before the adversaries do.
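
To make the shape of such a platform concrete, here is a minimal sketch of a continuous exploit-regression loop, assuming a catalog of automated, known-safe checks and an inventory of targets. All names (Target, ExploitCheck, file_escape_ticket) are hypothetical illustrations, not Intuit's actual system.

```python
from dataclasses import dataclass
from typing import Callable, List
import time

@dataclass
class Target:
    name: str
    endpoint: str

@dataclass
class ExploitCheck:
    check_id: str                    # CVE, internal signature, or threat-intel ID
    probe: Callable[[Target], bool]  # returns True if the exploit lands

def file_escape_ticket(target: Target, check: ExploitCheck) -> None:
    # Stand-in for opening a P0 "escape" ticket in a real tracker.
    print(f"P0 escape: {check.check_id} landed on {target.name}")

def run_continuously(checks: List[ExploitCheck], targets: List[Target],
                     interval_s: int = 3600) -> None:
    # "24 by 7": replay every automatable attack across the whole attack
    # surface on a cadence, so a fixed issue cannot silently regress.
    while True:
        for target in targets:
            for check in checks:
                if check.probe(target):
                    file_escape_ticket(target, check)
        time.sleep(interval_s)
```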

The big mission is get ahead and stay ahead of adversaries and understand your adversaries for your applications. I think that people design their software, but they don't think about what the potential anti-cases are, or thinking about – I always say that security is basically a developer's edge case.

The security edge case is really important, but a lot of times people don't have time for it. My job, in my mind, is to make it faster for people to think about the edge case that's going to give an adversary an advantage, to allow the business to do what it needs to do, and to figure out where the risks are and help mitigate them, so that we can do the things that help solve customer problems. Instead of – everybody's been talking about the road to no. I got to tell you, early in my career, I was the road to no. Everybody would route around me. It was the coolest thing. I was definitely that little picture with the little house in the middle, or the little guard shack in the middle and the gates and the snow.

I love that one, because it's always a reminder to me. I actually framed a copy for myself, so I could keep myself humble. Because the people that we now support, who subscribe to the things that we do to help them – they're coming, they're asking – and that behavioral change was something that had to start in us first, in me first, and then basically extend out to everybody that you touch and help.

I think that being meaningful in their lives, to try and help change how people think about things, is actually the journey forward for security. For me, this adversary management capability has extended into things like using AI and ML. Now my team fully codes. When I first started this with DevSecOps, I remember I had this really cool little presentation, and I framed that for myself, because we do these things to remind ourselves of where we came from.

It had a snail and a hare in it, and a little lane that I developed. I was trying to explain that this was the path to go faster in a secure way, a safer way. I'll never forget that, because I delivered it here in San Diego to the ISSA. It was a small room of about 30 or 40 people who had never heard of what DevSecOps was and they were like, "This lady's crazy." I think it's been eight years since that talk. It feels like it just flew by, and there are so many people now that you hear are starting to see more security in software, and their products and services are getting better. Is it perfect? No. Have we taken a significant dent out of the stuff that was out there at one point? I think the answer is yes.

I just saw some metrics from GitHub showing that of all the vulnerabilities that are showing up, about 20% are getting closed. That's in no small part thanks to a lot of companies out there that are providing that information to developers, so that they know about these things and they're not having to go figure it out on their own.

I mean, for the companies I've worked for where that wasn't available, developers are like, "What should I worry about?" We're like, "Oh, we just need to go get CVSS scores for you and here's a spreadsheet. Go figure it out for yourself, dude. Thanks." I think that was a serious problem, because it inhibited their ability to develop safe software, because they didn't have the time to go figure out and crunch the spreadsheets. I mean, let's all be honest. That's basically a full-time job for a security practitioner. Somebody still has to build the software, so you can do something with it. From my perspective, there's a lot that goes into this.

[0:11:00.9] Guy Podjarny: There's a bunch to unpack. I want to make sure I take you through a bunch of follow-up questions. Let me backtrack a little bit. First, I love that notion of the continuous exploit system. I've often thought about bug bounties and the like as almost continuous monitoring, or chronic monitoring, telling you if something is wrong.

I think this type of internal system makes a lot of sense to ensure that the questions you know to ask get continuously asked – at least alongside the red teaming and the creativity – so you don't regress, and that you can do that at scale. How do you maintain that? Do you take feeds in from basically the botnets out there? Is it more about fixes for problems that you have already seen in your surroundings? What would you say are the rough primary ingredients of the types of attacks that get run?

[0:11:55.1] Shannon Lietz: Oh, gosh. We take in everything. There's no data set I turn away, because honestly, there's always nuggets in everything. They always tell you, no two scanners actually scan the same things, and they never scan them the same way. I think people are really creative about how they test for security problems, so we take in any bit of data we can get. We've taken in stuff from a variety of product vendors who do scanning. We're looking at the bill-of-materials companies, all the stuff we can get from them. Anybody who is basically asserting a CVSS score, a CPE, a score of any type that would actually reflect a vulnerability or a significant risk. All of those things are useful.

To me, they're lagging indicators, however. The other thing that we take in is threat intel. We're constantly looking for vendors and providers that have information that can help us get ahead. Why not be able to find the zero day, or what about signatures that are actually written against your company specifically? Why not harvest those, use them, learn from them and then replay them against your systems? Because essentially, that's a really great way to build up your catalog of how to make yourself harder to beat from a resilience perspective. That took a lot of years to learn.

I will tell you this is not, “Hey, by the way, this is what we're going to do,” eight years ago. It was a lot of trials and tribulations and my little sign on the back wall here that basically says, “Bang your head here.” It's been banged a lot of times. I mean, hey.

[0:13:18.6] Guy Podjarny: You take all that information, and then your team – you said they code – builds it into an operational system that runs those exploits against production, or more against staging [inaudible 0:13:28.9]? How do you handle that?

[0:13:30.4] Shannon Lietz: What is production? I mean, that's really cool. We got rid of that in what, 1980? No, I'm just kidding. Production to me is everything. Nowadays, even development systems are production, right? A lot of these capabilities that are out there, they're significant in ways you wouldn't think. Pretty much at this point, if your developers are down and productivity is lacking, aren't you down, essentially?

[0:13:57.2] Guy Podjarny: Absolutely. I love the approach: all these systems are production and they're all impactful. Oftentimes, one of the concerns that comes up when you run these continuous tests against customer-facing systems, as we move more to production –

[0:14:11.5] Shannon Lietz: Production?

[0:14:12.4] Guy Podjarny: When you run it, there's always this fear of, “Hey, you're going to mess something up.” [Inaudible 0:14:16.2].

[0:14:17.1] Shannon Lietz: Don't take production down. It's the one rule, right? That one rule. Don't take production down, which is why you've got to think about everything as production. If you delineate that this system is okay, but that system is not okay, to me you miss the major principle, which is if you're going to do resilience testing, you need to be mindful of the things that you're testing. You need to test your tests, right? That's a thing.

You need to be able to build your tests in a meaningful way, not just throwing garbage at a system, but throwing something that's precision-oriented, that you're looking for a specific issue and that you're actually harvesting that issue as an escape. Not that you're poking around at it in a way that actually doesn't really provide that precision. My mindset about testing and production and resilience testing is that major principle. Everybody's always said like, “What are your rules for your team?” I'm like, “I have one rule. Don't take production down.” Because honestly, that's actually a meaningful issue for most companies, especially ones that are in the software industry.

I think the second piece of this puzzle for us is to build enough capability in your teams to understand what's a good test and what's not a good test, and to have that scientific set of principles about how you actually develop those tests, so that they work in your organization. That's essentially why I think – I'd love to say that eventually this tradecraft will be able to move into the teams. That's possible, and I think as we commoditize in the industry, these tests that you could run will actually be built by external companies, and there will be ways to create them so they can be tuned and tweaked and developers can run them.

I think it absolutely is possible for us to get to true DevSecOps, where a developer can build safe software, operate it, and eventually continually secure it and keep it resilient against attackers. I think that is eventually possible for an individual to do, but not without assistance. It's not without buying specialty capabilities. We have to, as an industry in my mind, be able to create that Nirvana, so that we're not also burdening people.

What I would say right now is if you look at some of the surveys that have come out, the DevOps, DevSecOps surveys about burnout and some of those things, well, the problem - and I did a huge study on this - is we're not seeing enough investment in small businesses that are trying to solve the commoditization of security in the way that it's actually going to be meaningful. Because I'm not sure that people really grok the full problem space of making it so that developers could leverage these services and capabilities, so that they can do the work of integrating it, but they don't necessarily have to invent and understand every facet of it, so that they're the expert practitioner.

Because I just think that's the difference between having a security team off to the side that does it for people, and having it be something that somebody can fully integrate into their workload.

[0:17:19.4] Guy Podjarny: Yeah, absolutely. I also love that you mentioned how your team now codes; that was actually one of the other bits that really – this is definitely a forward-thinking approach, and I see a lot of the guests on the show talk about how their teams today code. How have you seen the evolution there? What were some of the – again, you've been touting DevSecOps for a while. What was your timeline and your view of changing that skillset? Which skills do you feel are needed less, assuming you don't just want increasingly perfect individuals on the team to build –

[0:17:55.1] Shannon Lietz: How do you trade it?

[0:17:56.0] Guy Podjarny: Sacrifice more coding skills today.

[0:17:59.7] Shannon Lietz: Yeah, exactly. How do you trade the workload of today for the workload of tomorrow? It's definitely a challenge. I think when I first got started, I probably trivialized it a little bit, because I already had some coding skills, so I was rebranding it to myself and realizing it's important in my life.

At the time, it was an oversight on my part to be so cavalier about it being less than difficult, because I think it is a difficult practice to be a developer. I think there's so many things to consider. You're not just code slinging, if you will. You're actually looking at the human problem and trying to find an elegant solution that can be easy for people to really embrace. You're lowering the complexity for them, right?

When we first got started, I think it was like, well, Ruby's easy enough. Let's all do Ruby. There were some definite opinions about whether we would do Ruby or all the other languages of choice. Frankly –

[0:18:55.3] Guy Podjarny: There hasn’t been [inaudible 0:18:56.2] languages.

[0:18:57.7] Shannon Lietz: No, never. There's never an opinion in the bunch on that at all. I had a few people who could write some Ruby code and I had some people who do Java and this, that, the other thing. I think Ruby won out ultimately, because Metasploit was on Ruby and, well, a bunch of people had done modules and things like that. It was just easier that way. There's definitely a lot of hacking tools that started out in Ruby that have migrated to different languages.

Some of my team now does Python. We've definitely gone after different languages along the way. Some folks are doing Go. Everything has its place. When we first got started, it was easier for us to all go together on one language that was going to help level everybody up. Meaning, it was easy enough, it wasn't necessarily a compiled language. You didn't have to get onto all the harder stuff. We started with what I would consider an easier language to address. Some might actually find that to be different, right? They might say, “Hey, Ruby's not that easy.”

I'll say that that was just a choice that we made together. It started with only a few people and obviously, now most of my team codes. I can't even think of one person on the team at this point that doesn't code. If a manager has to do something, quite often they're breaking open a SQL query at the least, even to go run a report, as an example.

Even the managers are finding themselves having to code. They're putting things together, snapping in APIs. That's a big thing now. The question is, what do you really trade off? I would say – and I'm going to say it, because I think it's really what does get traded off – your policy migrates into code, and so you're not writing as many documents, frankly. I think that code that's well documented is really a wonderful thing. I don't think enough people put enough comments in their code at this point. I read code all the time and I'm like, "Could you just comment a little bit more? I don't know why you made that choice."

[0:20:48.9] Guy Podjarny: [Inaudible 0:20:48.11].

[0:20:49.9] Shannon Lietz: No opinions. No strong opinions at all. Over-commented code is also a disaster, so I know. I would say where the industry seems to be heading is we're lightening up on documentation. There's reams of paper that are being saved and trees across the world that have been released from the horrible death of paper policies. I think that's actually where some of it's coming from.

I also think that the other thing fueling the ability to migrate from one to the other is there's not as many meetings. It used to be that security was a meeting after a meeting after a meeting. The time that you were sinking into those things – convincing people, getting them to go do the work, managing them doing the work – all of that is basically being walked back to, "Hey, I have code that will solve that for you. If you could adopt it, that would be great." Literally, I'm seeing programs being built by people who know what needs to go into them, and that gets converted into something you need to onboard. Security is really migrating to the way of microservices, if you ask me.
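
As an illustration of what "policy migrating into code" can look like, here is a minimal, hypothetical sketch: a written standard such as "no public data stores" becomes a check teams onboard, instead of a document plus a follow-up meeting. The helper functions and data shape are invented for this example, not any real cloud-inventory API.

```python
from typing import Dict, List

def bucket_is_public(bucket: Dict) -> bool:
    # The policy rule itself, expressed once, in executable form.
    return bucket.get("acl") == "public-read" or bucket.get("allows_all_principals", False)

def check_no_public_buckets(buckets: List[Dict]) -> List[str]:
    """Return names of buckets violating the 'no public data stores' policy."""
    return [b["name"] for b in buckets if bucket_is_public(b)]

# Usage: wire into a CI/CD gate or a scheduled report instead of a meeting.
violations = check_no_public_buckets([
    {"name": "logs", "acl": "private"},
    {"name": "assets", "acl": "public-read"},
])
assert violations == ["assets"]
```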

[0:21:52.0] Guy Podjarny: Yeah. Those are great insights, right? Fundamentally, if you build solutions, you build tools, you're a service provider. You don't need to be peering over people's shoulders all the time, which – in the form of meetings, or chasing somebody to read the document – would take up your time.

[0:22:10.0] Shannon Lietz: Absolutely.

[0:22:10.9] Guy Podjarny: We're evolving as an industry – maybe not quite as fast as we'd want, but it's under evolution. What would you say today – you talked a little bit about DevSecOps, so if we cling to that term, what would you say are the biggest gaps? In rolling it out and rolling out the mindset, what areas do you feel we don't know how to do yet, or people are especially resistant to?

[0:22:39.4] Shannon Lietz: The stuff that I like to dig into. Over the years, there's lots of insights here. I would say that the biggest aha moment for me, the needle mover that's really starting to fuel people coming closer to a better state is having measurement. All the maturity models are right. It just takes a lot to convince yourself that they're right. I used to love and hate maturity models, because you're always writing so many documents to get to level three.

I keep telling people, why do you need level three when you can get to level four? Which is really measurement. I would say that with the DevSecOps thing, along the way the real challenge is, like we keep saying, culture. What I am finding – and it's again an aha moment – is it's really about how we talk about security and what it means to our businesses. Having some of that business acumen as security practitioners is just missing in our industry.

Now I'm spending a lot more time thinking about the business, if you will. What does it mean to have risk tolerance, as an example? Is security actually thought about at the business level? The answer commonly is yes, most companies consider it, especially public companies, because they are required to report on significant changes and outages, especially if they're going to materially impact revenue and things like that. I would say that the business is definitely attuned to the fact that those are happening.

I think the challenge is how do you actually take something that's non-monetary? You have things like fraud and other types of outages. They might be monetary. Some things are non-monetary. As an example, you might have an event that happens, an incident that happens. It takes time to resolve. You may have an investigation that you have to go do to make sure that nothing bad happens, right?

The question is, is that something for the books? Is it in your risk tolerance thought process? I think that's something that DevSecOps needs to address. Another couple of DevSecOps things that need to be addressed: where's the market? I mean, we really do need to commoditize. There are not enough capabilities and products out there at a significant level, and with the science of how you apply them, we just haven't figured out how to really get developers into the mix yet. My belief is that companies actually trying to solve the developer problem – commoditized capabilities and services, where you take security knowledge and capability, package it all up, and make it developer-friendly so developers know where to put it in their CI/CD pipeline – have a significant impact on making software more resilient, and the usage of that software pretty good too.

[0:25:22.9] Guy Podjarny: Amen to that, for sure. You and I have both talked a lot over the year, and one of the topics we're excited about is indeed trying to crack the measurement problem. You've alluded to a new buzzword, a new framework of yours called "securability". Tell us about it.

[0:25:38.6] Shannon Lietz: I am super jazzed about it, because we put a lot of time and effort into sciencing the heck out of security, right? Along the way, I used to have other measurements where I thought, if I could just teach a developer how to use this metric, it'll blow their minds and they'll love security and do something about putting security into their stuff. I guess I changed my mind about the quest, and I realized I actually need to figure out what developers care about, so that I can have them understand what security means to them, so that we can actually get them to address it as part of their process, whatever that might be – whether they're using CI/CD or they're hand-jamming their code. I mean, there's a lot of different ways in which software gets built.

Essentially, measuring the resiliency of software from a security point of view is essentially the craft, right? The idea behind a measurement that moves the world forward, I think, is in understanding the behavior you want. In my mind, the behavior I want is for a developer to be able to decide whether or not the security they have for their product is good enough. From my perspective, securability is a five 9s measure, because if you're going to do anything, you make it five 9s. I mean, I learned along the way – I worked for a telco. You learn a lot about five 9s, and eventually you get told five 9s isn't enough and you're like, "Are you serious?" I'm just going to go for the five 9s. Honestly, if somebody can show me a five 9s secured system, I would love it. It would be amazing. The way we've thought about this as meaningful is that you can utilize securability at a very low level on a single component, a library even, and you can also roll it up a little bit at a time, right?

Being able to roll up measures, I think, is also significant. That's a meaningful piece of the puzzle. From my perspective, securability being five 9s means it's now something where you don't actually have to teach a developer what five 9s means. You've already lowered that intensity of learning, right? Because you're already applying something that they're pretty consistent with.

The question is then, what's the denominator, from my security practitioner's perspective? Well, we've all wanted to know what the bill of materials was for anything we work on. If you can imagine, CMDB and some other types of systems are providing resource understanding for you, so you know what your attack surface is. There's all kinds of companies out there right now that are trying to tell you what your attack surface is from the outside, from the vantage point of an adversary, so that you know, "Hey, that's on the Internet. Did you know that?"

People are like, "Oh, my God. I didn't know that was on the Internet." Honestly, I think those are amazing companies, because they're really solving the denominator problem of figuring out what your bill of materials is. Once you figure out what your bill of materials is, then you essentially have the opportunity of figuring out all the known defects that are out there that could actually have a meaningful impact on your attack surface. As an example, you might have a CVSS 10 that's out there. That's going to apply to a handful of your resources maybe, or all of them.

Say you had a million resources with that same CVSS 10 – that's a bad day, because that's a lot of attackable surface, right? Then the question is, what do you do with that? What's the numerator on it? The numerator is the escape. I like to say that escapes are a variety of different things. I'll start with just a simple one, which is you've got an internal red team, they pwn you, they send you a ticket – in our case, it's a P0 ticket. You want to basically take that P0 ticket over that exploitable surface.

If you only have one on all those different resources, that means hey, you're really firewalling great. You probably have a good zoning and containment. Fantastic. You've got some mitigating controls in place and you're one over a million. I would love to be one in a million, right? That would be amazing. Again, your securability is super high. One in a million awesome.

Let's just say that you had a one-for-one problem. Let's say there's actually only one system out there that has a problem, but you're literally going to get an escape one-for-one. You have zero securability. That's a big problem. Then once you have that ratio – let's just say you have zero securability against that particular issue, and let's just say you have a lot of adversaries that would love to come after you, and they are, and they're going after that specific resource with that specific attack – you're breached. That's essentially a very simple way of explaining security to somebody who wants to understand it, wants to do the right thing.

I think that resilience capability is super important, and focusing on exploitability there, understanding how to bring your losses to bear. Companies have fraud against their systems all the time. They have security problems against their systems. The escape from the red team is one aspect, but you might also have losses in your incident capabilities, right? If you can imagine, why aren't you putting your incidents over your exploitable surface, right? If you had 30 incidents in a month and you know they applied to some of your exploitable surface area, your exploitable opportunities, then essentially you had a calculation that said you actually had more risk, and your risk was realized, right?

I think that that allows us to have people really take responsibility and be accountable for the security that they're implementing or not implementing, right? It makes it so it's super easy for them to know on the face of it without a lot of interpretation or subjectiveness that they're either doing well there or not.
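
To make the arithmetic concrete, here is a minimal sketch of the ratio as described – escapes (red-team P0 tickets, incidents) over the exploitable surface drawn from the bill of materials, expressed in "nines". The function names and the roll-up are illustrative assumptions, not a standard implementation.

```python
from typing import List, Tuple

def securability(escapes: int, exploitable_resources: int) -> float:
    if exploitable_resources == 0:
        return 1.0  # no exploitable surface for this issue
    return 1.0 - (escapes / exploitable_resources)

def as_nines(score: float) -> int:
    # 0.99999 -> 5 nines, mirroring the availability convention.
    nines = 0
    while nines < 9 and score >= 1 - 10 ** -(nines + 1):
        nines += 1
    return nines

def rolled_up(components: List[Tuple[int, int]]) -> float:
    # Roll component-level (escapes, surface) measures up by pooling them,
    # e.g. from a single library to a whole product.
    total_escapes = sum(e for e, _ in components)
    total_surface = sum(s for _, s in components)
    return securability(total_escapes, total_surface)

print(securability(1, 1_000_000))            # 0.999999 -> "one in a million"
print(as_nines(securability(1, 1_000_000)))  # 6 nines
print(securability(1, 1))                    # 0.0 -> zero securability, breached
```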

[0:31:08.9] Guy Podjarny: Do you see securability as a measure that every organization then develops for its own surroundings? You'd put in your own mileage: map out your security threats, your bill of materials and known vulnerabilities – things that are very clearly measurable, which could also be, whatever, misconfigurations, right? Buckets left open, or open access points. You do those, and then you see the exploits and you backward-calculate. What I'm referring to is putting in the time to historically understand the exploit surface you had and the incidents – whether full-on breaches, or just forensic findings – that happened on top of that, and then calculating it. Or do you see it as standardized – this is how we can measure security, the way five 9s for uptime are –

[0:31:58.6] Shannon Lietz: I think it's all.

[0:32:00.2] Guy Podjarny: Like a standard metric, right?

[0:32:01.3] Shannon Lietz: I think it should be a standard metric. I think you should have to put your bill of materials into your software; it rolls into a system; you have telemetry based on that bill of materials that helps you understand your attack surface; and you have testing going against it to help you monitor it. It should be a real-time system that helps you understand how you're doing on security from an LED's perspective, and it's measuring your resilience constantly. If adversaries are measuring your resilience too, then it should help you to find those problems as well.

I also think that you should be able to leverage that same methodology backwards-looking and figure out, hey, did we miss something? To your point, could you hand-calculate it? Absolutely. It'd be really easy if you have a bill of materials. Then going forward, you should be able to forecast it. What I like to say is that when somebody designs a system, they should be able to understand their bill of materials and where they think there might be adversary happenings. I could imagine in the future we're going to find a company out there that's going to say, "Hey, we're monitoring your bill of materials and we actually see adversary interest in these key areas," so if you have resiliency issues in those areas, the likelihood is very high that it's going to be a problem for you specifically.

I do think the way in which it's been invented is really important about it being specific to your company, but I also think it makes things shareable. If I wanted to share information with another company, I should be able to share the securability information in a reasonable way without necessarily telling somebody all the bits and details of my security program. Hopefully, that's also helping people have the conversation that says, “Hey, yours is 99.9%, but mine's 97% because we don't see the same adversaries as you do and the amount of adversaries that we encounter is much less.”

People are having those risk-based conversations in a meaningful way at a business level, because really, this isn't just the software developers, but it's also to solve for people that have to have those conversations, where you're not talking about hey, you're not doing it the right way. The how isn't the thing of focus anymore. You're actually talking about the why and the what, right?

You're really getting into the business level conversation of what is your measure? Why is that appropriate? If you can build trust on that why and what, because that's where you build trust, you don't build trust on how, you build trust on why and what, then you can actually create a meaningful ecosystem of people who are doing the right thing for the right reasons with the right intent, so that you can establish a much bigger barrier against adversaries.

[0:34:40.9] Guy Podjarny: How do you see – I mean, I think the idea is compelling, in the sense that we all aspire to a measure of how secure you are, or how securable, maybe, in this term. The bill of materials of known components, while there's some disagreement in the industry, has some factual elements: you were using this component, which has this known vulnerability. How do you see custom vulnerabilities in your code, or a misconfiguration – which are also security risks – fitting into this securability?

[0:35:13.7] Shannon Lietz: I love that conversation. It's not a different score. It's all the same. I'm so tired of us talking about whether it's in the library, outside the library, upside down from the library. Who cares? It is all part of the bill of materials. If you have a configuration, it's part of your bill of materials. You configured it a certain specific way to work with your software package. We really need to focus on a bill of materials standard that says: if I had to look at your system, rebuild it, whatever it might be, I could actually have information that suggests what risk you took and why.

If you wanted to leave port 80 open, I shouldn't have to find it out from some scanner out there in the world. I should know your intention was to leave port 80 open, or that it was a mistake and you're taking accountability for it. You'd have a system that even knows that your intent was this design. That bill of materials is also about your design constraints, and your design intent is really important in my mind.
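
As a concrete illustration of a bill of materials that carries intent, here is a hypothetical sketch; the schema and field names are invented for this example, not any existing SBOM standard.

```python
from typing import Dict, List

# A BOM entry that records design intent next to configuration, so an open
# port 80 reads as a declared decision rather than a scanner surprise.
bom_entry = {
    "component": "edge-proxy",
    "version": "2.3.1",
    "dependencies": ["openssl@1.1.1"],
    "configuration": {"listen_ports": [80, 443, 8080]},
    "intent": {"listen_ports": {80: "redirect HTTP to HTTPS", 443: "serve TLS"}},
}

def undeclared_exposures(entry: Dict) -> List[int]:
    """Flag configured ports with no recorded intent - accountability gaps."""
    declared = entry["intent"].get("listen_ports", {})
    return [p for p in entry["configuration"]["listen_ports"] if p not in declared]

print(undeclared_exposures(bom_entry))  # [8080] - nobody owned this decision
```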

[0:36:09.3] Guy Podjarny: In this model, the more detailed your bill of materials – to the extent that you provide more information – you might actually get a lower score. But you're not tricking anybody; it's your own system you're trying to measure. The more information in it, the more accurate your score is, whether it's higher or lower. Is that –

[0:36:26.6] Shannon Lietz: That's right. Well, and in addition, you benefit from providing a much more accurate bill of materials, because the downside to not doing it is that adversaries actually find it before you do, before your friendly partners do. It would be much better to be accountable for good security than to find out from bad guys. From my perspective, it's only a benefit to be able to identify these intents and designs, so that you can actually root issues out. I think that's about the principles of resilience, right? We all want to be resilient.

If we're afraid to actually put this information in because we might be judged by it, I think I would rather be judged by an internal friendly red team adversary than to be judged by an external unfriendly adversary who's going to cause your company to have challenges, right? From my perspective, they're very different.

[0:37:20.1] Guy Podjarny: Yeah. Very well said. Have you been experimenting with securability within Intuit? Are you using that measure?

[0:37:26.6] Shannon Lietz: Yeah, absolutely. We've been working with it directly for about a year and a half, so we've got lots of information and data. We've done a lot of work with it. I would say in the initial stages of doing anything different from what everybody else does, your why is so important. Honestly, I started looking around the industry and I questioned a lot of the things that were out there, because they just weren't solving some of the problems.

I believe securability will eventually lead to the capability of us all automating it, and even making systems able to do self-resilience. If you have good intent and you can do resilience measurement, eventually we might be able to automate risk most of the time, right? Automating risk and complexity, I think, is the right thing to actually chase. I was looking at most of the things that were out there, most of the frameworks, and there's nothing to say that they're bad – I actually think most frameworks are pretty awesome for somebody having even tried in the first place.

I just don't see anything that's really solving for that notion of automating this, so that it can actually be done by a system and become a support system for your developers. From my perspective, that was the why. I think at Intuit, we've done a good job of basically trying to always be better than we were last year at everything that we do. That's a wonderful aspiration and I love the mission.

From my perspective, securability has become a thing. Is it in its final state, where we're fully mature on it? No, we're not. I am definitely interested in the things that we have ahead of us, because securability is worth it. I think solving these problems is no small feat, because just like DevSecOps, what securability is missing right now is the companies that are going to help create it, change it, commoditize it, make it easy to digest, make it consumable.

If you look at the availability market, that's what securability could be for our industry. Look at the billions of dollars that have been generated by the monitoring and availability capabilities out there – there's a real market opportunity to be had in trying to bring this to bear for our developers.

[0:39:30.9] Guy Podjarny: Yeah. I love the idea. We talk about its effect on [inaudible 0:39:34.0] measuring security, because it is about capturing more than just security – also specifically security-related information, from configuration, to dependencies, to known flaws, to various other elements within this bill of materials that moves around. Then you're able to layer on top of that all the known attack surface and security flaws that you have.

Then once you do those, you measure it, because DevSecOps follows through on that. One of the core principles is that you measure it. If it moves, measure it. If it doesn't move, measure it in case it moves, right?

[0:40:16.0] Shannon Lietz: That's right.

[0:40:17.4] Guy Podjarny: We're doing that everywhere but not yet in the world of security. I would definitely be keen to see it evolve, and we'll definitely build there on our end.

[0:40:27.1] Shannon Lietz: I'd love that.

[0:40:28.6] Guy Podjarny: I think this is – we can probably go on here for –

[0:40:32.3] Shannon Lietz: For hours, probably.

[0:40:34.3] Guy Podjarny: An hour longer, but I think we're probably already a little bit over time. Before I let you go, Shannon, I like to ask every guest that comes on – anyway, you've already given a whole bunch of advice, but I'll ask for one more bit – if you have one small bit of advice to give a team looking to level up their security foo, what would that bit of advice be?

[0:40:56.3] Shannon Lietz: Yeah. For somebody who's looking to level up their security skills, I would say the one question you should ask yourself is: how many adversaries does my application have? Because it's the curiosity around that question that will lead you to better places. I think just having the goal of trying to answer that question will lead you to find people that you can contribute to, or collaborate with, who will help you answer it.

I think once you do answer that question, it's mind-blowingly obvious what you have to do to fix the problems that might actually be in your applications and in some of the code that you are writing.

[0:41:35.6] Guy Podjarny: Very cool. Well, definitely sound advice to focus on. Shannon, this has been excellent. Thanks a lot for coming on the show.

[0:41:43.3] Shannon Lietz: Thank you.

[END OF INTERVIEW]

Partager

[0:01:27.9] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning in. Today, we have really maybe one of the originators, the pioneers of DevSecOps with us and really a bright security mind in Shannon Lietz from Intuit. Thank for coming out to the show, Shannon.

[0:01:42.2] Shannon Lietz: Super excited to be here. I love this show.

[0:01:46.4] Guy Podjarny: Shannon, we have a whole bunch of topics to cover. Before we dig in, tell us a little bit about yourself. What is it you do? How you got into security?

[0:01:53.5] Shannon Lietz: Awesome. Yeah, I've been in this industry for over 30 years and that makes me a dinosaur, as I always say. I feel the placement journey on an ad is to really try and help the industry and take some of the lessons I've learned over that long career and really try to make a change. My goal at this point is really to make a dent in the security problem as a goal for my life and my career.

As part of it, I got into this basically with lots of curiosity and didn't even realize it was a mostly male journey. Nobody told me when I decided that computers were fun. I learned through lots of hard knocks, but basically this wasn't a path carved out for women. I thought, “You know what? The heck with it. I always do things that people tell me I shouldn't be doing.” I started out with computers at a really young age and eventually, learned how to do some really neat things that again, shouldn't have been done.

At the time, they called it hacking. I thought, “Well, you know what? I want to be a hacker, so cool.” Then eventually, it became illegal and I was like, “Okay, that's not a job.” My dad was horrified by the fact that this could be a problem. Eventually, it turned into actually it was a job. You just had to do it a certain way. That was the beginning. I mean, when I started in computers, nothing was really illegal per se. The Computer Fraud and Abuse Act was interesting and that shaped some of this industry.

Along the way, there's lots of trials and tribulations. Yeah, I started there and I've been a developer, so I've written code. I'm so sorry to anybody who's still maintaining my code, God forbid. Then as you look back on 30 years, you’re like, “Wow, I could have done a lot of better things.”

Then I got into the security and I've even done ops. I always said that if I needed to make money and pay my bills that I would ops for food, and so I ops for food. Then eventually, I smooshed it all together and created a term that some love and some hate and whether – here we are.

[0:03:50.9] Guy Podjarny: Yeah. Definitely has become the terminology of choice, the depth of the – we had a rugged DevOps, we had also some variance, but it's very clear that DevSecOps is the term that emerged.

[0:04:02.0] Shannon Lietz: That's cool, because I've got a new one coming.

[0:04:06.0] Guy Podjarny: We’ve got some great further pioneering here to air on the show. Just a little bit from a companies and industries’ experience and so we don’t completely jumped around, like a whole bunch of things. I think right now, you are at Intuit, right? Before that, you were at ServiceNow?

[0:04:23.9] Shannon Lietz: I was. I was at that wonderful other cloud company. I like cloud companies as they seem to be fun. I was also at Sony before that. I mean, my track record is pretty much financial. I did telco work. I mean, I've had about 22 companies that worked for in this period. I've been at Intuit now for almost eight years, which is the longest job I've ever had.

[0:04:44.3] Guy Podjarny: Yeah. Well, definitely changing the streak here. What is it you do at Intuit?

[0:04:47.9] Shannon Lietz: I run the red team here at Intuit. It's relatively large. I would say it's an adversary management practice. A lot of people think of red team as something that's relatively surprising. We put a lot of science behind our red team capabilities. We've really been working on moving it forward to adversary management and trying to make it so that we make this more scientific. I'm into a lot of math and science around trying to artfully measure the things that we all want to do, which is to make software more rugged and to make things more resilient, so that we can do the things we love, which is solve human problems.

[0:05:21.6] Guy Podjarny: When you talk about red team, what's the – I like to geek out a little bit about org structure.

[0:05:25.6] Shannon Lietz: Totally.

[0:05:26.2] Guy Podjarny: What does that red team get divided into?

[0:05:28.6] Shannon Lietz: I got to measure my red team recently to find out how many headcount I had. I was pretty surprised. We have about 53 people and we also just started a part of the red team in Israel. I've got four more people there that are doing red team. Actually, we've been pushing the bounds. We're applying more to application security and also, business logic issues. That's neat. I think that we're always the willing participants to emerge and try to innovate in a lot of different security spaces. I'm excited to see how that really advances us.

My org structure, I have mixed threat Intel with our red teamers. Also, we have this other group that basically runs a continuous exploit system. Essentially, we built containers to essentially exploit all the things that we worry about, so people can feel pretty comfortable that things are going 24 by 7. Internally, yeah.

[0:06:27.5] Guy Podjarny: Are these a known set of attacks? Is it more regression-minded, or is it more almost bug bounty? Like something that –

[0:06:34.3] Shannon Lietz: Yes. I say that way, because it's a mix of a lot of things. Anything that could cause us to have a security escape is something that we put into this engine. The way that I tell it is if you can conceive of it and it could be automated, it should go onto our platform and that platform basically runs it across our attack surface, for our production attack surface, our internal attack surface.

Not everything yet that I'd love to see it do but eventually my feeling is that that platform becomes really the way for us to continually level up against some of the exploitable surface. I think it's the way in which most companies are going to need to go and I think it's the path forward, is to really figure out how to do full resilience and regression testing against your attack surface for both the things that you know and the things that you learn and pull that information in to essentially get there before the adversaries do.

The big mission is get ahead and stay ahead of adversaries and understand your adversaries for your applications. I think that people design their software, but they don't think about what the potential anti-cases are, or thinking about – I always say that security is basically a developer's edge case.  

The security edge case is really important, but a lot of times people don't have time for it. My job in my mind is to make it faster for people to think about the edge case that it’s going to give an adversary an advantage, to allow business to do what it needs to do and figure out where the risks are to help mitigate those risks, so that we can do the things that help solve customer problems. Instead of - everybody's been talking the road to no. I got to tell you, early in my career, I was the road to no. Everybody would rat around me. It was the coolest thing. I was definitely that little picture with the little house in the middle, or the little guard shack in the middle and the gates and the snow.

I love that one, because it's always a reminder to me. I actually framed a copy for myself, so I could keep myself humble. Because the people that now I feel we support and subscribe to the things that we do to help them, they're coming, they're asking and that behavioral change was something that had to start in us first, in me first and then basically extends out to everybody that you touch and help.

I think that being meaningful in their lives to try and help change how people think about things is actually the journey forward for security. For me, this adversary management capability has extended into things like we're using AI, ML. Now my team fully codes. When I first started this, I remember I have this really cool little – with DevSecOps, I have this really cool little presentation and I framed that for myself, because we do these things to remind yourself of where you came from.

It had a snail and a hare in it and a little lane that I developed. I was trying to explain basically, this was the path to go faster in a secure way and a safer way. I'll never forget that, because I delivered it here in San Diego to the ISSA. It was a small room of about 30 or 40 people who had never heard of what DevSecOps was and they were like, “This lady's crazy.” I think it's been eight years since that talk. It feels like it just flew by and there's so many people now that you hear are starting to see more security in software that their products and services are getting better. Is it perfect? No. Have we taken a significant dent out of the stuff that was out there at one point? I think the answer is yes.

I just saw some metrics from github about how the fact that they have vulnerabilities showing up and 20% of all the vulnerabilities that are showing up, they basically have seen that they're getting closed. That's in no small part to a lot of companies that are out there that are providing that information to developers, so that they know about these things, that they're not having to go figure it out on their own.

I mean, for the companies I've worked for where that wasn't available, developers are like, “What should I worry about?” We're like, “Oh, we just need to go get CBSS for you and here's a set of a spreadsheet. Go figure it out for yourself, dude. Thanks.” I think that was a serious problem, because it inhibited their ability to develop safe software, because they didn't have the time to go figure out and crunch the spreadsheets. I mean, let's all be honest. That's basically a full-time job for a security practitioner. Something has to be able to build the software, so you can do something with it. From my perspective, there's a lot that goes into this.

[0:11:00.9] Guy Podjarny: There's a bunch to unpack. I want to make sure I was taking you like a bunch of a subsequent questions. Let me backtrack a little bit. First on the – I love that notion of that continue effect. Yeah, this is something to use exploits of the elements. I've often thought about bag boundaries and the likes as I almost like the continuous monitoring, or chronic monitoring, or telling you if something is wrong.

I think this type of internal system makes a lot of sense to ensure that the questions you know to ask at least alongside that the red teaming and the creativity get continuously asked and you don't go back and that you can go buy that at scale. How do you maintain that? Do you take feeds in from basically the botnets out there? Is it more about fixes or problems that before they have already seen in your surroundings? What would you say are rough primary ingredients of the types of attacks that get run?

[0:11:55.1] Shannon Lietz: Oh, gosh. We take in everything. If there's no data set, I turn away, because honestly, there's always nuggets in everything. They always tell you like, no two scanners actually scan the same things. They never scan them alike the same way. I think people are really creative about how they test for security problems, so we take in any bit of data we can get. We've taken in stuff from a variety of product vendors who do scanning. We're looking at the build materials, companies all the stuff we can get from them. Anybody who is basically asserting a CVSS score, a CPE score, a score of any type that would actually reflect a vulnerability of a significant risk. All of those things are useful.

To me, though, they're lagging indicators. The other thing we take in is threat intel. We're constantly looking for vendors and providers that have information that can help us get ahead. Why not be able to find the zero day? What about signatures that are written against your company specifically? Why not harvest those, use them, learn from them, and then replay them against your systems? Essentially, that's a really great way to build up your catalog of how to make yourself harder to beat from a resilience perspective. That took a lot of years to learn.

I will tell you, it was not, “Hey, by the way, this is what we're going to do,” eight years ago. It was a lot of trials and tribulations — and my little sign on the back wall here that says, “Bang your head here.” It's been banged a lot of times. I mean, hey.

[0:13:18.6] Guy Podjarny: You take all that information, and then your team, you said, codes and builds it — this is an operational system that runs those exploits against production, or more against staging [inaudible 0:13:28.9]? How do you handle that?

[0:13:30.4] Shannon Lietz: What is production? I mean, that’s really cool. We got rid of that in, what, 1980? No, I'm just kidding. Production to me is everything. Nowadays, even development systems are production, right? A lot of these capabilities that are out there are significant in ways you might not think. At this point, if your developers are down and productivity is lacking, aren't you down, essentially?

[0:13:57.2] Guy Podjarny: Absolutely. I love the approach — all these systems are production and they're all impactful. Oftentimes, one of the concerns that comes up when you run these continuous tests against, let's say, production –

[0:14:11.5] Shannon Lietz: Production?

[0:14:12.4] Guy Podjarny: When you run it, there's always this fear of, “Hey, you're going to mess something up.” [Inaudible 0:14:16.2].

[0:14:17.1] Shannon Lietz: Don't take production down. It's the one rule, right? That one rule. Don't take production down, which is why you've got to think about everything as production. If you delineate that this system is okay, but that system is not okay, to me you miss the major principle, which is if you're going to do resilience testing, you need to be mindful of the things that you're testing. You need to test your tests, right? That's a thing.

You need to be able to build your tests in a meaningful way — not just throwing garbage at a system, but throwing something that's precision-oriented, where you're looking for a specific issue and you're actually harvesting that issue as an escape, not poking around in a way that doesn't provide that precision. My mindset about testing in production and resilience testing comes down to that major principle. Everybody always asks, “What are your rules for your team?” I'm like, “I have one rule. Don't take production down.” Because honestly, that's a meaningful issue for most companies, especially ones in the software industry.
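As a rough illustration of that precision-oriented idea, here is a minimal sketch in Python, assuming a hypothetical read-only probe for two specific security headers, with the ticket-filing step stubbed out. This is not Intuit's tooling, just the shape of a targeted test versus throwing garbage at a system.

import urllib.request

EXPECTED = ["Strict-Transport-Security", "X-Content-Type-Options"]

def missing_security_headers(url: str) -> list[str]:
    """Return the expected security headers absent from url's response."""
    # Read-only GET: a precision probe should never risk taking production down.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return [h for h in EXPECTED if h not in resp.headers]

def record_escape(finding: str) -> None:
    """Stand-in for filing a P0 ticket in your tracking system."""
    print(f"ESCAPE: {finding}")

if __name__ == "__main__":
    url = "https://example.com"
    for header in missing_security_headers(url):
        record_escape(f"{url} is missing {header}")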

I think the second piece of this puzzle is to build enough capability in your teams to understand what's a good test and what's not, and to have a scientific set of principles for how you develop those tests so that they work in your organization. I'd love to say that eventually this tradecraft will move into the teams. That's possible, and I think as the industry commoditizes, the tests you could run will be built by external companies, with ways to create them, tune them, and tweak them, and developers will run them.

I think it absolutely is possible for us to get to true DevSecOps, where a developer can build safe software, operate it, continually secure it, and keep it resilient against attackers. I eventually think that is possible for an individual to do, but not without assistance — not without buying specialty capabilities. We have to, as an industry, in my mind, create that Nirvana, so that we're not also burdening people.

What I would say right now is, if you look at some of the surveys that have come out — the DevOps and DevSecOps surveys about burnout and some of those things — the problem, and I did a huge study on this, is we're not seeing enough investment in small businesses that are trying to solve the commoditization of security in a way that's actually going to be meaningful. I'm not sure people really grok the full problem space of making it so that developers can leverage these services and capabilities — so they can do the work of integrating them without having to invent and understand every facet like an expert practitioner.

Because I just think that's the difference between having a security team off to the side that does it for people, and having it be something somebody can fully integrate into their workload.

[0:17:19.4] Guy Podjarny: Yeah, absolutely. I also love that you mentioned how your team now codes — that was one of the other bits that really struck me. It's definitely a forward-thinking approach, and a lot of the guests on the show talk about how their teams today code. How have you seen the evolution there? You've been touting DevSecOps for a while — what was your timeline, and how did your view of that skillset change? Which skills do you feel are needed less, assuming you don't just want to add increasingly perfect individuals to the team to build –

[0:17:55.1] Shannon Lietz: How do you trade it?

[0:17:56.0] Guy Podjarny: Sacrifice more coding skills today.

[0:17:59.7] Shannon Lietz: Yeah, exactly. How do you trade the workload of today for the workload of tomorrow? It’s definitely a challenge. I think when I first got started, I probably trivialized it a little bit, because I already had some coding skills, so I was rebranding it to myself and realizing it was important in my life.

At the time, it was an oversight on my part to be so cavalier about it being less than difficult, because I think it is a difficult practice to be a developer. There are so many things to consider — you're not just code slinging, if you will. You're actually looking at the human problem and trying to find an elegant solution that's easy for people to really embrace. You're lowering the complexity for them, right?

When we first got started, I think it was like, well, Ruby's easy enough. Let's all do Ruby. There were some definite opinions about whether we would do Ruby or all the other languages of choice. Frankly –

[0:18:55.3] Guy Podjarny: There hasn’t been [inaudible 0:18:56.2] languages.

[0:18:57.7] Shannon Lietz: No, never. There’s never an opinion in the bunch on that at all. I had a few people who could write some Ruby code and some people who did Java and this, that, and the other thing. Ruby won out ultimately, because Metasploit was in Ruby and a bunch of people had done modules and things like that. It was just easier that way. There are definitely a lot of hacking tools that started out in Ruby and have since migrated to different languages.

Some of my team now does Python. We've definitely gone after different languages along the way. Some folks are doing Go. Everything has its place. When we first got started, it was easier for us to all go together on one language that would help level everybody up. Meaning, it was easy enough; it wasn't a compiled language; you didn't have to get into all the harder stuff. We started with what I'd consider an easier language. Some might find that debatable, right? They might say, “Hey, Ruby's not that easy.”

I'll say that was just a choice we made together. It started with only a few people, and obviously, now most of my team codes. I can't even think of one person on the team at this point who doesn't code. If a manager has to do something, quite often they're breaking open a SQL query at the least, even to go run a report, as an example.

Even the managers are finding themselves having to code. They're putting things together, snapping in APIs. That's a big thing now. The question is, what do you really trade off? I'm going to say it, because I think it's what really does get traded off: your policy migrates into code, so you're not writing as many documents, frankly. I think code that's well documented is a wonderful thing. I don't think enough people put enough comments in their code at this point. I read code all the time and I'm like, “Could you just comment a little bit more? I don't know why you made that choice.”

[0:20:48.9] Guy Podjarny: [Inaudible 0:20:48.11].

[0:20:49.9] Shannon Lietz: No opinions. No strong opinions at all. Over-commented code is also a disaster, I know. I would say where the industry seems to be heading is that we're lightening up on documentation. There are reams of paper being saved, and trees across the world have been released from the horrible death of paper policies. I think that's actually where some of it's coming from.

I also think the other thing fueling the migration from one to the other is that there aren't as many meetings. It used to be that security was a meeting after a meeting after a meeting. The time you were sinking into those things — convincing people, having them go do the work, managing them doing the work — all of that is being walked back to, “Hey, I have code that will solve that for you. If you could adopt it, that would be great.” Literally, I'm seeing programs being built by people who know what needs to go into them, and that gets converted into something you need to onboard. Security is migrating toward the way of microservices, if you ask me.
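To make the policy-into-code migration described above concrete, here is a minimal sketch in Python, assuming a hypothetical rule ("no security group exposes SSH to the world") that once would have lived in a written policy document. The data shapes and names are illustrative only.

def ssh_open_to_world(security_groups: list[dict]) -> list[str]:
    """Return a violation for every group exposing SSH (port 22) publicly."""
    violations = []
    for sg in security_groups:
        for rule in sg.get("ingress", []):
            if rule.get("port") == 22 and "0.0.0.0/0" in rule.get("cidrs", []):
                violations.append(f"{sg['name']}: SSH open to the world")
    return violations

# Instead of a policy PDF and a review meeting, the rule runs in the pipeline:
groups = [{"name": "web", "ingress": [{"port": 22, "cidrs": ["0.0.0.0/0"]}]}]
assert ssh_open_to_world(groups) == ["web: SSH open to the world"]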

[0:21:52.0] Guy Podjarny: Yeah. Those are great insights, right? Fundamentally, if you build solutions, you build tools, you're a service provider. You don't need to be peering over people's shoulders all the time — which, in the form of meetings or chasing somebody to read the document, would take up your time.

[0:22:10.0] Shannon Lietz: Absolutely.

[0:22:10.9] Guy Podjarny: We're evolving as an industry — maybe not quite as fast as we'd want, but it's happening. You talked a little bit about DevSecOps; if we cling to that term, what would you say are the biggest gaps? In rolling it out, rolling out the mindset, what areas do you feel we don't yet know how to do, or where are people especially resistant?

[0:22:39.4] Shannon Lietz: That's the stuff I like to dig into. Over the years, there have been lots of insights here. I would say the biggest aha moment for me — the needle mover that's really starting to fuel people coming closer to a better state — is having measurement. All the maturity models are right; it just takes a lot to convince yourself that they're right. I used to love and hate maturity models, because you're always writing so many documents to get to level three.

I keep telling people, why do you need level three when you can get to level four, which is really measurement? I would say that with the DevSecOps thing, the real challenge along the way — we keep saying it's culture. What I'm finding, and again it's an aha moment, is that it's really about how we talk about security and what it means to our businesses. Having some of that business acumen as security practitioners is just missing in our industry.

Now I'm spending a lot more time thinking about the business, if you will. What does it mean to have risk tolerance, as an example? Is security actually thought about at the business level? The answer is commonly yes — most companies consider it, especially public companies, because they're required to report on significant changes and outages, especially ones that will materially impact revenue and things like that. I would say the business is definitely attuned to the fact that those are happening.

I think the challenge is, how do you actually take something that's non-monetary? You have things like fraud and other types of outages. They might be monetary; some things are non-monetary. As an example, you might have an event, an incident that happens. It takes time to resolve. You may have an investigation you have to go do to make sure nothing bad happened, right?

The question is, is that something for the books? Is it in your risk-tolerance thought process? I think that's something DevSecOps needs to address. Another couple of things DevSecOps needs to address: where's the market? I mean, we really do need to commoditize. There are not enough capabilities and products out there at a significant level, and as for the science of how you apply them, we just haven't figured out how to really get developers into the mix yet. My belief is that companies trying to solve the developer problem — taking security knowledge and capability, packaging it all up, and making it developer-friendly so developers know where to put it in their CI/CD pipeline — have a significant impact on making software more resilient, and on making the usage of that software pretty good too.

[0:25:22.9] Guy Podjarny: Amen to that, for sure. You and I have both talked a lot over the past year about one of the topics we're excited about, which is indeed trying to crack the measurement problem. You've alluded to a new buzzword, a new framework for us called ‘securability’. Tell us about it.

[0:25:38.6] Shannon Lietz: I am super jazzed about it, because we put a lot of time and effort into sciencing the heck out of security, right? Along the way, I used to have other measurements that I thought — if I could just teach a developer how to use this metric, it'll blow their minds, they'll love security, and they'll do something about putting security into their stuff. I guess I changed my mind about that quest, and I realized I actually need to figure out what developers care about, so that I can have them understand what security means to them, so that we can get them to address it as part of their process, whatever that might be — whether they're using CI/CD or hand-jamming their code. I mean, there are a lot of different ways in which software gets built.

Essentially, measuring the resiliency of software from a security point of view is the craft, right? The idea behind a measurement that moves the world forward, I think, is in understanding the behavior you want. In my mind, the behavior I want is for a developer to be able to decide whether or not the security they have for their product is good enough. From my perspective, securability is a five-9s measure, because if you're going to do anything, you make it five 9s. I learned that along the way — I worked for a telco. You learn a lot about five 9s, and eventually you get told five 9s isn't enough and you're like, “Are you serious?” I'm just going to go for the five 9s. Honestly, if somebody can show me a five-9s secured system, I would love it. It would be amazing. I would say so, right? The way we've thought about making this meaningful is that you can apply securability at a very low level, on a single component — a library, even — and you can also roll it up a little bit at a time, right?

Being able to roll up measures, I think, is also significant; that's a meaningful piece of the puzzle. From my perspective, securability being five 9s means it's not something you have to teach a developer. You've already lowered that intensity of learning, right? Because you're applying something they're already familiar with.

The question then is, what's the denominator? From my security practitioner's perspective, we've all wanted to know the bill of materials for anything we work on. If you can imagine, CMDB and some other types of systems are providing that resource understanding for you — you know what your attack surface is. There are all kinds of companies out there right now trying to tell you what your attack surface is from the outside, from the vantage point of an adversary, so that you know: “Hey, that's on the Internet. Did you know that?”

People are like, “Oh, my God. I didn’t know that was on the Internet.” Honestly, I think those are amazing companies, because they're really solving the denominator problem — figuring out what your bill of materials is. Once you figure out your bill of materials, you essentially have the opportunity to figure out all the known defects out there that could have a meaningful impact on your attack surface. As an example, you might have a CVSS 10 out there. That's going to apply to a handful of your resources, maybe, or all of them.

Say you had a million resources with the same CVSS 10 bug — that's a bad day, because that's a lot of attackable surface, right? Then the question is, what do you do with that? What's the numerator? The numerator is the escape. I like to say that escapes are a variety of different things. I'll start with a simple one: you've got an internal red team, they pwn you, they send you a ticket — in our case, it's a P0 ticket. You basically take that P0 ticket over that exploitable surface.

If you only have one across all those different resources, that means, hey, you're really firewalling great. You probably have good zoning and containment. Fantastic. You've got some mitigating controls in place, and you're one over a million. I would love to be one in a million, right? That would be amazing. Your securability is super high. One in a million — awesome.

Let's say instead you had a one-for-one problem: there's actually only one system out there that has the problem, but you get an escape on it, one-for-one. You have zero securability. That's a big problem. Then once you have that ratio — let's say you have zero securability against that particular issue, and let's say you have a lot of adversaries who would love to come after you, and they are, and they're going after that specific resource with that specific attack — you're breached. That's a very simple way of explaining security to somebody who wants to understand it and wants to do the right thing.
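Here is a minimal sketch in Python of the arithmetic described here, assuming securability is one minus escapes (e.g., red-team P0 tickets) over exploitable surface, read in "nines" the way availability is. The function names and the roll-up are illustrative, not a published formula or Intuit's actual system.

def securability(escapes: int, exploitable_surface: int) -> float:
    """Return securability in [0, 1]; 1.0 means no escapes at all."""
    if exploitable_surface == 0:
        return 1.0  # nothing exploitable, nothing to escape
    return 1.0 - (escapes / exploitable_surface)

def nines(score: float) -> str:
    """Express a score the way availability is, e.g. 0.99999 -> '5 nines'."""
    n = 0
    while n < 9 and score >= 1 - 10 ** -(n + 1):
        n += 1
    return f"{n} nines"

def rolled_up(components: list[tuple[int, int]]) -> float:
    """Roll component-level (escapes, surface) pairs into one score."""
    total_escapes = sum(e for e, _ in components)
    total_surface = sum(s for _, s in components)
    return securability(total_escapes, total_surface)

# One red-team escape across a million resources: very high securability.
print(nines(securability(1, 1_000_000)))  # '6 nines'
# One escape on the single affected system: zero securability.
print(securability(1, 1))                 # 0.0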

I think that resilience capability is super important — focusing on exploitability, understanding how to bring your losses to bear. Companies have fraud against their systems all the time. They have security problems against their systems. The red-team escape is one aspect, but you might also have losses from your incident capabilities, right? If you can imagine — why aren't you putting your incidents over your exploitable surface? If you had 30 incidents in a month and you know they applied to some of your exploitable surface area, your exploitable opportunities, then essentially you have a calculation that says you actually had more risk, and your risk was realized, right?

I think that allows people to really take responsibility and be accountable for the security they're implementing or not implementing, right? It makes it super easy for them to know, on the face of it, without a lot of interpretation or subjectivity, whether they're doing well or not.

[0:31:08.9] Guy Podjarny: Do you see securability as a measure that every organization develops for its own surroundings? Your mileage may vary — you map out your security threats, your bill of materials, and known vulnerabilities. Something very clearly measurable could also be, whatever, misconfigurations, right? Known buckets left open, or open access points. You count those, then you see the exploits, and you back-calculate. What I'm referring to is putting in the time to historically understand the exploit surface you had and the incidents that happened on top of it — whether full-on breaches or just forensic findings — and then calculating from that. Or do you see it as standardized — this is how we can measure security, the way five 9s are for uptime –

[0:31:58.6] Shannon Lietz: I think it's all of the above.

[0:32:00.2] Guy Podjarny: It's a standard metric, right?

[0:32:01.3] Shannon Lietz: I think it should be a standard metric. I think you should have to put your bill of materials into your software; it rolls into a system; you have telemetry based on that bill of materials that helps you understand your attack surface; and you have testing going against it to help you monitor it. It should be a real-time system that helps you understand, at a glance, how you're doing on security, and it's measuring your resilience constantly. If adversaries are measuring your resilience too, it should help you find those problems as well.

I also think you should be able to leverage that same methodology to look backwards and figure out, hey, did we miss something? To your point, could you hand-calculate it? Absolutely. It'll be really easy if you have a bill of materials. Then going forward, you should be able to forecast it. What I like to say is that when somebody designs a system, they should be able to understand their bill of materials and where they think there might be adversary happenings. I could imagine in the future we'll find a company out there that says, “Hey, we're monitoring your bill of materials and we actually see adversary interest in these key areas,” so if you have resiliency issues in those areas, the likelihood is very high that it's going to be a problem for you specifically.

I do think the way in which it's been invented is really important — it's specific to your company, but it also makes things shareable. If I wanted to share information with another company, I should be able to share the securability information in a reasonable way without necessarily telling somebody all the bits and details of my security program. Hopefully, that also helps people have the conversation that says, “Hey, yours is 99.9%, but mine's 97%, because we don't see the same adversaries as you do, and the number of adversaries we encounter is much smaller.”

Then people are having those risk-based conversations in a meaningful way at a business level, because really, this isn't just for software developers; it's also to solve for the people who have to have those conversations, where you're not talking about, “Hey, you're not doing it the right way.” The how isn't the focus anymore. You're actually talking about the why and the what, right?

You're really getting into the business-level conversation of: what is your measure, and why is that appropriate? If you can build trust on that why and what — because that's where you build trust; you don't build trust on how, you build trust on why and what — then you can create a meaningful ecosystem of people doing the right thing for the right reasons with the right intent, so that you can establish a much bigger barrier against adversaries.

[0:34:40.9] Guy Podjarny: I think the idea is compelling, in the sense that we all aspire to a measure of how secure — or securable, to use the term — you are. The bill of materials of known components, while there's some disagreement in the industry, has factual elements: you're using this component, and it has this known vulnerability. How do you see custom vulnerabilities in your own code, or misconfigurations, which are also security risks, fitting into securability?

[0:35:13.7] Shannon Lietz: I love that conversation. It's not a different score. It's all the same. I'm so tired of us talking about whether it's in the library, outside the library, upside down from the library. Who cares? It's all part of the bill of materials. If you have a configuration, it's part of your bill of materials — you configured it a certain way to work with your software package. We really need to focus on a bill-of-materials standard that says: if I had to look at your system and rebuild it, whatever it might be, I'd have information that tells me what risk you took and why.

If you wanted to leave port 80 open, I shouldn't have to find that out from some scanner out there in the world. I should know your intention was to leave port 80 open, or that it was a mistake and you're taking accountability for it. You'd have a system that even knows your intent in the design. That bill of materials is also about your design constraints, and your design intent is really important in my mind.
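Here is a minimal sketch in Python of a bill-of-materials entry that carries design intent alongside components and configuration, as described above. The schema is hypothetical, not any published SBOM standard.

bom_entry = {
    "service": "payments-api",
    "components": [
        {"name": "openssl", "version": "3.0.13"},
    ],
    "configuration": [
        {
            "setting": "ingress port 80 open",
            "intent": "accepted",                  # deliberate design choice
            "reason": "HTTP-to-HTTPS redirect only",
            "owner": "team-payments",
        },
    ],
}

# A reviewer (or an automated check) can now distinguish a deliberate,
# owned risk from an unnoticed mistake found by an outside scanner.
for cfg in bom_entry["configuration"]:
    status = "declared" if cfg["intent"] == "accepted" else "undeclared"
    print(f'{bom_entry["service"]}: {cfg["setting"]} ({status}, owner: {cfg["owner"]})')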

[0:36:09.3] Guy Podjarny: In this model, the more detailed your bill of materials — if you provide more information, you might actually get a lower score, but you're not tricking anybody; it's your own system you're trying to measure. The more information in it, the more accurate your score is, whether higher or lower. Is that –

[0:36:26.6] Shannon Lietz: That's right. In addition, you benefit from providing a much more accurate bill of materials, because the downside of not doing it is that adversaries find it before you do, before your friendly partners do. It would be much better to be accountable for good security than to find out from the bad guys. From my perspective, there's only benefit in identifying these intents and designs, so that you can root issues out. I think that's about the principles of resilience, right? We all want to be resilient.

If we're afraid to put this information in because we might be judged by it — I would rather be judged by an internal, friendly red-team adversary than by an external, unfriendly adversary who's going to cause your company real challenges, right? From my perspective, they're very different.

[0:37:20.1] Guy Podjarny: Yeah. Very well said. Have you been experimenting with the securability within Intuit? Are you using that measure?

[0:37:26.6] Shannon Lietz: Yeah, absolutely. We've been working with it directly for about a year and a half, so we've got lots of information and data. We've done a lot of work with it. I would say that in the initial stages of doing anything different from what everybody else does, your why is so important. Honestly, I started looking around the industry and questioned a lot of the things out there, because they just weren't solving some of the problems.

I believe securability will eventually lead to the capability of us all automating it, and even making systems capable of self-resilience. If you have good intent and you can do resilience measurement, eventually we might be able to automate risk most of the time, right? Automating risk and complexity, I think, is the right thing to chase. I was looking at most of the frameworks out there, and there's nothing to say they're bad — I actually think most frameworks are pretty awesome, in that somebody even tried in the first place.

But I don't see anything that's really solving for that notion of automating this, so that it can be done by a system and become a support system for your developers. From my perspective, that was the why. I think at Intuit, we've done a good job of trying to always be better than we were last year at everything we do. That's a wonderful aspiration, and I love the mission.

From my perspective, securability has become a thing. Is it in its final state, where we're fully mature on it? No, we're not. I am definitely interested in what we have ahead of us, because securability is worth it. Solving these problems is no small feat, because just like DevSecOps, what securability is missing right now is the companies that will help create it, change it, commoditize it, make it easy to digest, make it consumable.

If you look at the availability market, that's what securability could be for our industry. Look at the billions of dollars that have been generated by the monitoring and availability capabilities out there — there's a real market opportunity in bringing the same thing to bear for our developers.

[0:39:30.9] Guy Podjarny: Yeah. I love the idea. We talk about its effect on [inaudible 0:39:34.0] measuring security, because it is about capturing the full picture — more than just security, but also specifically security-related information, from configuration, to dependencies, to known flaws, to various other elements within this bill of materials that moves around. Then you're able to layer on top of that all the known attack surface and security flaws that you have.

Then once you do those, you measure it — because DevSecOps follows through on that. One of the core principles is: measure it. If it moves, measure it. If it doesn't move, measure it in case it moves, right?

[0:40:16.0] Shannon Lietz: That's right.

[0:40:17.4] Guy Podjarny: We're doing that elsewhere and not doing it in the world of security. I'd definitely be keen to see it evolve, and we'll definitely build there on our end.

[0:40:27.1] Shannon Lietz: I'd love that.

[0:40:28.6] Guy Podjarny: I think this is – we can probably go on here for –

[0:40:32.3] Shannon Lietz: For hours, probably.

[0:40:34.3] Guy Podjarny: An hour longer, but I think we're probably already a little bit over time. Before I let you go, Shannon, I like to ask every guest that comes on the show — you've already given a whole bunch of advice, but I'll ask for one more bit — if you have one small bit of advice to give a team looking to level up their security foo, what would it be?

[0:40:56.3] Shannon Lietz: Yeah. For somebody who's looking to level up their security skills, I would say the one question you should ask yourself is: how many adversaries does my application have? It's the curiosity around that question that will lead you to better places. Just having the goal of answering that question will lead you to people you can contribute to, or collaborate with, who will help you answer it.

I think once you do answer that question, it's mind-blowingly obvious what you have to do to fix the problems that might actually be in your applications and in some of the code that you are writing.

[0:41:35.6] Guy Podjarny: Very cool. Definitely sound advice to focus on. Shannon, this has been excellent. Thanks a lot for coming on the show.

[0:41:43.3] Shannon Lietz: Thank you.

[END OF INTERVIEW]
