Episode 5

Season 1, Episode 5

Continuous Security At Chef With Adam Jacob

Guests:
Adam Jacob

In the fifth installment of The Secure Developer, Guy talks with Chef CTO Adam Jacob about the role security can play in DevOps and continuous integration/deployment. They cover the differences between baked-in and bolted-on security and how automation with Habitat can change the way developers approach secure coding.



"Guy Podjarny: For security, you would want security controls as part of the continuous delivery aspect of it so that they're always there."

"Adam Jacob: Security and the validation of that compliance is part of the process by which software goes into production. The difficulty of getting a language into CI is a huge burden in the adoption of a new language. The goal for a software developer, I think, is to get closer to those security engineers and to get closer to those operations people so that they better understand how the software they build can support that piece of their software's life cycle."

[00:00:37] Guy Podjarny: Hi. I'm Guy Podjarny, CEO and Co-Founder of Snyk. And you're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow. 

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. 

If you're interested in being a guest on this show or if you would like to suggest a topic for us to discuss, find us on Twitter @thesecuredev. 

[INTERVIEW]

[00:01:08] Guy Podjarny: Hello, everybody. And welcome back to The Secure Developer. And today, we have an awesome guest on the show, Adam Jacob from Chef. Adam, thanks for joining us.

[00:01:15] Adam Jacob: Hi.

[00:01:16] Guy Podjarny: And we'll talk about various sort of cool topics we teed up that I think are interesting with the world of security in the world of DevOps and CI/CD.

And some very interesting new package management, build system capabilities coming out of Chef that I think are very relevant to the security play. 

I guess before we dig in for the few of you that might not know Adam Jacob or Chef, Adam, do you want to give a quick intro of your background? 

[00:01:42] Adam Jacob: Sure. I think there's probably more than a few who have no idea who I am. I'm Adam. I wrote Chef originally. I'm the CTO at Chef. And I wrote a thing called Habitat not that long ago that does application automation. And that's sort of new stuff. 

And then, mostly, what all of that really boils down to is I've spent the last 10 years going around talking to big web companies like Facebook, and Google, and Yahoo. And I've also spent a bunch of time with startups. And then I've also gotten to go see really large enterprises, ranging from giant banks, to insurance companies, retail companies like Nordstrom, Walmart. 

And so, I get to just travel around and see what everybody is doing and see what they're worried about and try to help them get better in terms of the time it takes, or the speed to deliver, or their organization, or their culture. A bunch of it is software. But a lot of it is just helping people sort of understand how better to build their organization.

[00:02:42] Guy Podjarny: Yeah. I guess there's always this conversation about whether DevOps – well, continuous deployment maybe is a little bit more specific – but whether DevOps is really more about the tools or more about the people. And the consensus tends to be that it's more people.

[00:02:54] Adam Jacob: It's both. Right? If you have a great culture, it's really easy to say that you want to have a good culture. It's really easy to be like, "Oh, I want to empower people." Or, "We want to streamline a process," or whatever. It takes nothing. It's just words. And, usually, it's the technology that reinforces those cultural behaviours that hold you back. 

A good example is, "Oh, I want to do continuous delivery. But we use a terrible source control system that makes it almost impossible to do effective continuous delivery. But because that's the source control system we use, we'll never change." Therefore, which one's true? Do you want to do continuous delivery? Or do you love your bad source control system?

There's this reinforcing circle that hides inside there. And I think you see that all the time in security too. That same circle and that same behaviour, everybody says, "Oh, we want to be secure." I was working with a bank, who I won't name, on an engagement for a couple of weeks. I was going to help them do continuous delivery, and their first target was the hardened operating system. 

And I asked them, "Okay, do you have the hardening spec? Do you know what you want to harden?" And they were like, "Oh, yeah, yeah. Of course, we do. We're a global bank." And I'm like, "That's amazing. Great. I would love to see that." And they're like, "Oh, we'll have it for you on the day you arrive." And I left 3 weeks later and they had never found it. 

And what they had realized was it didn't exist. Everybody thought someone else had built that and that it was someone else's job. And no one actually did. It was just this loose conglomeration of stuff that theoretically they were supposed to do but no one actually could track or knew. And that's a global bank. That probably should matter.

[00:04:32] Guy Podjarny: Yeah. And I think sometimes it's about the mess or the fact that tools can help surface information and hold information in a way that is accessible. And sometimes it's just the sheer obstacle. Right? For many of these more sort of complex topics, when you talk about sort of deep ops topics around how containers operate or how some machines are orchestrated. Or in the world of security, when you talk about deeper security understanding about what's an attack. What's not? And that's sort of a constantly moving landscape. Just fundamentally, you have to have tools. You cannot overcome those problems by sheer education. And at the same time, if you have tools and people don't know how to use them or they have no understanding of what they're for, then you would eventually not achieve what you're aiming for. 

[00:05:14] Adam Jacob: Right. Context is everything. Right? Tooling is great. But it's not enough. Right? One of the things that we do at Chef in the security world is we have a thing called InSpec which is a language for letting you describe security posture in code. You can say this particular machine should have this particular policy. That policy means that port 25 should be open. You should be able to auth this way. You shouldn't be able to auth this way. You should be able to talk about packages being installed. You should be able to talk about all of those things. And then relate them back up to the actual security line items and talk about their severity. 

We care about this thing because of this piece of HIPAA. Or we care about it because of this piece of CIS or whatever. And those tools are great because they sort of combine the documentation of what the standard is with the executable check that says, "And are we meeting the standard? Yes or no?" 
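As a rough illustration of what "security posture as code" looks like, here is a minimal InSpec-style control. The `control`, `impact`, `tag`, `describe port`, and `describe package` constructs are part of InSpec's Ruby DSL; the control ID, the HIPAA section in the tag, and the specific port and package chosen are made up for the example, not taken from a real benchmark.

```ruby
# Hypothetical InSpec control: the ID, tag reference, and chosen
# port/package are illustrative, not from a real hardening spec.
control 'mail-1.1' do
  impact 0.7                          # severity, on a 0.0 - 1.0 scale
  title 'SMTP must not be listening on this tier'
  desc  'Only the mail relay tier may accept SMTP traffic.'
  tag   hipaa: '164.312(e)'           # relate the check back to the line item

  describe port(25) do
    it { should_not be_listening }    # executable check, not a document
  end

  describe package('postfix') do
    it { should_not be_installed }
  end
end
```

A profile of such controls is executed with the `inspec` runner, and each failure reports the severity and the tagged compliance line item alongside the failing check, which is what turns the standard's documentation and its validation into one artifact.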

And I think whether it's InSpec or tools like InSpec, when you think about that operational part of security and how it applies to those large enterprises, especially large enterprises, but the big web too, more and more, it's becoming that policy is executable. 

That conversation between security, and developers, and operators becomes a conversation around code as opposed to a conversation around documents, which then – and controls, which I think is really the conversation most people have now. Right? Like, "Oh, some security guy wrote a control. And then here's the list of people that say that they can validate the control." But is the control any good? 

[00:06:44] Guy Podjarny: Is it actually happening? 

[00:06:46] Adam Jacob: No. I mean, it's not. And that's true everywhere. Like, "Oh, we wrote a control. Here's the list of 10 people that you could go interview that know how to do that control." Okay, what about the other 10 people who could also do that, who aren't on the interview list that wasn't updated in the last year? Do they always do the procedure the right way? Do they always make that process right? And, of course, the answer is no. But we just sort of let it ride because the auditor passed.

[00:07:11] Guy Podjarny: Yeah. The notion of protecting yourself from audits, not from attacks, becomes increasingly common. I love that notion. With InSpec, or, in general, with security, one of the key challenges is that it's invisible. If you're not doing it, you really have no immediate indicator of the fact that it's not happening. The user experience, if you will, of not monitoring, not watching for a certain vulnerability, and the user experience of watching for it and having nothing happen is the same thing, which is nothing happens. Right? Which is good news. 

I think anything that helps define and articulate the controls that are supposed to be happening, giving you some mechanisms to understand that the action has been taken. So, you know that in InSpec, the blocking or whatever – say, if we use a simple example, the limiting of ports being open, that that has been explicitly articulated and explicitly tracked by the tools here. And you can get informed that it has happened there. We see this in Snyk a lot. We look for – we watch projects for vulnerabilities, for vulnerable dependencies. And then you have really – again, it comes back to the fact that you have the same user experience. If you're not watching a project or if you're watching a project for vulnerabilities and none happen. 

And we have a lot of these conversations with users today about how do you want to hear about it? How do you want to know that you're watching it? And, today, we remain fairly simple. We just remain in this ongoing report that just sort of shows you, "Hey, you're monitoring five projects or something." And they have those things. 

[00:08:39] Adam Jacob: [inaudible] whatever every time there's a change.

[00:08:40] Guy Podjarny: But it's great to sort of increasingly have those and have some requirement, whatever, in sort of Chef, in InSpec, that defines – it says, "Well, project X is being monitored for X, Y, Z," so you know it's being enforced.

[00:08:52] Adam Jacob: Right. And then you stick that in the pipelines. When you think about continuous delivery and security, security and the validation of that compliance is part of the process by which software goes into production. And it's part of the way that software gets maintained once it's there. And it's part of that build process. And you bake it in sort of throughout the SDLC instead of it being a thing that happens at the end. 
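As one hedged sketch of what "validation as part of the process by which software goes into production" can mean mechanically: a CI stage runs the compliance profile with a machine-readable reporter (here InSpec's JSON reporter) and fails the build when any control fails. The helper below parses that reporter's output shape; the sample report fragment and the idea of wrapping it in a gate script are assumptions for illustration, not a prescribed pipeline.

```ruby
require 'json'

# Given the output of InSpec's JSON reporter (`inspec exec <profile> --reporter json`),
# collect the IDs of controls that have at least one failed result. A CI stage can
# call this and exit non-zero to stop promotion when the list is non-empty.
def failed_controls(report_json)
  report = JSON.parse(report_json)
  report.fetch('profiles', []).flat_map do |profile|
    profile.fetch('controls', []).select do |control|
      control.fetch('results', []).any? { |r| r['status'] == 'failed' }
    end.map { |control| control['id'] }
  end
end

# Abridged sample report in the JSON reporter's shape:
sample = <<~JSON
  {"profiles":[{"controls":[
    {"id":"mail-1.1","results":[{"status":"failed"}]},
    {"id":"ssh-1","results":[{"status":"passed"}]}
  ]}]}
JSON

puts failed_controls(sample).inspect  # => ["mail-1.1"]
```

The point of the design is that the gate is just another automated stage: the same check runs on every change, throughout the SDLC, instead of a document review at the end.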

And that's one of those things that's obviously a good idea. As soon as you hear it, you're like, "Oh, of course. Security should be baked into the process through the whole thing." 

[00:09:21] Guy Podjarny: The whole build-in versus bolt-on.

[00:09:23] Adam Jacob: Yeah. Right? 

[00:09:25] Guy Podjarny: Of course, you want – 

[00:09:27] Adam Jacob: Sort of, "Duh?" But actually doing it is actually completely a whole other ball game. And the thing that we really came to realize, especially with InSpec, but just sort of in general, is that if you can't figure out how to manage that security posture the same way you manage the rest of what you do, it's really difficult to then tell a software developer that it's their responsibility to ensure that that posture is good or bad. 

Because, sure, they can make sure that they write good code. You can't hear the air quotes. But I was making little air quotes in my brain probably that – 
[00:09:58] Guy Podjarny: Yeah. I can attest to that. 

[00:09:59] Adam Jacob: – it's good. But you can't really ask them to understand the posture of what it's going to be like when it's deployed. Because the distance from a software developer making a decision to a software developer talking about how that software should be in production and what its posture ought to be is so vast. And their ability to influence it is so low that it's really difficult to come back to them and be like, "Oh, this was your responsibility. Clearly, it was on you."

And when those tools give you the ability to talk about it as code, they allow people to participate. You can code review them. You can have security people audit the code as opposed to auditing your documentation. And then, when you have those things as a living piece of that deployment model, I think everything gets better and it gets a lot more secure. 

One hard part of doing things that way is that it's very much not the way that most security regimes are set up right now. If you go talk to a security officer and you ask them, "Hey, could we remain compliant to whatever the standard is – HIPAA – if everything we did was continuously delivered?", nine times out of 10 the answer is just a flat no. 

And what's interesting is that there is one out of 10 where that security officer is like, "I don't know. Maybe. Tell me more about how that would work." And I think if I roll my clock back 3 years, it was 10 out of 10 that were telling me no and that that was crazy. And now it's like nine out of 10. And I predict in 6 months, it'll be six out of 10. And then in 2 years, asking whether you should be doing continuous delivery will be like asking whether you should be doing Agile. 

[00:11:28] Guy Podjarny: It comes back actually to documentation sometimes, or even to that document the big bank did not have. These things eventually come back to the documents and to the guidelines. At the end of the day, you go down the compliance route and you actually have that in the compliance document. And it says, "Well, if you are using continuous delivery, then, to be HIPAA compliant, you need to ensure that you're doing these things." And many of these regulations have the same flaw, the same notion that says, "Well, you have to do these actions," as opposed to, "You have to achieve these goals." They give the vague goals. But then everybody subscribes to whatever recommended actions have been prescribed, because that makes passing the audit the easiest.

[00:12:14] Adam Jacob: Yeah. I mean the relationship between the auditor and your security posture is pretty tight. The real test of your security posture in most cases is the auditor. Not a pen test or people actually trying to break you. I mean, they are. But you're not doing it proactively. 

And I think when you think about those CD pipelines, the idea that they're applied continuously, that as applications change, that you're reapplying the security posture to see if that application has done something that violates that posture. And that when you change the security posture, you're revalidating the applications and you're doing it sort of throughout the whole cycle. It's super powerful. And it's what people are starting to be able to do. I think you don't see it a lot yet. But it's sort of the future.

[00:12:56] Guy Podjarny: Yeah. I think that's interesting, to understand the potential versus what's happening. High level, continuous deployment and the whole infrastructure-as-code aspect – the fact that you've prescribed it, built it in – allows you to get predictability. What's where? And you know that a certain test or a certain enforcement has been done. To the extent of bugs, maybe, but at least not subject to human error. 

On the flip side, it requires security auditors to change how they behave. It's not just about that compliance. It's also the fact that, today, many of these security audits – even the example you gave of a security auditor reviewing your code – are done as gates. They're done in a way that says stop here, which is the antithesis of continuous. The whole notion of continuous is you just roll out. It's okay to pause for a moment. But you can't stop. You can't accumulate the backlog. Because, otherwise, you deteriorate. Unless you've automated that.

[00:13:44] Adam Jacob: I mean, yes and no. Right? There's continuous deployment and then there's continuous delivery. And they're not quite the same. Yes, in continuous deployment, there's nowhere to pause. I think, in continuous delivery, there is. In continuous delivery, the idea is that you should be able to ship anytime the business requires shipping or it makes sense to ship. And that's different than saying we just ship every time you commit. Right? 

And so, in continuous delivery, I think there's plenty of space to say, "Hey, this project, in order to ship, requires a security review." And that process can still be continuous. The question is whether that should be the only gate between you and getting to production. 

If they say yes, could you ship? And then the question, of course, is: is that true for every commit? Could you ship today if today was the time? The answer to that is very rarely yes. And so, that's where the difficulty comes in.

[00:14:37] Guy Podjarny: Yeah. And I think it's an interesting comment. I accept the delta between continuous delivery and continuous deployment. But it seems to me that from a security – well, first of all, even from a quality perspective, one of the values of continuous both delivery and deployment is the fact that you spot errors when they're small at any given time. 

Definitely, for continuous deployment, when you ship every small code change or whatever at some low resolution, when a problem occurs, it's much easier to pinpoint what the issue was, the source of the problem. That value proposition is pretty compelling in security as well, right?

If you are looking for a security flaw, being able to look at it only within the range of the 100 lines of code that changed, versus the 100,000 lines of code, or 100 million, that existed, is very valuable. If you accumulate those changes and you need to now do the security audit, I would argue you're not actually ready to deliver at that point. 

[00:15:28] Adam Jacob: I would too. With accumulation, the question is where you have been applying those things. This is where pipeline shape becomes important. Right? If you're doing continuous delivery well – or correctly, I would go so far as to say – then almost certainly what you have is an acceptance environment. 

The idea that you haven't deployed to production is separate from saying that you haven't deployed the code at all. You have deployed the code. You should have been deploying it. It should be in a running environment. You should be able to see it. It should be close to identical to production. And you should be able to run the exact same set of checks and gates that you would run when you deployed it to production. 

And that acceptance environment can live anywhere. In some organizations, like parts of Facebook, for example, that acceptance environment is a slice of production. Some percentage of production users see the acceptance version of Facebook. And that happens to you every once in a while. You'd be like, "That was weird. I saw this feature and then it was gone." 

And what happened is for that minute or whatever you were on the acceptance slice and you were seeing if everything was cool. And then in other organizations, that's a real environment that's completely walled off and users can never see it and whatever. But in that pipeline, in that flow, there's a moment where you're applying those changes, you're doing the same check you'd be doing in production. And then the question is do I or do I not want to ship it all the way out to all of my users? And I think that can still give you the same benefits of a continuous deployment model even if it has a gate. 

The tricky part is that what you can't have is arbitrary numbers of gates. And you can't have gates between environments. How many gates you have matters a lot. Automated deployment into five environments, where each one has an arbitrary gate, might take eight months for that code to work its way to production.

And each environment is different. And you don't do the same thing in each one. That's accumulation that's useless. That's dangerous. Because the time it takes for your change to get to production is so long, bad things happen to you for sure.

[00:17:24] Guy Podjarny: Yeah. It becomes irrelevant. I guess maybe if I translate that to practical steps, I would say you want – there's a certain amount of effort that you would be willing to make in just that sort of promotion to production if you will. That is okay. But that amount still needs to be contained in terms of duration. Probably in terms of manual effort. And it cannot scale linearly with the amount of code that's just been shipped. 

[00:17:49] Adam Jacob: Absolutely not. 

[00:17:50] Guy Podjarny: I think, in the world of quality, you'll talk about how you have a whole bunch of unit tests and all that, and you deploy to some staging environment or acceptance environment. And maybe over there, you're going to run a more extensive battery of tests – for duration, or cost, or whatever, or stress tests, or something like that – that you cannot afford to run on every commit. But you wouldn't sacrifice that. You would still aspire to find as many of those flaws as early as possible. 

For security, you would want security controls as part of the continuous delivery aspect of it so that they're always there. It's okay if, in the promotion to production, there's another security audit as long as it's contained – 

[00:18:28] Adam Jacob: Right. I would argue there should be another security audit. Right? You should repeat it.

[00:18:32] Guy Podjarny: Right. Some bigger pen test. But you probably should aspire to automate that as well. It just might be more expensive time-wise, cost-wise, complexity-wise. 

[00:18:41] Adam Jacob: Right. There's tradeoffs. Yeah.

That's exactly right. And the thing people don't tend to do is bake it into their pipeline. I personally believe that there's not 100 useful pipeline shapes. I actually think there's maybe one. And we get off track because we talk about – we use different vocabulary words sort of to talk about the different pieces. 

But in the end, the number of people who go through this journey toward continuous delivery and think about having security as a step is quite small. People don't think about it as a stage, or as a phase that opens as you move through deployment. And they have to start thinking about it that way if it's going to be pushed through the entirety of their organization, or if it's really going to transform the way they work. Until they start thinking about security the same way we think about operability and reliability – a thing that's your responsibility, that's baked in from the beginning, and that lives in the same pipeline and the same flow as the rest of your work – we'll continue to see failure there.

[00:19:39] Guy Podjarny: Yeah. I guess from an ecosystem perspective, you mentioned that not that many people think about it from security. Do you even feel that this is sufficiently encapsulated for you to indicate whether there's a trend of a growing amount of security activity in a pipeline? Do you see, in terms of the conversations you have about embedding security, leveraging Chef – are there recipes or cookbooks that are specifically focused on some security control or action? And are they increasingly used?

What do you see? 

[00:20:10] Adam Jacob: Yeah. The conversation we see especially in large enterprises is super hot. It's happening all the time. They don't have good answers. And they want them. And that's because there's this macroeconomic trend that's pushing every large enterprise to become a software company who is good at delivering software faster than their competitors. 

Smartphones, how many – I took an Uber and I didn't take a cab. Cab's cheaper. But it's pain in the ass to get the cab. So I didn't do it. And that same thing is happening in retail. It's happening in all these different markets. And so, that, over the last couple of years, has floated to like an executive C-level conversation. It wasn't before. Three or four years ago, we were having that conversation with like an operations director or maybe like a VP of engineering or something. But you weren't having it at the like CIO level, the CSO level, the CEO level of Citibank or someone like that. You are now. Right? And it's not one of them. It's all of them. Right? And so, the question for them really is what are we going to do? And how will it work? And so, that trend I think is pushing for answers. 

I think when it comes to specifics, Chef, the software – you can think about it as remediation in a security context. When you run Chef, it checks to see if the system is configured the way you said it should be. If it's not, it fixes it. There are lots of Chef recipes that do security posture, where each resource is both a check to see if it's right and, if it's not, a remediation that's sort of baked in.

And that is pretty effective. 
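A hedged sketch of that check-and-remediate pattern as a Chef recipe fragment: each resource declares a desired state, and on every run the Chef client tests the actual state and changes it only where it differs. The resource types (`package`, `service`, `file`) are standard Chef, but the particular hardening choices below are illustrative examples, not items from any real benchmark.

```ruby
# Illustrative hardening recipe: every resource is simultaneously
# the check and the fix. Chef converges the node toward this state
# on each run, so drift is corrected automatically.

package 'telnet' do
  action :remove              # checked every run; removed only if present
end

service 'sshd' do
  action [:enable, :start]    # ensure the service is enabled and running
end

file '/etc/shadow' do
  owner 'root'
  group 'root'
  mode  '0000'                # re-applied only if permissions have drifted
end
```

Because the recipe is idempotent, running it repeatedly is how the posture is maintained, not just established once.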

And what's interesting is how much of that you can or can't share. There's some things you can share, like the CIS benchmarks or whatever. They are what they are, and they tell you precisely what to do. And then you look at something like Sarbanes-Oxley, where basically there are no rules except for the rules that the auditor tells you. However the auditor interpreted that very small piece of legislation, that's your business now. 

And so, there can't be a generic Sarbanes-Oxley checker or like a generic Sarbanes-Oxley recipe that sort of fixes you. Because every organization's requirements are different. I think one of the things that you see in that space is there is more sharing than there used to be. It's mostly around best practices. Or it's around the sort of obvious benchmarks that you can hit really clearly. 

And then for any given security posture, for any given organization, maybe they can use something to start out with. There's a template or there's an example.

And you can sort of start with that and then you can add in the things that are specific to your organization. But there's always enough specific to your organization that there's a bit of a heavy burden. 

And I'm not sure how that gets resolved. Because it feels a little inherent to the nature of security within organizations that are still being built or already exist. You have an investment that exists in a particular posture. And so, from a developer point of view, I don't know how much impact you can have there other than trying to sort of understand the specifics of what your life looks like already.

[00:23:10] Guy Podjarny: I think a developer can still quantify the requirements. Not quantify maybe. But write down the requirements just like a security person would into those recipes. It's all about that predictability. 

I mean when you talk about one requirement coming in about which ports are open. And that requirement could come from a security person. But, really, if you're a developer and you've received it four times, you could also take initiative and put that in yourself. 

[00:23:34] Adam Jacob: You should. 

[00:23:35] Guy Podjarny: Yeah. And the other aspect is areas of application security, which are controls that the security team cannot keep up with. So you know that you've just installed an Express web server on this thing, or that you're opening this WebSocket port, and you can try to add some initial controls around it. You're the ones pulling in open source packages, and you want to put some controls around their viability. You might want to – 

[00:23:57] Adam Jacob: Revision. Origin.

[00:23:59] Guy Podjarny: Exactly. Their version. Known vulnerabilities. And then you would want to also control the actions, maybe, that the system as a whole is willing to do.

You can also define what actions come out. 
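A toy sketch of that kind of developer-owned control over dependencies: compare the packages you actually resolved against a list of known-vulnerable versions and flag any match. The advisory data here is hypothetical and hard-coded purely for illustration; in practice it would come from a vulnerability feed or a tool such as Snyk, and a CI gate would fail the build on any match.

```ruby
# Hypothetical advisory data: package name => versions with known
# vulnerabilities. A real pipeline would pull this from a feed,
# not hard-code it.
ADVISORIES = {
  'minimist' => ['1.2.0'],
  'ws'       => ['7.4.5']
}.freeze

# Flag resolved dependencies whose version matches a known advisory.
def vulnerable_deps(resolved)
  resolved.select { |name, version| ADVISORIES.fetch(name, []).include?(version) }
end

# Example resolved-dependency set, as a name => version map:
resolved = { 'minimist' => '1.2.0', 'express' => '4.17.1' }
flagged  = vulnerable_deps(resolved)
puts flagged.keys.join(', ')   # prints "minimist"
# A CI gate would exit non-zero here instead of merely printing.
```

The value is that the control lives in the same pipeline as the code that pulls the dependencies in, so it runs on every change rather than at audit time.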

And especially, I think, there's also this notion of the Dev, the Ops – I'm a little bit hesitant to separate those two, right, in the era of DevOps. But there are sort of Dev actions, Ops actions, and security actions, and the security actions are just way, way more separated today than they should be. 

As containers, for instance, come in and you're building a container and you're shipping it somewhere, or as you're building your software using some IT automation system like Chef, the separation between Dev and Ops becomes very, very blurry. But at the same time, somehow, security hasn't gotten into that mix. Still, for security, we – or not all of us, but all too often – expect some other security team to come in and tell us what it is that we're supposed to do here, and to own that responsibility. 

[00:24:59] Adam Jacob: Yeah. We just don't know who.

[00:25:00] Guy Podjarny: Yeah. Precisely.

[00:25:01] Adam Jacob: Yeah. I think that's exactly what – I think that's still true. I think the thing about DevOps that's easy to misunderstand in my experience is that it's not that people are generalists. I'm a systems administrator. And I'm really good at it. And mostly that's because I've been doing it for a really long time. 

I was 16 when I got my first job. And I'm 40. I've just done it enough that like I'm pretty good at it. But does that make me immediately a great security engineer or a great software developer? The answers are no. I'm not bad at either one because my discipline is close enough to those disciplines that like I sort of understand what's going on.

But DevOps is actually about having high-functioning teams of specialists who, because they're high-functioning and because they're working together, come to a better understanding of the holistic system. 

And so, when you think about security and security engineers, and then you think about software developers, the goal for a software developer I think is to get closer to those security engineers and to get closer to those operations people so that they better understand how the software they build can support that piece of their software's life cycle. Because it's not like you write the software and then it goes off and gets deployed and then some security stuff happens. 

In fact, all of that stuff – applications without infrastructure don't exist. Infrastructure without applications is wasted heat. Right? And if there's no security on either, then your customers will eventually stop trusting you, as they should, because you'll be awful. 

Understanding that that ecosystem is holistic and that you need all of those components to make it work means that you have to build a team that's capable of doing it. And then those teams need a way to work together. And I think that's what's pushing that trend toward continuously delivered security. 

Where do we work together? And the answer is on the code. And then the question is, "Well, I'm a security guy. Where do I put my code?" If I did that, which I don't now. But if I did, where would I go? And providing those answers, that's the next frontier. I think sometimes it's stuff like InSpec. I think sometimes it's things like better active scanning. It's better penetration testing. There's a bunch of those sorts of things hiding in there. And it's because it's not one size fits all.

Because it depends on the environment and it depends on your problem.

[00:27:24] Guy Podjarny: Yeah. I think I really like the idea of allowing a security person to write their code into that pipeline. Because, again, it doesn't sort of shake off the responsibility. Just like you might put some Ops requirements, it doesn't shake off the responsibility of building operable code. But it does allow the expertise to come into the code, into the continuous pipeline.

[00:27:42] Adam Jacob: Where other people can see it.

[00:27:44] Guy Podjarny: Yeah. Can see it. Can learn from it. Maybe can apply it after the fact to the next pipelines. 

[00:27:48] Adam Jacob: You can ask questions. It's right there. And you review it. Who reviews a security office – a security person's code? And the answer should be the developers and the operations people who have to deploy it. And vice versa. Right? And that's just not how we work in a lot of cases. And once we start working that way, everything gets easier. 

[00:28:04] Guy Podjarny: Yeah. Although you do need to – I think this is very much an environment in which we need better tools. Because most security tools today are not really built for that mindset. They're built for audit purposes, for manual operations.

It's really functionality like InSpec that would allow you to say, "Hey, you are not allowed to open a port," or whatever. Again, I'm sticking to the simplest environment here. And maybe it's some vulnerability feeds for the Ubuntu or Red Hat patch fixes that allow you to say, "Well, this system is unpatched."

But for many of these other things, around vulnerability scanning, around fixing those things, around pen testing, around authentication testing, there are amazing technologies out there that are absolutely and entirely not built for this use case. Probably useless in this context. And we need a new breed of tools. Right? 
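As a concrete sketch of the kind of pipeline check Guy describes, an InSpec profile can express both controls in code that lives alongside the application's. The port number, package name, and version floor below are hypothetical examples, not details from the episode:

```ruby
# InSpec controls (Ruby DSL): fail the pipeline if a forbidden port is
# listening, or if a package is older than a known-patched version.
# Port 23 and the '1.1.1' version floor are hypothetical examples.
control 'no-telnet' do
  impact 1.0
  title 'Telnet must not be listening'
  describe port(23) do
    it { should_not be_listening }
  end
end

control 'openssl-patched' do
  impact 1.0
  title 'OpenSSL must be installed at or above a patched version'
  describe package('openssl') do
    it { should be_installed }
    its('version') { should cmp >= '1.1.1' }
  end
end
```

Because a profile like this is just code in the repository, the developers and operations people deploying the system can review it the same way they review anything else.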

[00:28:55] Adam Jacob: It's the next frontier in terms of that tooling. Because you think about where that stuff fits in and how it's going to work. And it's so important. And it will absolutely get there. In the same way that you look at – you can look at like the popularity of languages, right? The difficulty of getting a language into CI is a huge burden in the adoption of a new language. And it's because until that tooling support gets to a certain level, people just won't – they won't do it. The number of people who will invest – 

[00:29:23] Guy Podjarny: There's only so much adoption you're going to get.

[00:29:24] Adam Jacob: There's only so much you're going to get. 

[00:29:25] Guy Podjarny: The die-hards.

[00:29:26] Adam Jacob: Yeah. And so, in security – security is showing up a little late in the conversation, but not too late. Time is funny. If you've been thinking about it for a long time, it always feels late to you. But for most people, it's super new. I think that you're going to see those ideas refashioned in a way that makes sense in a world of collaboration and continuous delivery. And it's super interesting and exciting. 

In terms of being an entrepreneur, or thinking about security as a place that you could invest as a software developer, it couldn't be a better moment. Right? Because the game is completely open on a bunch of very traditional industries and really entrenched software that you could probably displace simply by fixing the user experience around collaboration and continuous delivery, and steal a significant chunk of market share.

[00:30:18] Guy Podjarny: And fix it also in the right spot. And, again, kind of fix it for the future a little bit. Fix it for the systems that are being built right now and then retrofit to the back versus the other way around.

[00:30:27] Adam Jacob: Yeah. That's right. 

[00:30:28] Guy Podjarny: On that topic a little bit on tool rethinking, existing models, I know you've been working a lot on Habitat over the last year or two. Do you want to talk a little bit? We had some conversations about the security angles within that. Do you want to give probably a quick review of it and then talk about those? 

[00:30:46] Adam Jacob: Yeah. A super high-level overview of Habitat is that it's – we call it application automation.

And what we realized was that if what we were doing in all these organizations was really getting to a place where we were trying to ship applications faster and better, it's weird that all of our automation starts from the infrastructure and works up. It's just a weird starting place. 

And so, Habitat starts from the opposite direction. It says, "Okay, well, what does an application need in order to make that stuff work?" And then it kind of works its way down. An example of that is you should be able to build the application and have an artifact. That artifact should have all the things that it needs to run in every environment that it's going to run in, on any runtime it has to run on. And that should include all of its runtime configuration. It should include all of its dependencies. It should include everything that it needs. And then it also needs to include the infrastructure that it might need in order to do things like deploy into a complicated topology, or to update itself with a smart strategy, like one at a time or in pieces. 

And then you can start to make the conversation around how an application behaves be separate from the conversation about what infrastructure it runs on. Right? And what we have tended to see historically is that the infrastructure it runs on dictates the application in terms of its shape. 

You're like, "Well, if you want your application to be good to manage, then you have to run it on Kubernetes. Or you have to run it on this infrastructure style." And Habitat sort of flips that around and says, "Well, actually, the application should just be easy to manage." And then there's a conversation about what's the right place to run that application based on its own needs, and posture, and all the rest of it? 

One of the things we had to do when we built Habitat was think about things like, "Okay, how do those applications deploy? And how do they bring along their dependencies?" One real problem in continuous delivery is you have your application. Let's say you use OpenSSL, which is not crazy. A lot of applications in the world do. 

[00:32:37] Guy Podjarny: Yeah. Most likely, you are. Yeah. 

[00:32:38] Adam Jacob: Right. And if you're not, you're using it through some other thing you built that does need it. The odds that you rely on it somewhere in your stack are pretty high. Right?

And so, you have this moment where there's a security vulnerability in OpenSSL. And if you cart along all of your dependencies and you're responsible for your whole environment, how do I know that that vulnerability exists? Then how do I know that the software I'm running is running the right version of OpenSSL? And the historical answer to that is, well, I patch the machines. I go to Red Hat, or Debian, or Ubuntu, or Windows, or whatever. And I run the updates. And now I'm secure. Right?

The problem with that is that it's secure on disk. You've definitely updated the library. But has the application that's using it actually been restarted and read that library into memory? And how do you know? If you're using something like Chef, if you wrote the recipe right, then the answer is you could be pretty sure. Because you can put a trigger that says, "If this package is updated, restart this service." 
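The trigger Adam describes is Chef's `notifies` mechanism. A minimal recipe sketch might look like the following, where `nginx` stands in as a placeholder for whatever service has the library loaded in memory:

```ruby
# Chef recipe (Ruby DSL): upgrade the OpenSSL package, and only if the
# package resource actually changes, restart the dependent service so
# the patched library is re-read into memory. 'nginx' is a placeholder.
package 'openssl' do
  action :upgrade
  notifies :restart, 'service[nginx]', :delayed
end

service 'nginx' do
  supports restart: true
  action [:enable, :start]
end
```

The `:delayed` timing batches the restart at the end of the Chef run; `:immediately` would restart the service as soon as the package resource converges.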

[00:33:38] Guy Podjarny: Predictability at its glory. Right.

[00:33:40] Adam Jacob: Yeah. But that still relies fundamentally on the idea that someone knew to write it that way, and that someone reviewed it and got that done. And Habitat flips that over.

And, basically, we had to go all the way down to the bottom of the build system and build a packaging system that allows you to be really explicit about your dependencies. And then have those dependencies be the only possible things you load into memory. 

If there's a new version of your application, it might be because you have application code that changed. It might also be simply because a dependency moved. Right? And now you need to rebuild and make sure that your application works on top of that dependency. When we started Habitat, I didn't think I was going to have to go that deep in order to get that property of, "Okay, how can I tell you for sure what's running when you say, 'I'm running this version of this piece of software'?" And the truth was, the only way to get it was to go all the way down to the dynamic linker and make sure that the only possible thing you could link to is the right version. That was fascinating.
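The property Adam is after can be shown with a toy model (this is not Habitat's actual resolver, just an illustration): if every package's dependencies are explicit, the set of things an application can ever load is exactly the transitive closure of its declared graph, and nothing else.

```ruby
# A toy illustration (not Habitat's real resolver) of fully explicit
# dependencies: every package names its deps, and what an application
# may load is the transitive closure of that declared graph.
DEPS = {
  'myapp'   => ['openssl', 'zlib'],
  'openssl' => ['zlib'],
  'zlib'    => [],
}.freeze

def closure(pkg, deps = DEPS, seen = {})
  return seen.keys if seen[pkg]
  seen[pkg] = true
  deps.fetch(pkg, []).each { |d| closure(d, deps, seen) }
  seen.keys
end

# Everything 'myapp' may link against, and nothing else.
loadable = closure('myapp').sort
puts loadable.inspect
```

There are no implicit entries: if a library isn't reachable from the application's declared dependencies, it simply isn't in the loadable set, which is what pinning the dynamic linker enforces at runtime.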

[00:34:44] Guy Podjarny: Yeah. It makes perfect sense. It feels sometimes a little bit harder to digest when you come at it with the mindset of these containers, and VMs, and that evolution. But at Snyk – actually, this was at devopsdays Amsterdam – I had this conversation and talked about how at Snyk, in the code itself, we would unravel the dependency trees. In code, that's actually very much declarative, because in Node it's just a package.json file, and it says which dependencies are there. It's all your software bill of materials, if you will, even though much of it is implied. 

And then I had this conversation with somebody after the talk, who said, "Hey, can you do the same for this? We talked there about addressing those. Can you do that for Chef cookbooks?" And we had the whole conversation. And I've been spending a lot of time thinking about this since. And it seems like in the world of Ops today, you have the explicit dependencies and the implicit dependencies. Because you're carting along these VMs at the end of the day no matter what. These packaged operating systems have all these implicit dependencies – 
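As a rough illustration of why the Node case Guy mentions is the easy one, the declared dependencies really are just data in package.json. A short Ruby sketch (with a made-up manifest inlined instead of read from disk) can pull them out directly:

```ruby
require 'json'

# A made-up package.json, inlined for the example; in practice this
# would be File.read('package.json') from the project root.
manifest_json = <<~JSON
  {
    "name": "example-app",
    "dependencies": {
      "express": "^4.17.1",
      "lodash": "^4.17.21"
    },
    "devDependencies": {
      "mocha": "^8.0.0"
    }
  }
JSON

manifest = JSON.parse(manifest_json)

# The declared runtime dependencies are right there as data: the top
# level of the bill of materials, each entry declaring its own in turn.
deps = manifest.fetch('dependencies', {})
deps.each { |name, range| puts "#{name} #{range}" }
```

Nothing comparable exists for the implicit userland of a VM image, which is exactly the gap the conversation turns to next.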

[00:35:39] Adam Jacob: You're bringing the userland along.

[00:35:40] Guy Podjarny: Yeah. You're just sort of dragging them along. And it's really hard to track those, you know? How they're processed? It's much easier with the explicit dependencies, not the implicit ones. And it sounds like in Habitat, you're basically killing this notion of implicit dependencies. It's all explicit.

[00:35:52] Adam Jacob: That's right. And that was really hard. As developers, it was really hard to decide that we had to go that deep. But an example there is you can take a Habitat artifact and then you can export it to various formats. You could export it to a container or to a VM.

And what's in that artifact is nothing except your application, its explicit set of dependencies, and the explicit userland you decided to bring along. If you can get by with just a BusyBox userland, that's an explicit runtime dependency of your application. And, therefore, it will appear.

If you don't make it an explicit runtime dependency, there might not be a userland at all, other than a kernel if you needed it, if it was a VM. And that as an approach is so powerful. It takes a minute because you have to build an ecosystem of software so that people don't have to spend their time – 

[00:36:48] Guy Podjarny: Fortunately, we've done that once. 

[00:36:49] Adam Jacob: Yeah. Ideally, we're going to do it one time. I think we're at like 250 packages or something like that in core right now, which means a lot of the most common ones are already there. But everybody else's dependencies are common, and your dependencies are always the weird ones. That's sort of the rule. 

But I think when you think about how you get those guarantees, especially in security, I think the future is really rethinking the fundamentals in the frame of saying, "Okay, when we built these systems 20 years ago, our goal was to be able to use them at all." We needed to build them. Human beings were going to deploy them. The pace of innovation was slower. The pace of change was slower. We weren't delivering as much stuff to the internet. It was still weird to have a good website if you were a retailer. Amazon.com was losing money. That was a joke. Right? 

[00:37:39] Guy Podjarny: That's true up until two days ago or so.

[00:37:41] Adam Jacob: Yeah. And so, when you think about how it's going to work in terms of the future, and security, and also with Habitat, it's that willingness to dive deep and to say, "Well, actually, right, there's this principle here that's really valuable." Active scanning, for example. It's really valuable to be able to take an arbitrary artifact, scan that artifact, and tell you what's inside and what its dependencies are. That's incredibly valuable. And it gets a lot easier when you can look at that artifact and 90% of the data is standardized. 

But in order to get to that place, you have to be willing to break a lot of glass in between here and there. And those investments, they're just expensive. And I think as an industry, we have to make them. And as software developers, they're really fun to make. 

It's weird to get permission, for example, to like go recompile Linux from scratch so that we can give you a toolchain from [inaudible 00:38:34] that has explicit dependencies and no implicit dependencies. That was a super fun project. Because who tells you that that's what you should do today? You know what I mean? It was awesome. And people who built it had a really good time because they came to work one day and we were like, "All right. Yeah. F it. Let's do it. Let's get it done." And we did. And it's going to take a minute. And that's fine. And we're willing to make that investment. 

I think you'll see more and more of this from both security companies and individual software developers. The InSpec guys – that wasn't a corporate project for us. They were two guys in Germany who worked as consultants, and they built InSpec to solve that problem because they saw that niche in the market. And they did that as software developers. And they didn't have security backgrounds. I mean, they did. But they weren't like – 

[00:39:20] Guy Podjarny: They weren't security people.

[00:39:21] Adam Jacob: They weren't security guys necessarily. I mean they were. But they were software developers first. 

[00:39:26] Guy Podjarny: Not their primary job. Yeah.

[00:39:28] Adam Jacob: Yeah. And so, when you think about developers and security, for me the other piece of that that's really important is how much developers are going to drive that revolution in the way we approach security. There's so much interesting software development to be done. And you just need to find it. 

And then when you get there, you get to rethink all of these really fundamental assumptions about how the system operates. And you get to gain all this knowledge about weird esoteric internals, which if you're that kind of nerd is the best. 

[00:39:55] Guy Podjarny: There's a quote from – I forget if it's Gene Kim or Josh Corman – that to fix security, you need to leave security. And I like that sentiment, in the sense that you have to break out of the confines of your perspective on how you look at these things. Maybe the same concept applies here to packaging with Habitat. Even without ever primarily identifying yourself as a security person, you still have this opportunity, in this case specifically, to make a substantial improvement to the security posture. On one hand, it's better predictability. It's reduced attack surface. It's the opportunity to insert security controls as a component of the continuous delivery of the software that sets up your system. Yeah. Sounds like a lot of potential here. And most definitely sounds like a fun project to work on.

[00:40:43] Adam Jacob: Yeah. I think it's super fun. But when you think even outside of Habitat, there's the broader question of, "What can software developers do tactically to make impacts in security?" There's always that tactical list of things: think about your posture, run scans. There's tooling you can use that is helpful. There's books you can read.

I think, more than anything, it's be willing to understand – pick a thing that's interesting and be willing to dive deep into understanding how it works and why. And you hear a lot about, like, don't write your own cryptography. Right?

This is good advice. Because it's hard. Also, what that tends to mean is that people don't investigate how cryptography works because they've been told how hard it is. Do you know what I mean? 

[00:41:29] Guy Podjarny: Yeah. 

[00:41:30] Adam Jacob: And the truth is, the only thing that's hard about it is finding out where to start reading, and then how to go from one paper to the next. But once you do that, the mystery goes away. I mean, it's still – 

[00:41:43] Guy Podjarny: It's complicated.

[00:41:42] Adam Jacob: Yeah. It doesn't make you Dan Bernstein because you read the papers. But it's not like incomprehensible gobbledygook, you know? You can get it. And as a software developer, the more you invest in understanding those fundamentals from other disciplines, the better you get as a software developer. And when you think about that arc of your career, it's those abilities to play in other fields that are what make people more senior, and make you more valuable, and make you better on teams.

[00:42:12] Guy Podjarny: Yeah. Especially in the security space, which is one of the best assets to accumulate these days.

[00:42:17] Adam Jacob: It's huge. If you think about income, you think about like job security, it's an incredible space to be in. Right? 

[00:42:23] Guy Podjarny: Yes. That sounds like really, really good advice.

[00:42:25] Adam Jacob: Yeah. 

[00:42:27] Guy Podjarny: This was a great conversation. Thanks a lot, Adam, for coming on to the show.

[00:42:31] Adam Jacob: Thanks for having me.

[00:42:32] Guy Podjarny: And I think that's it for us today. Thank you.

[00:42:34] Adam Jacob: Bye.

[OUTRO]

[00:42:36] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones as well as over 100 videos about building developer tooling companies given by top experts in the field. 

Snyk is a developer security platform. Integrating directly into development tools, workflows, and pipelines, Snyk makes it easy to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code (IaC). Backed by industry-leading application and security intelligence, Snyk puts security expertise into every developer's toolkit.
