
Season 7, Episode 119

Securing The Modern Software Supply Chain With Adrian Ludwig

Guests:
Adrian Ludwig

The software supply chain is anything and everything that touches an application or plays a role in its development, from the beginning to the end of the software development life cycle (SDLC). As you might imagine, this makes software supply chain security a somewhat complicated task! Today, we are joined by returning guest, Adrian Ludwig, formerly of Nest and Android and now Chief Trust Officer at Atlassian, to discuss what ‘software supply chain security’ actually means, why it matters, and how you can help secure the supply chain of your product. As a self-described hacker in his early years, he was recruited by the Department of Defense at just 16 years old, and worked with them for several years to find security flaws in cryptographic and computer network systems. He has a fascinating lens through which he views today’s topic and, as you’ll discover in this episode, he has a real talent for clearly and efficiently explaining very complex problems. To learn more about Adrian’s interesting take on SBOMs and find out which processes, tools, and practices to invest in, make sure to tune in today!



“ADRIAN LUDWIG: The percentage of the code that's written by developers inside of your environment and under the remit of your specific organisation is maybe 10%, maybe 15% of what it is that you actually ship. Whether you're in cloud, or whether you're producing a mobile device – all of these, it's a conglomerate of third-party services, third-party software.

Could be licensed software through a negotiated license, could be open-source software. The vast majority is probably open-source at this point, which is super interesting, because they don't even know that you're using it, so they don't know that they're part of your supply chain.”

[INTRODUCTION]

[00:00:36] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd. That’s snyk.io/tsd.

[INTERVIEW]

[00:01:25] GUY PODJARNY: Hello, everyone. Thanks for tuning back in. Today on the show, we have Adrian Ludwig coming back. He was last on as Chief Security Officer of Atlassian. Now, he’s Chief Trust Officer, with more responsibilities. Adrian has a very impressive security background. You can hear more about his story building the mobile security model back at Android, and also a little bit about his marketing stint at Adobe in the middle. He has a really good lens and gives great definitions and explanations of complicated problems. Today, we're going to pick his brain about supply chain security, a topic that has its own complexities. Adrian, thanks for coming back on the show, this time, I guess, as Chief Trust Officer at Atlassian. You were a CSO last time.

[00:02:09] ADRIAN LUDWIG: Yeah. Evolving. They always are. Thanks, Guy. Happy to be here.

[00:02:13] GUY PODJARNY: This episode, we're going to dig into supply chain security. You're leaning into this as well, and you have different lenses on it, so I'm looking forward to understanding a little bit of what's in your head, how you see the space, and some practicalities. For a start, how would you describe software supply chain security? What does that entail in the first place? Small question.

[00:02:34] ADRIAN LUDWIG: Yeah, yeah. I mean, it's a multi-armed octopus. I'm not sure what the right metaphor here is. The reality, I think, for most – actually, probably for everyone at this point – is that the vast majority of the software that you actually ship comes from somewhere else. The percentage of the code that's written by developers inside of your environment and under the remit of your specific organisation is maybe 10%, maybe 15% of what it is that you actually ship. Whether you're in cloud, or whether you're producing a mobile device – all of these, it's a conglomerate of third-party services, third-party software.

Could be licensed software through a negotiated license, could be open-source software. The vast majority is probably open-source at this point, which is super interesting, because they don't even know that you're using it, so they don't know that they're part of your supply chain.

[00:03:21] GUY PODJARNY: They being the maintainers of the open source.

[00:03:23] ADRIAN LUDWIG: Yeah, the maintainer of an open-source project has no idea who it is that's using it. That's one of the things that we've been trying to tackle in thinking about open-source security: how do you even connect the dots of who to notify if you have a vulnerability in those kinds of things?

I think at this point, the well-defined elements of our supply chain are the ones where we have an explicit license agreement and a mutual understanding that a piece of technology is going to be used in our environment. Then the area where we have the greatest exposure in the supply chain is open-source software, where we know we're using it, and we have an obligation to maintain it and make sure that it's being kept up to date in our environment, keep an eye on it, test it, etc. The maintainers probably have no idea that we're using it.

[00:04:04] GUY PODJARNY: Yeah. Yeah, interesting. Within software supply chain security – sort of a mouthful – oftentimes there's at least one split: between the components themselves that you're using, versus the journey that they make. Do you see them as the same? Do you see them as different? How do you think about those two?

[00:04:22] ADRIAN LUDWIG: Yeah. It gets complicated really fast. In an ideal world, for each of the components that we're using, we would have a sort of singleton that we're able to manage in a well-defined way inside of our environment, and the journey would be a well-defined journey as well. In practice, in an organisation our scale – probably in any organisation bigger than two pizza teams – there are going to be multiple instances. They're going to be migrated at different times. They're going to have dependency chains that are different. They may or may not be imported as an object. They may be imported as source code and then forked at some point. That journey, and understanding what that journey is, is definitely an area of complexity.

One of the things that I'm very intrigued by is trying to see if we can reverse engineer those journeys and then make them simpler, so that we can move to a world where we're just managing the objects. People talk a lot about SBOM. The reality is SBOM describes just the objects. It doesn't begin to incorporate what those journeys are, and so we're a long way off as an industry from really beginning to understand the complexity of the situation.

[00:05:25] GUY PODJARNY: If you had to choose between the two – one being which components you're using, whether the maintainers know you're using them, and whether they're secure; the other being how the components are being consumed – the security flaws in each of those, which one keeps you up more? Which one is more concerning?

[00:05:42] ADRIAN LUDWIG: I mean, I think it's probably going to change over time. I think at this point, just knowing what the Lego bricks are that you're using is sufficiently challenging. Getting to a point where you can manage it at that level would be really helpful. Then I think the next phase is going to be dealing with how they're being ingested, how they're being manipulated, how they're being managed. Now, it's just inventory.

[00:06:02] GUY PODJARNY: Yeah. I think I agree with that. Fundamentally, securing the journey assumes you know whose journey it is and how you're even tracking it. We talked about the open-source components; maybe there's commercial software that you're consuming as well. What about the actual build system? That's oftentimes also touted in supply chain security. Is that number three on the list? Where would you rank securing your own build systems from being tampered with?

[00:06:28] ADRIAN LUDWIG: Great question. I think I would put it as number two. What you're intending to put into your production environment is step number one. Then step number two is, well, maybe the thing that I'm intending to put into my environment isn't exactly what I'm putting into my environment – which is your development environment, your tool chain. Is there a compromise there? Is there somebody doing something malicious in that environment? Obviously, that's a priority. I think it's less of a priority than the unwitting, accidental vulnerability that can be introduced by just not knowing what it is that you're actually shipping. After that, I think, is where you begin to get into smaller and smaller exposure through commercial products, for example, or third-party services that have been integrated. That's the one-two-three stack ranking I would give it.

[00:07:12] GUY PODJARNY: Yeah. Makes sense. Within the title of supply chain security, for the products that we use internally, what would you say are the top practices that you need to adopt to do this well, to secure them well?

[00:07:25] ADRIAN LUDWIG: I think a lot of times, there's a discussion, “Oh, I need to see the SBOM. I need to understand what's inside of your environment. Oh, I have a security review that I need you to complete as a vendor. Show me all the details of how it is that you have implemented your thing.” There's a lot of focus on the actual third party. What I think I have found is that there's less focus on how that integration is going to work. In particular, how isolated is it going to be?

If I assume that that vendor has been compromised, or if that component has been compromised, does that necessarily mean that my entire environment collapses? I think the best security organisations are likely to be ones that are thinking about each of these components, and then thinking about how those components are integrated into their environment – whether it be an actual software component, or whether it be a third-party SaaS service, or whatever – and defining the boundaries between them and the expectations, and then monitoring those boundaries.

That, I think, is the holy grail for a well federated internal environment, is one where you're using lots of services that don't have systemic risk across them, and they're isolated from one another. So that if your e-mail provider gets compromised, it doesn't take down your messaging provider and it doesn't take down – That I think is where you ultimately want to end up. 

[00:08:40] GUY PODJARNY: In this context, you're leveling up the supply chain security definition well beyond open-source components. You're saying, just in general, in the software factory that you're building, as you deploy, say, a microservice or a system to the cloud – or maybe it's not the cloud, but it doesn't matter, for all intents and purposes.

[00:08:57] ADRIAN LUDWIG: It's true for a component as well, right? You should have a well-defined understanding of what it is that's flowing into that component, if you can. I mean, it's tricky. I have the naive simplicity of a high-level architect, as opposed to somebody who's actually wiring these things together and realises that you have no idea. Yeah, if you have a component and it's operating only within a subset of microservices, then you can actually enumerate what could be exposed to that component. Then you can reason about what level of risk is associated with that component being compromised.

We don't tend to think that way. I think as humans, we tend to think of it as a soup. If you architect it to separate these pieces out, you're not actually accepting a lot more risk by having a more complex supply chain, because you've isolated across the different components.

[00:09:41] GUY PODJARNY: In practical terms, which processes and which tools would you need to successfully do that?

[00:09:48] ADRIAN LUDWIG: For us, one thing that we do in our production environment is that microservices are considered to be a security boundary. There's authentication between them. There is encryption between them. The set of interactions that is expected between those microservices has policy applied to it. In practice, it's difficult. But in theory, you can very quickly say, “Ah, this component was compromised. Maybe it was compromised because somebody found out that there's a vulnerability. Maybe it was compromised because somebody found out that the component had actually been manipulated, like we've seen in NPM and a few other places – not to name names.” It could be happening anywhere, I promise.

[00:10:21] GUY PODJARNY: Yeah

[00:10:22] ADRIAN LUDWIG: It's a place where we've seen it. Then you can say, “Okay, what could that have possibly accessed?” You actually can reason about it, which is very different from a classic monolithic environment, where you just have no idea, because it could have access to everything, and there's no way to constrain that. We see that variation when we look at, for example, on-premise products versus cloud products. Cloud gives you the ability to have that architectural isolation between microservices, and on-prem doesn't. The same vulnerability in the two different environments has a very different impact.

[00:10:55] GUY PODJARNY: Yeah. I think that makes a lot of sense. If I echo this back, though, you're saying that in practical terms, it is easier to apply this philosophy at the resolution of, say, a microservice. In those cases, there's something very practical you're doing here, which is enforcing those as a security boundary. In concept, though, as a community, as an industry, if someone builds further tools – and I know there's a bunch of startups trying to do things in this world – you could actually further narrow the security boundary to the component level. At the moment, if you're going to implement something, which you are, in your own environment, microservices are a reasonable place to start.

[00:11:29] ADRIAN LUDWIG: Yeah. Especially if you have an application that can be broken into 100 or 200 microservices, then you get a huge amount of isolation by having done that breakdown.

[00:11:38] GUY PODJARNY: Yeah. I love that fundamentally, that's just good security practice. It's also a good reminder that while you may not have full visibility, full understanding of the components you consume, or their journey, or whatever it is, what you can do is put them in a package that constrains what they can do, and monitor that package.

[00:11:56] ADRIAN LUDWIG: Yeah. It's small enough to be useful. The interaction from one microservice to the other is templatized. The set of API calls that can be made is pretty narrow. The set of data that can go back and forth is pretty narrow. It's not hard to imagine a world in which you can monitor that with a very high degree of precision, to make sure that there's nothing fishy going on from one to the other.
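To make that concrete, here is a minimal sketch of the kind of templatized, per-pair policy check being described. The service names, routes, and policy structure are hypothetical; in a real deployment the allowlist would live in declarative service-mesh or network policy rather than application code.

```python
# Hypothetical sketch: every microservice-to-microservice call is checked
# against a narrow, templatized allowlist of expected interactions.

from dataclasses import dataclass

# Expected interactions: (caller, callee) -> the narrow set of allowed calls.
POLICY: dict[tuple[str, str], set[str]] = {
    ("checkout", "payments"): {"POST /charges"},
    ("checkout", "inventory"): {"GET /stock/{sku}"},
    ("payments", "ledger"): {"POST /entries"},
}

@dataclass
class Call:
    caller: str    # authenticated service identity, e.g. from its mTLS cert
    callee: str
    endpoint: str  # normalised route template, e.g. "GET /stock/{sku}"

def is_allowed(call: Call) -> bool:
    """True only if this exact interaction appears in the policy."""
    return call.endpoint in POLICY.get((call.caller, call.callee), set())

def monitor(call: Call) -> None:
    """Alert on anything outside the expected templates."""
    if not is_allowed(call):
        # A compromised component shows up immediately as policy violations,
        # so you can reason about what it could have accessed.
        print(f"ALERT: unexpected call {call.caller} -> {call.callee} {call.endpoint}")

# Example: a compromised checkout service trying to reach the ledger directly.
monitor(Call("checkout", "ledger", "POST /entries"))
```

Because the allowed surface is so narrow, anything a compromised component tries beyond its template is both denied and visible, which is exactly the containment property being discussed.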

[00:12:16] GUY PODJARNY: Yeah. Not to name specific names, but do you feel like the tooling in the ecosystem today is able to give you that more out of the box? Or do you need to invest a fair bit in figuring out those templates and enforcing them? I mean, how much of this can you purchase your way into with tooling, versus things you have to build?

[00:12:33] ADRIAN LUDWIG: I actually don't know. I suspect, if I go talk to probably a dozen vendors, they would tell me they could do it right out of the box. In the environment that we're in, we've done a lot of it in-house already, so we haven't really looked for it. Now, my suspicion is that the types of automated things that are out there are probably a lot better than what we have. That's what I would expect, if you have a team of 50 engineers that are working on something specifically to bring it to market. This is the classic trade-off, right: do I want to invest in something that's bespoke to my environment, or do I want to go along with a larger investment that's shaped by a variety of different customers that have input into the solution? Right now, what we have is mostly bespoke.

[00:13:17] GUY PODJARNY: If you architect it, if you build it yourself, then you can structure it in that fashion, because you build it on a somewhat consistent platform that applies those templates. This approach, what we talked about up until now, indeed focuses on internal applications. You're running them, you're hosting them, and so you can apply those parameters. Maybe let's shift gears to the software that you deliver. Atlassian also ships software that customers run on their own systems. I guess, with the new supply chain security expectations, what has changed in the practices you need to have to deliver software to customers?

[00:13:50] ADRIAN LUDWIG: Expectations for what is acceptable, in terms of providing visibility into what's in your supply chain and how quickly you respond to issues that arise in it, have changed pretty dramatically in the last two years. Within 24, maybe 48 hours of a significant open-source vulnerability being announced, we have customers banging on our door, asking us, “Have you scanned your entire environment? Have you fixed every single instance of it?” It's like, “Hang on.” Yes, fortunately, we're usually able to answer. That rate at which people demand answers has radically changed. I think, at this point, you need to be capable of scanning your whole environment and answering those kinds of questions. That's just the direction the industry is going.

I think the other element is, even if something is not vulnerable, if it exists in your environment, there's an increasing expectation that it will have been fixed. Three or four or five years ago, it was easy to say, “Yes, that vulnerability exists. It's in an area of code that we do not invoke. Therefore, it's not really a vulnerability. It's just a bug. It’s basically in dead code.” At this point, we have lots of customers that are saying, “That's not good enough. I hate to say this, but we fundamentally don't trust your analysis, and we think that no human in the world is possibly smart enough to make that conclusion. Go fix it.” It's like, “Well, hang on.” That is the guidance that we've always received from NIST. Prioritisation of things that are actually exposed – we know that that's the right way to go, but there's just such a deep-rooted lack of confidence.

Really, what it is is overconfidence in the exploitation community. That makes it really difficult to reason about those things. I think we're moving towards a model where, if anybody finds a vulnerability in any library, then that library has to be up to date. That ends up creating a lot of back pressure on making sure that you're always up to date on everything. Because if you skip one, two, three, four, five releases because there's no vulnerability in those updates, then when a vulnerability shows up in the sixth release, whether you're exposed or not, you need to fix it – and you need to do it very, very quickly. You need to have the training within your team and your organisation to be able to do that. You can't have tech debt of five releases or six releases.
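One way to picture that back pressure is as a CI gate that fails the build when a pinned dependency has fallen too many releases behind, so a future security fix never requires a five-release jump at once. A minimal sketch, with hypothetical package names, release lists, and threshold:

```python
# Sketch of a "stay current" CI gate. Inventory data is illustrative only;
# in practice it would come from your dependency manifest and registry.

MAX_RELEASES_BEHIND = 1  # "the default needs to be: it's up to date"

# (package, pinned version, known releases in order) - hypothetical data.
INVENTORY = [
    ("example-json-lib", "2.3.0", ["2.3.0", "2.4.0", "2.5.0"]),
    ("example-http-lib", "1.9.1", ["1.9.1"]),
]

def releases_behind(pinned: str, releases: list[str]) -> int:
    """How many releases separate the pin from the latest release."""
    return len(releases) - 1 - releases.index(pinned)

failures = [
    f"{name}: {lag} releases behind ({pinned} -> {releases[-1]})"
    for name, pinned, releases in INVENTORY
    if (lag := releases_behind(pinned, releases)) > MAX_RELEASES_BEHIND
]

if failures:
    raise SystemExit("Dependency tech debt found:\n" + "\n".join(failures))
print(f"All dependencies within {MAX_RELEASES_BEHIND} release(s) of latest.")
```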

[00:16:13] GUY PODJARNY: That's super interesting. Just to first establish the baseline, you're saying you need to get your act together about dependency management and tracking what you are using in the first place – and we haven't even touched on delivering the SBOM to the customer. You need to know, because customers are going to demand it from you a lot faster.

Then second, and it sounds like you're partly complaining about this, you're saying it's beyond logic. It's just this general fear. Maybe, if I dare to say, it's also a little bit of mistrust from how some vendors in the past have tried to brush vulnerabilities that were real under the carpet, because it was easier than actually facing them. Now, there's a much more substantial demand to actually upgrade the libraries, regardless of whether they are exploitable or not.

[00:17:02] ADRIAN LUDWIG: Yeah. Which is super interesting. I go back 30 years in the software space. I would argue, early on, there was a lot of work on prioritising, and making sure that the security organisation was never sending along something that wasn't exploitable. Microsoft was right at the forefront of saying, basically, that asking our security team to prove that something is exploitable is a waste of time. We're just going to assume that it's exploitable. I think that hasn't really sunk into the relationship between security teams and engineering teams. There's still this constant, “Oh, how would you prioritise this? Oh, how would you demonstrate that it's real?” That kind of negotiation. What we're seeing now is that the customers, basically, are not willing to participate in that. They're just going to say, “Assume everything is exploitable.” And you have to go with that.

[00:17:48] GUY PODJARNY: You have to fix this. Yeah. Let's talk about the SBOM piece, because the other aspect of this is today you're expected to proactively provide that list of components to a customer. How do you handle that?

[00:18:00] ADRIAN LUDWIG: Yeah. SBOM is super interesting to me. I think there's a couple of different ways to think about it. One is that it produces a sense of transparency. In general, I think a sense of transparency leads to better alignment of incentives within the organisation that's creating a piece of software. That's a hugely valuable way to think about SBOMs: as soon as I have to describe what's inside a piece of software that I'm making, I'm going to be very, very careful about what's in that piece of software. That's the optimistic view: SBOM is good, and it will drive the effects that we want.

The other, more realistic but super negative way to think about SBOMs is that it shifts the responsibility from the company that's producing a piece of software to the consumer of that piece of software – to know what's in there and to make demands about it. That, unfortunately, is, I think, the overwhelming direction that SBOM is going. The consumers of SBOM, whether they be a large financial institution, a healthcare organisation, or a government, are the ones that are very excited about SBOM. I think the idea of managing software development at the level of dependency management, as an external party, is horrifying.

Hopefully, the optimistic one – in this instance, I'll say that's the correct side – will end up becoming more significant. Companies will do the right thing in order to avoid having lots and lots of painful discussions with customers. We'll see whether that incentive structure sets up. Right now, it's like, okay, produce an SBOM. Then it's going to get ingested by some other automated tool set, and then the security team at your largest customer is going to ask a bunch of questions. That, I think, is a recipe for a mess.

[00:19:46] GUY PODJARNY: Yeah. Tell me if you agree with this, but another way to think about the positive lens of it is like the food ingredients list on what you buy in the supermarket. If it's poisonous, then it's still the responsibility of the provider, whoever made that food, not to send it to you; they would be negligent, or breaking some other law, if they did. If you're allergic to nuts and the thing has nuts in it, then it's okay to roll that responsibility onto the consumer, as long as it's properly disclosed.

[00:20:16] ADRIAN LUDWIG: The metaphor, I think, is a good one. The question is whether the consumers of software – from an SBOM standpoint, which basically comes down to a security dependency management standpoint – are sufficiently different that what is healthy for one is poisonous for another. I don't know that I buy that analogy, because that analogy is absolutely the case only if the consumers are different. I just don't think we are. I think we're all pretty much the same.

[00:20:43] GUY PODJARNY: I think that's a really fair point. I guess that's the question we'll need to find out. This slips us indeed into that third bucket that I mentioned, which is being a consumer of software. With you in that seat now, you're also consuming software at Atlassian, and some of those vendors have started giving you SBOMs. What do you do with them?

[00:20:59] ADRIAN LUDWIG: Not a lot. Certainly, they exist. We could look at them. This is also, I think, where the market effects are super interesting. Any product that we're using at scale is one that's going to be validated by lots of other companies. Do I want to dedicate a bunch of my security organisation’s time and effort to making sure that everything is absolutely correct inside the SBOM of every one of our vendors? No. I don't think that is actually a sensible allocation of resources within our organisation. Do we have them? Yes. Do we look at them? Sure. Are we investing a ton of resources in optimising our analysis of them? No.

[00:21:37] GUY PODJARNY: I think that's a good answer. I will say that there hasn't really been anyone that I've spoken to yet that has a fully firm understanding of what they will do with them – maybe with the exception of, hey, if there was another Log4Shell, then at least they would know to go and query that list, in whatever way it's stored, and see if it's in there.

[00:21:56] ADRIAN LUDWIG: Yeah. You could do something with them. I mean, I think having the data is useful. It's hard data to understand and use in a reliable way.
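For Guy's Log4Shell example, the "something" is at least straightforward: query the stored SBOMs for the affected component. Here is a minimal sketch against a tiny CycloneDX-style document; the SBOM content below is invented for illustration.

```python
# Sketch: load a stored SBOM and check whether an affected component is in it.
# The CycloneDX-style document is a made-up example, not a real vendor's SBOM.

import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "group": "org.apache.logging.log4j",
     "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "group": "com.example",
     "name": "example-lib", "version": "3.2.0"}
  ]
}
"""

def find_component(sbom: dict, name: str) -> list[dict]:
    """Return every component entry whose name matches the affected library."""
    return [c for c in sbom.get("components", []) if c.get("name") == name]

sbom = json.loads(sbom_json)
for hit in find_component(sbom, "log4j-core"):
    print(f"Found {hit['group']}:{hit['name']} {hit['version']} - "
          f"check against the advisory's affected version range")
```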

[00:22:06] GUY PODJARNY: What are you going to do? You're going to come back to a vendor and say, “Yeah, you have this medium vulnerability over here. You need to fix it.” I guess that's precisely the behaviour that you were critiquing a moment ago. Is that really what you want the vendor to be spending their time on, versus building functionality that you need?

[00:22:24] ADRIAN LUDWIG: You probably don't have enough context to be able to analyse that.

[00:22:26] GUY PODJARNY: Just as a bit of a fun anecdote, one interesting use case for SBOMs is cyber-insurance. To an extent, you can use the profiles of SBOMs, especially as they evolve over time, to understand the security practices of the organisation, which I thought was interesting. But I don't think you're going to do that as a vendor. Maybe your cyber-insurance provider will start asking for those SBOMs to assess the price of your premium.

[00:22:50] ADRIAN LUDWIG: Yeah. I don't think we've figured out the magic bullet yet for understanding the level of risk that exists inside of an organisation. But maybe the insurers will figure that out.

[00:22:58] GUY PODJARNY: You've already had a lot of healthy practices. When you look at it more from a change perspective – a couple of years ago versus now, again with that lens of software supply chain security – how have your practices changed? Which priorities have gone up that maybe a couple of years ago you would have kept a bit lower?

[00:23:16] ADRIAN LUDWIG: Definitely, investment in dependency management, regardless of whether there's a known vulnerability, has gone up. Investment in simplification of dependencies has gone up. I have a dream, not to – That dream is one where, as soon as a dependency is out of date, we immediately apply a patch that updates that dependency and push it. There's sufficient, robust A/B testing and validation within our dev pipeline that that can happen. If something breaks – it turns out, engineers are really, really good at fixing reliability issues, and they are very proactive about it.

Shifting the entire discussion of dependency management and supply chain management into one where we're willing to create outages, because we have sufficient confidence in our A/B testing and our automated rollout system that it happens infrequently. This way, we can just solve dependency management, and all the related toil can go away. That's the dream for me: we get to a point where all the libraries are up to date all the time. All the components are updated all the time.

[00:24:15] GUY PODJARNY: There was an interesting counterpoint, actually, with the malicious components right now run, like it still promotes the same automation on it, but it's just about precisely how quickly do automate? At Snyk, we actually ended up, our automated upgrade pull requests actually opened up some delay. 

Snyk tries to address this malicious component problem by delaying the proposed upgrade when you're using it to upgrade your dependencies to new versions, and only opening it up, I think it's 30 days later or something like that, so that you are less susceptible to a malicious component. If one came along, the ecosystem would flush it out in that time.

That functionality has an exception asterisk, though: unless there is a vulnerability in the component. If a new version comes out fixing a vulnerability that has just been disclosed, the pull request opens right away as a security fix, because you do want to adopt those fixes as quickly as you can, before the exploits happen.
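As a sketch of that policy – illustrative logic only, not Snyk's actual implementation – the decision comes down to a single branch: security fixes open immediately, everything else waits out a maturity window.

```python
# Illustrative upgrade-timing policy: routine upgrades wait out a maturity
# window so a malicious release gets flushed out by the ecosystem first;
# security fixes open immediately, since exploitation tends to follow
# disclosure within hours or days.

from datetime import datetime, timedelta, timezone

MATURITY_PERIOD = timedelta(days=30)  # "30 days later or something like that"

def should_open_upgrade_pr(release_date: datetime,
                           fixes_known_vulnerability: bool,
                           now: datetime | None = None) -> bool:
    """Decide whether an automated upgrade pull request should open now."""
    now = now or datetime.now(timezone.utc)
    if fixes_known_vulnerability:
        return True  # the exception asterisk: security fixes go out right away
    return now - release_date >= MATURITY_PERIOD  # let routine releases age

# A two-day-old routine release waits; a two-day-old security fix does not.
fresh = datetime.now(timezone.utc) - timedelta(days=2)
print(should_open_upgrade_pr(fresh, fixes_known_vulnerability=False))  # False
print(should_open_upgrade_pr(fresh, fixes_known_vulnerability=True))   # True
```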

[00:25:17] ADRIAN LUDWIG: I mean, what we've seen in terms of the timeline from disclosure to exploitation is hours, a day or two. It's not days, and it's not weeks. I think differentiating a known vulnerability – prioritise that – versus no known vulnerability, and then slowing it down, I can see that. That makes some sense. That also probably gives you some coverage on reducing the amount of churn: only update once a week, unless there's a known vulnerability, or something like that. If everybody's on a non-synchronised schedule for doing that, then you reduce the likelihood of someone being able to successfully roll out a modified version of a library at scale.

[00:26:01] GUY PODJARNY: This is super useful – practices around what to prioritise and what to do next if you're leaning into supply chain security. Let's take a moment and look at the industry outside of Atlassian. You've been a member of the OpenSSF board for the last bunch of months. A lot of activities, a lot of work happening over there. Which projects or activities do you think are currently the most important – maybe the most worth tracking for someone who's not living supply chain security?

[00:26:31] ADRIAN LUDWIG: Great question. I mean, it's this interesting space where, because there's no charter for open source, and there's no unifying organisation, etc., all of the things are missing in varying degrees across different parts of the community. There are certainly some parts of the open-source community where they have no notion of incident response. They have no ability to respond when a vulnerability gets identified. There are other parts where they have great incident response, but perhaps they have no training.

I think one thing that OpenSSF has done is try to encapsulate what all of the potential things are, and then begin making progress against them in parallel, which is definitely a broad approach. Hopefully, it can help in each of the different areas. The areas that I think have received the most attention are around flow of information and then prioritised review. With SBOM, for example, how do you know which things are relevant to you? How do you identify which libraries you have dependencies on? I think that's a pretty important one. Just understanding the pieces of code that you're using, what those dependencies are, and how they relate to one another – that's probably among the most important things we're working on, trying to get that in place.

[00:27:42] GUY PODJARNY: When you think ahead – maybe a little less practical, a little further out into the future – which projects do you think are the most aspirational, the most promising? If they pull this off, if we pull this off, it would really make a difference.

[00:27:56] ADRIAN LUDWIG: It's shocking to me that some of the basic exploitation mechanisms, buffer overflows, have been around for 50 years. Heck, I've been exploiting them for 30 years. Yet, we continue to write core parts of our technology in languages that have problems, using methods that are known to have problems. I think a longer-term transition away from known risky languages and methods to less risky ones, like Rust, is really attractive in that regard.

It's odd to me that we prioritise performance so highly and, at the same time, have layer upon layer upon layer upon layer of abstraction that just destroys performance. I think that as a long-term bet is going to be really, really valuable: getting a clearer security contract baked into the languages and tools that people are using.

[00:28:48] GUY PODJARNY: Yeah. You're indeed answering my question about aspirational, because it's a bit hard to figure out exactly how you engage with these massive, critical, central projects and rewrite them in a different language. There's an ambition there to do that as one of the tracks.

Adrian, this has been great – a great journey into supply chain security, your views on it, and how you apply it at Atlassian. Maybe one last thing: my regular open-ended question that I ask at the end. Since we focused on supply chain security so much here, let's narrow into that world. If you had unlimited budget, unlimited resources to tackle one problem in this world – and software supply chain security is a pretty big one – what would you tackle? And, if you have thoughts, which path would you take to tackling it?

[00:29:36] ADRIAN LUDWIG: As we've seen it so far, we tend to think of ingestion of open-source projects as just a win. What we lose track of is that there's a long-term management cost associated with that. The more we can automate that management, the better. In my world right now, the way that looks is getting to the point where having the most recent, most up-to-date version is the default. Whereas right now, the default is you have the version that you ingested 10 years ago, or 6 years ago, or 5 – depending on how old your organisation is, at some point, somebody pulled in that version. It's just there, hidden in your repos somewhere, and you didn't know it was there, until you run a security scanner and it's like, “Whoa, that's scary.” Then you're like, “How do I undo N years of neglect and get it to the point where it's up to date?” I think the default needs to be that it's up to date.

[00:30:29] GUY PODJARNY: Is that aligned with the serverless ethos, the sort of platform-as-a-service model, in which you consume it and something swaps it underneath as it runs? Or are you envisioning a software version of it, where it works even if you're hosting it in your own setup?

[00:30:41] ADRIAN LUDWIG: I think it's going to depend on your specific environment. It could be serverless, right? It could be that it's in the images, and then you don't actually invoke anything directly. That could work. I think the reality is most places that I've interacted with are also ingesting a lot of things as source. Figuring out how to manage it as source is also probably important. Regardless, we have to flip the model so that the default is that it's up to date, as opposed to the default being that it's not up to date.

[00:31:08] GUY PODJARNY: Now I’m curious about the second. What’s that? What's the second sort of ambition?

[00:31:12] ADRIAN LUDWIG: Legitimate security boundaries between those components. You're able to have confidence that any exploitation is isolated and contained only within those areas. Maybe, again, maybe that's base image, and the base images are actually isolating them out, I don't know. In a serverless model, maybe it's something else. That level of isolation would be amazing. If you could both know that your components are up to date and that any compromise of them is contained within just that component.

[00:31:40] GUY PODJARNY: Indeed. Indeed. This has been great, Adrian. Thanks for coming onto the show again.

[00:31:44] ADRIAN LUDWIG: Happy to be here. Yeah. This was fun.

[00:31:46] GUY PODJARNY: Thanks, everybody, for tuning in. I hope you join us for the next one.

[END OF INTERVIEW]

[00:31:54] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode.

Bye for now.

[END]