Season 4, Episode 27

Open Source Security With Jeff McAffer

Guests:
Jeff McAffer

Jeff McAffer: “You might have noticed that Microsoft is changing its views on open source. We've gotten from using dozens of pieces of open source a few years ago to literally, using hundreds of thousands of different things across millions of different usage points within the company. Just because you're using something that has a vulnerability doesn't mean you're subject to that vulnerability. You typically have to use it in a vulnerable way. What's interesting is to look and say, ‘What better assessment information can we have? How can I tell as a user if I am vulnerable? What are the characteristic usages that are vulnerable?’”

[INTRODUCTION]

[0:00:36] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow. It is a part of the Secure Developer community. Check out thesecuredeveloper.com for great talks and content about developer security, and to ask questions, and share your knowledge. The Secure Developer is brought to you by Heavybit, a program dedicated to helping start-ups take their developer products to market. For more information, visit heavybit.com.

[EPISODE]

[0:01:09] Guy Podjarny: Hello everybody, welcome back to The Secure Developer. Today, I have Jeff McAffer from Microsoft with me on the show. Welcome, Jeff.

[0:01:14] Jeff McAffer: Thanks for having me.

[0:01:17] Guy Podjarny: Thanks for coming on the show.

[0:01:18] Jeff McAffer: Sure.

[0:01:18] Guy Podjarny: Jeff, before we dig in, can you tell us a little bit about yourself? What do you do, a little bit of history of how you got there?

[0:01:23] Jeff McAffer: Yes. My role at Microsoft is I run the open-source programs office. We help drive policy, and process, and tools, and the culture change across the company. You might have noticed that Microsoft is changing its views on open source. We try to help mature our viewpoints on open source and make the practices easy and smooth, because there's a lot to do when you're using open source. It's not free. You've got to do work to do it well, both our releasing and our consuming of open source.

That's what I've been doing for the last four years or so. Historically, I did a bunch of stuff in open source. I was one of the original guys on Eclipse, and I did a bunch of work in that space and spent some other time at Microsoft doing a few different things. But that's been my recent past, it's all been driving the open-source program.

[0:02:08] Guy Podjarny: Does that qualify as going to the dark side? Going from doing open source to managing open source?

[0:02:13] Jeff McAffer: I used to work for IBM doing Eclipse work. Of course, when I ended up leaving and joining Microsoft, because this was seven years ago, that was the dark ages for Microsoft and open source.

[0:02:22] Guy Podjarny: Yes. Pre-Microsoft culture shift.

[0:02:26] Jeff McAffer: It was an interesting change, but it's a super exciting place to be right now with all the changes. There's a lot of stuff happening and a lot of evolution happening right now too.

[0:02:35] Guy Podjarny: Cool. That's cool. What you do today is manage these open-source programs inside the company. Maybe let's double-click on that a little bit – that’s a Microsoft analogy there. What does that mean? Do you have a team there? How does that work, managing –

[0:02:52] Jeff McAffer: We've got a modest-sized team. About a dozen people that are in my team directly, and we do everything from looking through the legal policies. When we started this office a few years ago, a developer at Microsoft had to answer 20 skill-testing questions if they wanted to use a piece of open source. It was really prohibitive. We went through it with our legal partners, reviewed everything, all the policies and whatnot and just trimmed it down.

To make a long story short, we've gone from using dozens of pieces of open source a few years ago, to literally using hundreds of thousands of different things across millions of different usage points within the company. We have to track all of that, and understand all the licence compliance issues, comply with all the licences, understand all the vulnerabilities, and try to make our devs aware of that, and track where they've gone and everything like that. That's one of the big challenges we've been facing in the last few years, and we do that in partnership with the security team, the legal team, and the product teams across the company.

[0:03:52] Guy Podjarny: What does it look like today? If a developer wants to consume a piece of open source in Microsoft, what happens?

[0:03:59] Jeff McAffer: We have streamlined that to the extreme, and our mantra is, “Eliminate, automate, delegate.” First, we eliminate any policies or questions or friction that we can. We work really hard to understand exactly what the risks, the opportunities, and the trade-offs are, and if we can eliminate any of the challenges there, we just get rid of them. We write policies now that are highly automatable.

So, you could write a policy that says, “Jane has to review everything,” and that's not automatable, because there aren't that many Janes. But we write policies that are highly automatable, that can take in data and context and spit out an answer. That's our automate phase. Then there are some things where, when you get to a certain point, there's a risk we're unsure of, and you need to pop out and have a human engage – a lawyer, or a business person, or something. That's the delegate part.

We've gotten to the point now where, with integrations into our build systems, these automated policies, and good data, about 99% to 99.7% of our open-source usages are automatically detected and automatically flow through our policy engines with no humans whatsoever.
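The "automatable policy" idea described here can be sketched as a pure function over usage data: context in, decision out, no human in the loop. This is a hypothetical illustration – the field names, licence list, and rules are assumptions, not Microsoft's actual policy schema.

```python
# A hypothetical "automatable policy": a pure function that takes the usage
# context as data and returns a decision, with no human in the loop.
from dataclasses import dataclass

@dataclass
class Usage:
    package: str
    license: str             # SPDX identifier detected for the component
    is_dev_dependency: bool
    known_vulnerabilities: int

# Illustrative allow-list; a real policy would be far richer.
APPROVED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause", "ISC"}

def evaluate(usage: Usage) -> str:
    """Return 'allow' when no human is needed, 'review' to delegate."""
    if usage.license not in APPROVED_LICENSES:
        return "review"      # delegate: a lawyer engages
    if usage.known_vulnerabilities and not usage.is_dev_dependency:
        return "review"      # shipping code with known vulnerabilities
    return "allow"           # fully automated: no human involved

print(evaluate(Usage("left-pad", "MIT", False, 0)))  # allow
```

The point of the sketch is the shape, not the rules: because the decision is a function of data and context, it can run inside a build pipeline at any scale.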

[0:05:05] Guy Podjarny: So, let’s say, if I'm a developer and I wanted to use whatever, some NPM package, or some NuGet package, I just do it?

[0:05:10] Jeff McAffer: Just do it.

[0:05:10] Guy Podjarny: Then the build system will scrutinise whatever –

[0:05:11] Jeff McAffer: The build system figures it out and detects it, and runs it through our policy engine, all in real-time, I'll say. What comes out of it, we've moved away from a request and approve model, because that is pessimistic. To more of a register and review model. It just gets registered. We take note of the fact that you've got it, that you're using that open source, what version, et cetera, and try to figure out the scenario. What product is it going into? Is it a dev dependency? All that stuff. Then, run it through the policy engine. It comes out and says either, you need to answer some more questions and get a review or you're good to go.
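The register-and-review model Jeff contrasts with request-and-approve can be sketched as follows: the usage is recorded first, unconditionally, and only then does a policy check decide whether follow-up is needed. Every name and field here is invented for illustration.

```python
# Register first, review second: the build reports each open-source usage,
# it is recorded regardless of the outcome, and a policy check then decides
# whether the developer needs to answer more questions. (Hypothetical sketch.)

registry = []  # in reality, a service tracking usages across the company

def register_usage(repo: str, package: str, version: str, scenario: dict) -> str:
    record = {"repo": repo, "package": package, "version": version,
              "scenario": scenario}
    registry.append(record)  # registered no matter what the policy says
    # Illustrative rule: shipping product code warrants a review;
    # dev dependencies are waved through.
    if scenario.get("product") and not scenario.get("dev_dependency"):
        record["status"] = "needs-review"
    else:
        record["status"] = "good-to-go"
    return record["status"]
```

The optimistic ordering is the whole idea: nothing blocks the developer up front, but the organisation still ends up with a complete record of what is in use.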

[0:05:49] Guy Podjarny: Would it break the build? If it didn't get a clean bill of health, would I find out because you e-mailed me out of band? Or would my system just stop working?

[0:05:58] Jeff McAffer: There are lots of different dimensions where teams can dial the knob where they want. We can technically break the build; typically, we don't. We couldn't do that in a central way, because teams are all at different points in their shipping cycles, and some risks are tolerable, and all that sort of thing. We don't want a vulnerability to come into our database and suddenly everybody's build breaks and they can't ship anything. That's really disruptive.

We tend not to do that. You get in-experience warnings. Build warnings, build errors, that sort of thing. You also get alerts in the services that we offer, like in our git services. But we've also shifted left, if you will, all the way to the point where you get the warnings in VS Code.
If you're taking a dependency on something that has a vulnerability, you get little red squigglies in VS Code that tell you about that. But we've also gone further left, into the browser: when you're browsing npmjs.com, or NuGet.org, or whatnot, you get a big red box if there's a vulnerability in that thing. That's a Microsoft-specific dataset feeding it, and it will give you information about vulnerabilities or licence issues or whatnot, so that right when you're choosing the component, you can choose wisely.

[0:07:11] Guy Podjarny: This is like an extension?

[0:07:13] Jeff McAffer: Yes. A browser extension.

[0:07:14] Guy Podjarny: It's something that's just installed by default on all –

[0:07:16] Jeff McAffer: It's optional right now, but it's available to everybody and it's got pretty wide usage.
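The "choose wisely" check that the editor squigglies and browser extension perform boils down to a lookup of the exact component against a vulnerability dataset at the moment of choice. A toy sketch – the dataset contents and function name are made up:

```python
# Before a developer picks a component, check the exact package@version
# against a vulnerability dataset and surface warnings. Illustrative only.

VULN_DB = {
    ("npm", "regex-helper", "1.0.2"): ["ReDoS in pattern parser (hypothetical)"],
}

def check_before_choosing(ecosystem: str, package: str, version: str) -> list:
    """Return the list of known issues for this component, empty if clean."""
    return VULN_DB.get((ecosystem, package, version), [])

print(check_before_choosing("npm", "regex-helper", "1.0.2"))
```

Surfacing this at browse time, before the dependency is ever added, is what makes it "further left" than a build warning.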

[0:07:22] Guy Podjarny: It sounds like you've done this combination – you've built a lot, right? You embedded a lot into those processes. What's your criteria – you have a bunch of this tooling – for when you choose to invest your own development resources and build those components versus taking off-the-shelf software?

[0:07:38] Jeff McAffer: That's a really good question. If you pan back four or five years ago, when we first started down this path, we were starting from a traditional workflow that was almost inherently point-and-click for users. There was no way that was going to scale, and there wasn't much available at the time that was going to scale either.

We found, with the combination of that and a bunch of the quirks of the Microsoft code bases and engineering systems, that there weren't a lot of tools we would be able to integrate with. They simply weren't available. So, we headed down a path of building pretty much everything.

Could we have done it differently? Maybe, I'm not sure. There have been a lot of good advances in tools out there now, and for some of the things we've got internally we're thinking, could we make that a product? Should we make that a product? We'll see how that unfolds. And many of the things we're doing, we open source as well. In terms of my team, we've put a number of different elements out there that we've actually just made open source, because they're never going to be a product, per se.

[0:08:39] Guy Podjarny: There's something, there's some good karma element of it when you're consuming. Generally speaking, you shouldn't be open-sourcing stuff, but specifically you would deal with the consumption of open source. There's something extra right about open sourcing.

[0:08:51] Jeff McAffer: The irony of the open-source programs office not open-sourcing things was not lost on us.

[0:08:55] Guy Podjarny: Indeed.

[0:08:56] Jeff McAffer: So, we try to open source as much as we can, but some things we can't.

[0:09:00] Guy Podjarny: Let's dig into that a little bit. You teed it up – you said there are a bunch of these tools that you have, or that you're open-sourcing. Do you want to talk about some specific tools?

[0:09:08] Jeff McAffer: Most of those are not security-related, but things like managing our presence on GitHub. We've got tens of thousands of developers and repos on GitHub, across a hundred orgs, and stuff like that. Managing all of that is a real challenge – trying to keep all the cats in line and take care of all of that. Everything from that to also monitoring GitHub. We've written a few things that harvest data from the APIs and give us a good perspective on what our devs are doing, what the community is doing, how the projects are working, all those sorts of things.

A lot of stuff in that space is where we've been driving. I've got a project that we started called ClearlyDefined that is trying to crowdsource licence metadata, because that turns out to be a really big problem, and there are potentially some security angles on it.

[0:09:55] Guy Podjarny: We can dig into that in general around community, because we're going to shift to that a bit later on. But let's talk about – this is, at the end of the day, The Secure Developer – the vulnerability-handling aspect of the process. Walk us through the system a little bit. You have these components; what happens when a new vulnerability gets disclosed? What's the alarm bell?

[0:10:15] Jeff McAffer: In something we're already using, you mean?

[0:10:18] Guy Podjarny: Exactly. There's some new Struts vulnerability, or the equivalent? Struts is our poster boy these days for something getting vulnerable. What happens next?

[0:10:26] Jeff McAffer: There's two scenarios that we see there. One is it's in something that's shipped already and isn't being built again. It shipped last month or whatever, and maybe the team has moved on to a new branch or a next release or something. Then there's stuff that's actively being developed.

Most of our tooling is integrated at the build time. I mentioned the shift left stuff, so for people who were browsing and looking for things, that's when you're doing active development. That's an idea that will prevent you from getting the vulnerabilities into your code in the first place.
But in this case, we've already taken a dependency on Foo version 1, and then some new vulnerability in Foo has been discovered. If it's being built, we'll get the alert either way, because the build has already happened, and we know that Foo version 1 is there.

[0:11:09] Guy Podjarny: So, for starters, you track that bill of materials –

[0:11:10] Jeff McAffer: As soon as we see it in the build, we track it in a system that's tracking millions of different use sites across the company's code base. As soon as we see that, an alert gets raised in the engineering system. By alert, I mean a user-visible banner across the UI if they're looking at the website in Azure DevOps. Most of what we do is in Azure DevOps, because, go figure, we sell these products. So, we use them too.

[0:11:35] Guy Podjarny: Indeed.

[0:11:38] Jeff McAffer: It's integrated into Azure dev ops, but we also have facilities for
emailing and getting reports, and you as an individual, can go to this dashboard and it's personalised to every individual that goes to see, “What are all of the vulnerabilities in any repo that I'm responsible for?” You could just go and click on that link, and it shows you, “There are seven vulnerabilities with these criticalities, or severities, and here's the repos they're in.” That sort of thing.
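The personalised dashboard described here essentially filters a company-wide alert stream by repo ownership. A toy version, with invented data shapes and repo names:

```python
# Filter company-wide vulnerability alerts down to the repos one person owns,
# then summarise by severity. All records here are invented for illustration.
from collections import Counter

alerts = [
    {"repo": "team-a/web", "package": "lodash", "severity": "high"},
    {"repo": "team-a/web", "package": "minimist", "severity": "medium"},
    {"repo": "team-b/api", "package": "struts2-core", "severity": "critical"},
]

def my_vulnerabilities(owned_repos: set, alerts: list) -> list:
    """Return only the alerts for repos this person is responsible for."""
    return [a for a in alerts if a["repo"] in owned_repos]

mine = my_vulnerabilities({"team-a/web"}, alerts)
by_severity = Counter(a["severity"] for a in mine)
print(len(mine), dict(by_severity))  # 2 {'high': 1, 'medium': 1}
```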

[0:12:02] Guy Podjarny: From an ownership perspective, whose responsibility is that? Is it on the dev team to go to the portal you provide – it's their responsibility? Is it the security team that's trying to push it? Is it yourself?

[0:12:15] Jeff McAffer: That's a good question. I skipped over that part. When we get a new vulnerability, it depends on the severity. All of that happens regardless – you get the alerts and all that sort of thing. If it's a high-severity vulnerability, then we have a whole part of the company that exists independent of open source, the Microsoft Security Response Center, and they will engage on the high-severity things too. Severity has a number of different dimensions – some particular to the actual vulnerability, some to the business case: what product is it, and stuff like that.

But they will engage at the higher severity levels and drive a whole incident-response process, where they've got hotlines set up, and we're figuring out which customers have it, or which data centres have it. All that muscle gets invoked pretty much – I won't say automatically – but it's a very practised process at the company.

If it doesn't fall into that category, then we have a set of standards in our development process that talk about vulnerabilities, with an SLA around them being fixed. It does go back to the dev teams. They are made aware, through the notifications and alerts and whatnot, that they have these vulnerabilities; then they have a time period to address them, and various dashboarding and reporting things help them stay in the SLA.
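A severity-based fix SLA like the one described can be sketched as a simple deadline calculation. The day counts below are assumptions for illustration, not Microsoft's actual standards.

```python
# Map each severity to a remediation window and compute deadlines.
# The thresholds are illustrative assumptions only.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def fix_deadline(reported: date, severity: str) -> date:
    """Date by which a vulnerability of this severity must be fixed."""
    return reported + timedelta(days=SLA_DAYS[severity])

def in_sla(reported: date, severity: str, today: date) -> bool:
    """True while the team is still inside its remediation window."""
    return today <= fix_deadline(reported, severity)

print(fix_deadline(date(2024, 1, 1), "critical"))  # 2024-01-08
```

Dashboards can then be driven off `in_sla`, which is the kind of reporting the transcript alludes to.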

[0:13:33] Guy Podjarny: Cool. So, the different levels are: one set of security checks is about active development – when you add a component that has some security problem, or licence problem for that matter, it would flag. You'd get notified, you'd engage, right when you added it. Second is when a new vulnerability gets disclosed: if it's high severity, it goes to the Security Response Center, and they make a determination, based on whatever information is available to them, whether to sound the alarm or not. And the health element of doing that over the long term comes down to that SLA definition.

[0:14:10] Jeff McAffer: Sure. Like I say, for the lower-severity things, that's just business as usual for the development teams. Independent of open source again, they've always had this heartbeat or drumbeat of: there's a vulnerability in something you're shipping or building, you've got to go and deal with it. It's almost business as usual. It's just that the volume goes up, because we're using so much other code that the teams themselves didn't write.

[0:14:32] Guy Podjarny: You own more code than you're able to write.

[0:14:35] Jeff McAffer: Exactly. Absolutely. That's the wonder of open source.

[0:14:38] Guy Podjarny: Indeed. Cool. Let's shift gears a little bit. Thanks for that. This outlines the way that you manage and control open source. You also open source a lot yourselves, and you work a lot with the providers, with the maintainers, with the people writing open source. How do you find them? Are there projects you're more happy to use than others? What makes you happy in an open-source project that you see?

[0:15:05] Jeff McAffer: It's really interesting. It varies quite a bit from ecosystem to ecosystem, and there are certainly ecosystems where there's more trust, because they've shown historically that they're more attuned to security issues. We feel more confident about those.

Generally speaking, when somebody is choosing to use an open-source component, individual devs at their desktops go and look at various aspects of it, and we try to guide them to look at security-related topics. Hopefully, to a large extent, they do. But the interesting thing is the producers – we produce a lot of open source ourselves, so we're in the same boat.

There's some simple things that help folks understand what's going on from a security point of view, and it can be anything as simple as having a clear statement about how to report security issues. That signals a bunch of things. One, obviously it tells you how to report security issues, but the other is it tells you that this project thinks about security. They understand that security is a topic. That they have put in place a process and it might be through their umbrella foundation. They might have done it themselves, on their own. Either way, I feel more confident now as a consumer and somebody who is looking to engage with that project, that I can talk about security issues with them, that I have the means of doing that and that they're receptive to it.

[0:16:25] Guy Podjarny: Yes. We observed in our State of Open Source Security report – not from this year, but the one we did last year – where we asked, “Do you have a disclosure policy?” The statistics showed very clearly that if you have a disclosure policy, you're more likely to get reports. It sounds obvious, but it's statistically verified that you will get more reports, because you're guiding people about where to go. And you're right about the security consciousness – it's like you've taken a moment to think about security.

[0:16:53] Jeff McAffer: An interesting full disclosure to our listeners here, Microsoft has not done a great job in that regard. A lot of our repos don't have these kinds of disclosures on them, and that's one of the things that we're working on in the next few months.

[0:17:06] Guy Podjarny: Well, it keeps you busy.

[0:17:07] Jeff McAffer: Getting all that stuff into shape and really clarifying for people how to engage.

[0:17:11] Guy Podjarny: I love it. A lot of the Apache projects I've observed have a good security.md file there –

[0:17:15] Jeff McAffer: Yes. Apache, Eclipse, the Linux Foundation – there are a lot of good, high-quality projects out there that are attuned to security issues. One of the other things we found interesting is when you get these vulnerability reports in. Oftentimes these days, it's easy to get a thousand npm packages on your machine, or in some Docker image or something like that. It's easy to consume tons and tons of open source, and the dependency graphs get really deep. It's easy to have a vulnerability in something that's 10 levels deep in your dependency graph that you've never heard of, because you used this thing at the top and it brought in this thing at the bottom.
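The deep-dependency situation can be made concrete with a small graph walk: finding how a vulnerable package several levels down actually got pulled in. The graph and package names below are invented.

```python
# Breadth-first walk over a (made-up) dependency graph that reports the path
# from your application down to a vulnerable package deep in the tree.
from collections import deque

GRAPH = {
    "my-app": ["web-framework"],
    "web-framework": ["template-lib"],
    "template-lib": ["string-utils"],
    "string-utils": ["regex-helper"],   # vulnerable, four levels down
    "regex-helper": [],
}

def path_to(root: str, target: str, graph: dict):
    """Return the dependency chain from root to target, or None."""
    queue = deque([[root]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for dep in graph.get(path[-1], []):
            queue.append(path + [dep])
    return None

print(path_to("my-app", "regex-helper", GRAPH))
# ['my-app', 'web-framework', 'template-lib', 'string-utils', 'regex-helper']
```

It's exactly this chain that makes remediation hard: fixing `regex-helper` means new releases all the way back up.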

One of the things that becomes interesting is that a lot of these vulnerabilities are things like DoS vulnerabilities, and whether you're vulnerable or not is something you need to go and look at. There are a couple of things there. One is, if you understand a little bit about the architecture of the project – if projects help people understand that – you can understand simple statements like, “We don't take any regular expressions from outside. Our APIs have no regular-expression surface area.” Then that whole class of vulnerabilities is immaterial to that project.

Just because you're using something that has a vulnerability doesn't mean you're subject to that vulnerability. You typically have to use it in a vulnerable way. There are almost no packages that we blacklist as just outright bad – it's only the ones with malware or something. It's all based on how you use it. Projects having a bit of a security-oriented architecture discussion is super useful, because as a consumer, and as somebody looking to engage with a project, I know how data is being treated and how code is being executed. I can come and help find vulnerabilities. I can be more confident in my use of it, et cetera.

[0:19:06] Guy Podjarny: At Snyk – Snyk is free for open source and all that, so a bit of flag-waving here – we have the ability to put up a badge that says whether your dependencies are vulnerable, and it always felt almost counter-intuitive for people to use it. We put it out there and weren't sure whether people would adopt it. At this point, there are many thousands of repositories with this badge that says how many vulnerabilities they have, and it feels almost like, “Hey, hold on. Why are you advertising that?” But really, what they're saying is, “We aim for that to be green.” They can manage their vulnerabilities and state as much in the repository.

Presumably, when you consume such an open-source project, you're able to say, “Okay, they have a bunch of these dependencies, but to the extent of their ability to assess whether they're vulnerable or not, they've stated that they're not.” They've assessed their vulnerability status, and that's better than being on your own.

[0:19:50] Jeff McAffer: It comes back to that thing we were talking about a little bit earlier: if you have a description of your security policy, it means you think about security. Already, you've got half marks, right? That statement puts you way above most of the other projects out there – you're already ahead of the game by doing that. That's super useful.

[0:20:07] Guy Podjarny: Cool. Those are two. Are there other practices? Like the security disclosure, informing about the way you consume input, what risk factors might apply to you?

[0:20:18] Jeff McAffer: Of course – proactively reporting your vulnerabilities. My understanding – I don't have the concrete stats – is that the majority of vulnerabilities out there today are not in the central databases and that sort of thing. They exist in some issue or pull request, and maybe they're not even called out explicitly; the dev just went and fixed the problem.

Actually reporting those things – whether through the standard CVEs and whatnot, or some other way – surfacing the fact that you had a vulnerability, doing it responsibly and respectfully, but surfacing that there was a vulnerability in some version and that it's now fixed in some new version, is also something that clearly helps with the problem.

Back to our earlier discussion about dependencies and the badges and that sort of thing – actually going and proactively knowing what your complete chain is. What's your user's viewpoint? As a project producing a component, people are going to go get it – they're going to do npm install, or whatever the verb is – and they're going to get 20, 30, 100 other things. So, understand the shape of that from a security point of view and from a licencing point of view. There's this term they use for diseases – an asymptomatic carrier, that's the term – where you carry the disease but aren't affected by it yourself. You could carry a vulnerability or a licencing issue that doesn't affect you, but you're going to subject all of your users to it.

So, understand what your users are going to get when they use you, and what the security status of those projects is. A lot of times, when we come across these vulnerabilities in deep dependencies, it becomes hard for somebody to fix them. You might be able to get a new version of the thing that's vulnerable, but then the thing that's consuming it needs to update, and then that's a new version and the thing above it needs to update, and you need to walk all the way up the chain, or do some patching down at the low level – and there are some cool tools that do that.

[0:22:13] Guy Podjarny: We try to help in that space. But I agree with you on this notion of owning it. It almost comes back to ownership: as an open-source maintainer, you chose to use some open-source components, so you need to show some modicum of ownership for those components, and understand that you need to be tracking them and reporting on them. Because, for all intents and purposes, you are distributing that code, and you don't want to be distributing vulnerabilities.

[0:22:37] Jeff McAffer: By no means am I trying to offload all of the work onto the project teams. As a large consumer of open source, we want to be able to engage and help teams become more and more secure. You signalling that you're willing to do that is a good sign that we'll come and help with it. It's an engagement. It's a bi-directional collaboration.

[0:22:59] Guy Podjarny: All of those are great components, and I do think there is more awareness. I would bet that there are more security.md files today than there were 5 or 10 years ago. So, the conversation is happening. What other means are there? You mentioned ClearlyDefined and those projects. Let's talk a little bit about a more structured way of contributing or sharing such knowledge.

[0:23:23] Jeff McAffer: I mentioned ClearlyDefined earlier –

[0:23:26] Guy Podjarny: Yes. Maybe give us the general overview on what this is?

[0:23:28] Jeff McAffer: The current focus is on crowdsourcing licence data; the current focus is not on security. But the general premise is that we've got tools and capabilities that we can run to do automated work on open-source components – in this case, identifying licences and copyright holders and whatnot – and put that data out there for people to consume. Right now, just like it's hard to figure out what vulnerabilities there are in a component, it's hard to figure out the licences, the copyright holders, et cetera. So, complying with the licences is hard.

We've tried to automate that and put a bunch of tooling in place. The tooling is not perfect, because humans are humans and tools are tools, and we don't always get all the data, but we make it available for curation. People can come and update the values – if the licence is wrong, they can come and fix it. That gets reviewed like any open-source contribution and subsequently gets merged into the definitions of open-source components, and hopefully upstreamed to the original components, so that future versions are more clearly defined, as we say.

[0:24:26] Guy Podjarny: That is done by the maintainers of ClearlyDefined?

[0:24:29] Jeff McAffer: There's a curator community. You can go to ClearlyDefined.io and see a component that you know and like – or maybe you're the owner of that component – and notice, “Crap, in version 1.3, we forgot to put the licence in the package file.” You can fix it there and submit that as a pull request, and it's all automated, so it's nicely done. Then the curator community will say, “That's really cool. We'll merge that in.” It becomes part of the corpus of data, and then we upstream that back to the original project, so that version 1.4, when it comes out, is more clearly defined.

Taking that – that's all licencing-related – and trying to apply it in the security world, we have this notion of Clearly Secure, and it's very nascent. I'd love for your listeners to help us figure out what that could or should be. By all means, come to the site – you can join the Google group and send us your thoughts.

[0:25:25] Guy Podjarny: This is ClearlyDefined.io.

[0:25:28] Jeff McAffer: Exactly. ClearlyDefined.io. What we're thinking so far – and again, it's very nascent – is simple things. Many of the people we work with, including ourselves, have developed mappings from component identities to the CVE or CPE identities in the databases. It may not be obvious to everybody, but it's not as easy as just going to the database and seeing if Foo version 1 has a vulnerability. You actually have to do work to figure that out.

A lot of people have independently developed these mappings, so why are we doing this independently? We're all in open source – let's collaborate and have a central place where we can develop these things. Then, of course, upstream that, if you will, back into the databases, and help work with the database communities to make the data better at the source. Then there are other things, like the underreporting of vulnerabilities we talked about. How can we make it easier for projects to report in a very simple way? You could imagine a #vulnerability tag that gets put into your issue, or something like that, so when you merge that pull request, it's automatically hoovered up by ClearlyDefined and put into a database. Now, you can subscribe to that feed of vulnerabilities.
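The identity-mapping problem comes from vulnerability databases keying entries by CPE while package managers use their own coordinates, so every consumer ends up maintaining a translation table. A sketch of the shape of such a mapping – the entries are illustrative, not an authoritative dataset:

```python
# Translate package-manager coordinates into the CPE identifiers that
# vulnerability databases use. Entries are examples of the mapping's shape.

CPE_MAP = {
    # (ecosystem, package name) -> CPE product prefix
    ("npm", "lodash"): "cpe:2.3:a:lodash:lodash",
    ("maven", "org.apache.struts:struts2-core"): "cpe:2.3:a:apache:struts",
}

def to_cpe(ecosystem: str, name: str):
    """Return the CPE product prefix for a component, or None if unmapped."""
    return CPE_MAP.get((ecosystem, name))

print(to_cpe("npm", "lodash"))  # cpe:2.3:a:lodash:lodash
```

Sharing one community-maintained table, rather than each organisation rebuilding it, is exactly the collaboration being proposed.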

We do not want it to be a new vulnerability store – we've got enough of those. We just want to make it easier for people to use, manage, and integrate with the data that's out there, as much as possible. There are a bunch of other ideas, but I'd really love to hear from your listeners as to what they might find interesting.

[0:27:01] Guy Podjarny: That's a great call to action for those listening – you've got some homework there. Go to ClearlyDefined.io. One of the challenges you're going to have with Clearly Secure versus ClearlyDefined might be the sensitivity of the data. For a licence, there's no sensitivity aspect to making a statement about a project; for a vulnerability, a security-impacting statement is sensitive.

For instance, what you wouldn't want is for somebody to come along and say, “Hey, I found this vulnerability over here in this project. Let's just add it to the list,” when it hasn't been properly disclosed. But all that said, it sounds awesome. At the very least the metadata and the curation – but maybe even the security properties, like what inputs come along – all of those can very much be crowd-sourced.

[0:27:48] Jeff McAffer: The other area that's super interesting is more like assessment information. A lot of the vulnerabilities that we see come in have been produced by security researchers. They're great. They're super detailed, but it's like, “Line 47 of file Foo.js, or whatever, has this construct and it's going to cause this problem.” From a consumer point of view, again, if you're like 10 levels up from that component, you've already lost them at the line number. What's interesting is to look and say, “What better assessment information can we have? How can I tell, as a user, if I am vulnerable? What are the characteristic usages that are vulnerable, and not just, ‘This is a DOS attack’?” If it's, “If you call this function with the third argument being this way,” then I can easily take that and, ideally, write or get some tools to check for it. Even if I have to do a manual inspection, that's much easier than trying to dissect the whole thing.
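As an illustration of the kind of “characteristic usage” check Jeff describes, here is a small sketch using Python's `ast` module. The function name `parse` and the rule (three or more positional arguments) are made-up stand-ins for what a real advisory might specify, not a real vulnerability.

```python
import ast

def find_risky_calls(source: str, func_name: str = "parse", min_args: int = 3):
    """Return line numbers of calls to func_name that pass >= min_args
    positional arguments -- a stand-in for an advisory's 'characteristic
    vulnerable usage' rule."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == func_name
                and len(node.args) >= min_args):
            hits.append(node.lineno)
    return hits

code = "parse(a)\nparse(a, b, c)\n"
print(find_risky_calls(code))  # [2] -- only the three-argument call matches
```

A rule expressed this way lets a consumer ask “am I actually vulnerable?” mechanically, instead of reading the researcher's line-by-line analysis of a dependency ten levels down.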

[0:28:47] Guy Podjarny: There's an interesting question there, around the crowd-sourcing bit of some of these things versus the technology bit. Should that information be gleaned by crowdsourcing the community, or should that information be gleaned through runtime observation of data or the likes? Maybe even then, there's a crowdsourcing element of contributing that data and making it available.

[0:29:06] Jeff McAffer: Sure. We have absolutely no desire to get a bunch of people to do things by hand. If we can tool it, back to my mantra from before: eliminate, automate, delegate. Delegate is the last one, and that's where humans get involved. If you can do this in an automated way, seek to improve the tools. That's maybe another thing that happens in Clearly Secure. I don't imagine it becoming a place where we develop security tools, but security data aggregation tools, or something like that, could be useful and interesting.

[0:29:38] Guy Podjarny: Jeff, this has been a great conversation. Before I let you off over here, I like to ask every guest that comes on the show one last question, which is: if you have one bit of advice, or a pet peeve, or something you want to tell a team that is looking to level up their security expertise or their security posture, what's the one thing you would advise that team to do?

[0:30:01] Jeff McAffer: There are lots of different angles on that. I'm going to take the angle of the group of people consuming open source, and it's really to engage. A lot of people still think, “It's open source. I'll just take it, I'll use it, and that's it.” But you really have to treat it like it's your code. Even if you're not going to write any, even if you don't know the language, you have to treat it like it's part of your system, and you do need to care about the security of it. You do need to engage with the producing teams, the project teams, and say, “How can we make this together? How can we make this a secure project so that we can all consume it in a secure way?” I think there's just not enough people doing that. We see that across the board, basically. It would be a lot more sustainable and a lot more secure if more people were more deeply engaging with the projects that they consume.

[0:30:51] Guy Podjarny: That's an excellent tip. That's great. Jeff, thanks a lot for coming on the show.

[0:30:54] Jeff McAffer: Sure. Thank you.

[0:30:56] Guy Podjarny: Thanks everybody for tuning in. Join us for the next one.

[OUTRO]

[0:31:01] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show, or get involved in this community, find us at thesecuredeveloper.com, or on Twitter, @thesecuredev. Visit heavybit.com to find additional episodes, full transcriptions, and other great podcasts. See you next time.

Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.

