
Season 7, Episode 123

Malicious Packages And Malicious Intent With Liran Tal

Guest: Liran Tal

Malicious attacks are a real threat, especially given the essential role of open source. Today's guest, Liran Tal, is the Director of Developer Advocacy at Snyk and a GitHub Star, and he is here to share a plethora of tips you can implement today to see a marked improvement in your general security posture and company safety.

Tune in to hear Liran's perspective on the state of malicious attacks today compared to previous years, how third-party dependencies can be problematic, and how a single attack can impact thousands of users, developers, and CI machines. He believes that open source is an essential tool today and that the solution lies in better security. Listeners will also learn how security sanitization differs for each ecosystem, and hear some advice for security-conscious companies that want to tighten up their security without restricting innovation. Join us to hear all this and more from today's expert voice from Snyk.


EPISODE 123

[INTRODUCTION]

[00:00:23] ANNOUNCER: Hi. You're listening to The Secure Developer. It's part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd. That's S-N-Y-K.I-O/T-S-D.

"LIRAN TAL: As we said before, like avoid blind updates, like a lot of developers just blindly update to like the latest versions. So at this point, like with the tooling that we have in the ecosystem, you shouldn't do that, like there are enough automated systems, bots, Snyk included, as well as others to basically go ahead and allow you to have an automated dependency upgrade procedure with a pull request, you can review it."

[INTERVIEW]

[00:01:18] SIMON MAPLE: Hello, and welcome to this episode of The Secure Developer. My name is Simon Maple, and joining me today is Liran Tal. Welcome, Liran.

[00:01:26] LIRAN TAL: Hey. How are you doing, Simon?

[00:01:28] SIMON MAPLE: I'm doing very well. Thank you. Liran, you've been on The Secure Developer a couple of times in the past now, right?

[00:01:31] LIRAN TAL: Yeah, I think we've talked about a bunch of interesting topics around open-source security. So it's always fun coming back.

[00:01:37] SIMON MAPLE: Oh, absolutely. Well, welcome back. In a second, we'll do some introductions. But just to introduce today's topic, we're going to be talking heavily about malicious packages and malicious intent from people who are going out to attack third-party libraries and ultimately attack the software supply chain. So we're going to be covering a number of different events that have occurred, why this is occurring, and what we can do about it going forward. Liran, why don't you give us a little bit of background about yourself, for those who didn't hear the previous sessions? We'll give a refresher to those who did. A little bit about your background, what you do at Snyk, and perhaps a little bit about your journey into the software space and where you are today.

[00:02:16] LIRAN TAL: Yeah, sure. Sounds good. I'm a developer advocate at Snyk. I basically have a lot of interactions with developers, I build code, I do a bit of security research, and I dive into a bunch of activities around open source for the JavaScript and Node.js ecosystem. Essentially, I'm really trying to do my best to secure the ecosystem and help developers build things in a secure way. So Snyk has been a really cool way and place to do that.

[00:02:43] SIMON MAPLE: Awesome. Well, welcome to the session, and why don't we jump straight in? Malicious attacks are not something new, really. Over time, there have been a number of more recent high-profile incidents, including event-stream and many others. Is this something that you see as an increased threat compared to how it was maybe 5 or 10 years ago, or is the media just stronger on it now than it ever was before?

[00:03:09] LIRAN TAL: It's a good question. Just chiming in on whether this has been increasing or not: I think it definitely has been. Even though, as everyone knows, it's been getting more attention on social media and all of this stuff, I still think that, very objectively, across the board, many companies, vendors, and players in open-source security are finding more and more cases and security incidents relating to malicious packages, to taking over maintainer accounts, and to compromising the supply chain of open-source packages and libraries in numerous ways. The topic itself isn't really new. There's an interesting article called Reflections on Trusting Trust by Ken Thompson from decades back. I think it was 1984. That's a good date for a couple of things. But yeah, Ken wrote this essay on just how easy it is to trust something, and how, the moment you give trust to a piece of technology, there are numerous ways for someone to exploit that trust.

Even though you may have secure coding practices and reviews of your own code, there's a compiler that compiles the code, and there's a registry that hosts the compiler, or the code. There's an entire supply chain, a chain of custody of code, and a way that it travels to you, the developer, at the end of it, that we're not thinking about day to day. This notion dates back decades, and we're seeing a lot of it hitting us now with the rise of open source. We're all adopting and using open source, and so attackers are now actively exploiting that as well.

[00:04:57] SIMON MAPLE: It's interesting, when you talk about the fact that there's almost this cascading effect of trust, whether we think about dependency graphs or software generally. I think Ken Thompson mentioned in that essay how, if there was an issue in something as foundational as the Linux kernel, things would be built on top of it and might carry the same or similar backdoors. It's really something that builds on itself. When we talk about trust, I guess there are two areas of trust. There's the trust of a project, and that is, I guess, the processes, and how trustworthy a project's practices, security practices, et cetera, can be. But there's also the trust of individuals. Talk us through a little bit about event-stream. I feel like this was a really interesting malicious attack, in the sense that the maintainer likely did a lot of the right things. Ultimately, the issue was more about a malicious actor that the maintainer trusted. Talk to us a little bit about this, because it was interesting how it was a third-party dependency, a new dependency, that caused the issue.

[00:06:10] LIRAN TAL: Yeah, that's a really good, interesting story. In my opinion, it's one of the most impactful, significant, surgical attacks that we've seen on npm, that the JavaScript ecosystem has been hit with. It's a story that dates back to 2018, and there are actually some interesting stories we can talk over from 2021 and '22 as well. But this one started off with event-stream, which is the name of a package that is mostly used indirectly in a project, as an indirect, transitive dependency. Me as a developer would probably not just go and install it straight off to use it. It had been around for almost a decade, like eight years. It's kind of feature complete, very mature. The maintainer, Dominic Tarr, had been maintaining a bunch of 400 or 500 other npm packages and open-source libraries in his spare time, and so on.

This actor, this open-source contributor, came into the project around those dates, around the end of, I think, August 2018, and was like, "I want to help. I want to work on issues. I want to collaborate on the project." Indeed, they did. They added code, they added features, and so on. Dominic, just like you said, treated this as regular open-source activity and collaboration, and kind of gave them the keys to the car, essentially. The moment that they had gained that trust, this attacker, coming from the outside and participating in the project, was given merge access to the code, and to releasing new packages and things like that. What they actually did was socially engineer their way into gaining trust from the maintainer. When they did, they were able to get code added. They created a new dependency that they fully controlled, and added that to the project. That, of course, contained the malicious code. You remember that one, I guess.

[00:08:12] SIMON MAPLE: Yeah, I do remember that one. I think this was an attack on a crypto wallet, and another very similar one happened soon after, like a year later. They're extremely hard to identify, I guess, from the maintainer's point of view.

[00:08:25] LIRAN TAL: It was really impactful too. That library was downloaded millions of times a week.

[00:08:31] SIMON MAPLE: Yeah. In hindsight, what could have been done to fix it? If you look at it from the maintainer's point of view, and if you look at it from a consumer's point of view, what could we have done to avoid that kind of situation, and to avoid this happening in the future as well?

[00:08:42] LIRAN TAL: I think there are several security controls that you can employ. Indeed, they very much vary by whether you are the maintainer of the project or the consumer, the end-user kind of persona. As developers, we still have a lot of tools available to us. For example, don't rush to get new versions of packages. I have some friends who just run npm install and update everything to latest on CI. They're like, "Yeah, if it fails, it fails in CI. I'll find all the compatibility issues and whatever on CI rather than locally." But they would have been a victim of this and other attacks on their CI, where secrets and API keys can be exposed. That is still not the best security hygiene. You could also install packages without allowing them to run and execute commands, which is a feature that npm and, by the way, a bunch of other package managers have as well. That is one malicious-package aspect of it.

If we go back to not rushing to new versions, you could wait for a bit. Actually, due to event-stream and other incidents around that time, we built that into Snyk's package version update mechanism. You get automatic updates from the Dependabots of the world, et cetera, to update your packages to the latest version. But Snyk specifically has a built-in buffer time so that it doesn't just update you to the latest bleeding edge, because we want to give that new version some time in the ecosystem. Maybe there's a functional bug that escaped and it's actually a breaking change for that version. Maybe it's a malicious attempt. Several things.
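
A small, concrete way to act on that "don't rush to the bleeding edge" advice is to stop npm from recording floating version ranges in the first place and let an update bot propose upgrades instead. This is only an illustrative sketch of one npm setting, not something prescribed in the episode:

```ini
# .npmrc
# Record exact versions ("1.4.0") instead of ranges ("^1.4.0") when adding
# dependencies, so a later `npm install` does not silently pull in a release
# that was published minutes ago and has had no time to settle.
save-exact=true
```

Exact pins on their own can leave you on vulnerable versions for too long, which is why pairing them with an automated, reviewable upgrade bot, as discussed later in the episode, matters.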

Most of the time, you really don't need to be on the bleeding edge anyway. Those are some developer-persona kinds of mitigations and controls that you could add, and these are just a couple of them. As a maintainer, it's a lot about trust. Who do you give trust to? It's an easy trap to fall into. We had that case, I think it was the University of Minnesota, with the hypocrite commits to Linux, where they were participating in the Linux kernel development community, the mailing lists, et cetera. Basically, this was all a scientific experiment to see if they could push through some bad commits that would add a backdoor to Linux, the largest operating system, running shuttles, and spacecraft, and whatever. That did go through.

Trust is a hard thing, and code is a very fluid thing that can easily slip through. There are other things you could assess from a maintainer's point of view, though. Keep a small dependency footprint, and know who you're bringing into the project: are they vetted, and have they had prior successful, positive contributions elsewhere? You can vet their profile a bit. Essentially, that is super important for projects that are key ecosystem projects that users download a lot. If something goes wrong, the impact radius, or the blast radius, is very, very high.
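
For consumers, some of that vetting can be done straight from the npm CLI. The commands below are a rough sketch; express is used purely as a stand-in for whatever package you are evaluating:

```sh
# Who publishes this package, and where is it developed?
npm view express maintainers
npm view express repository.url

# How did it get into my tree: is it a direct or a transitive dependency?
npm ls express

# Any known vulnerabilities in the current dependency tree?
npm audit
```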

[00:11:57] SIMON MAPLE: We've also seen a number of more recent attacks in a style that is, I think, understood as protest, where individuals are doing things intentionally to their own dependencies. It's not necessarily a malicious user coming in and injecting some code under the guise of it being good. This is an individual who is intentionally trying to break their own third-party library that people are using, with left-pad obviously being an example, and there are others as well. I think there was one recently, because of the recent war and things like that, targeting certain geos. Is this another one where, as you mentioned just then, we should be looking at the individual's profile and checking out what else they are doing? Because ultimately, if one malicious user damages or compromises one artefact, you're not necessarily going to trust anything else they produce. This is, I guess, trusting the individual, the maintainer.

[00:12:52] LIRAN TAL: Yeah. This is definitely a whole different spin on it. This is where, essentially, the maintainer was the bad actor, turned rogue one day. Maybe we should pause and take a step back, because we're really talking here about ways of weaponizing open-source software. If you think about it that way, then we can start listing all of those ways: typosquatting attacks have been with us for a while; compromising maintainer accounts, which, by the way, made a comeback with expired email account domains, and that's been some really interesting research recently; malicious modules, which we just talked about; social engineering; and dependency confusion attacks, which have been an interesting way of compromising the open-source software supply chain. But just recently, indeed, there have been these protestware sorts of attacks, where maintainers want to identify with a cause, so they take a kind of hacktivism or activism stance.

Indeed, one real example was with a package that a lot of people were depending on, a couple of packages from the same maintainer, back in January 2022. Not so long ago. They had millions of downloads for their packages. This case actually wasn't even about the war issue. It was about how enterprises and large organisations use, or exploit, a lot of open-source software but don't contribute back. That had been a repeating theme for a couple of years with this specific maintainer. So, they had released a package called colors, and then they released a version of it which had a denial-of-service attack, basically an infinite loop when the code is running that prints a bunch of stuff to the screen, which is not something you want running in your production, CI, or end-user tooling.

The colors package essentially just provides colour utilities for printing to the terminal. Its last stable release before this was 1.4.0, from about two years earlier, which had around 14 million downloads over seven days, over a week. Then suddenly, within a day, they released the malicious version, the sabotaged version 1.4.1, a patch version. It immediately gained about 100,000 downloads within that week, so about 100k users, developers, and CI machines were impacted by this denial-of-service attack. Because of the way colors, like other transitive libraries, gets used within software packages and applications, that translates into tens of thousands of packages depending on colors. It actually trickled into some really major projects, like AWS's CDK, which is their Cloud Development Kit, and Microsoft's Playwright. A bunch of projects depended on this and were trusting it, and it ended up really hurting them. This was one case of how maintainers sabotage code for specific reasons. It really impacts all of us.
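
One defence against exactly this kind of sabotaged patch release is to force a known-good version of the transitive dependency for the whole tree. Below is a minimal package.json sketch, assuming npm 8.3 or later, which supports the overrides field (some-cli-tool is a hypothetical direct dependency; Yarn users would use resolutions instead):

```json
{
  "dependencies": {
    "some-cli-tool": "2.0.0"
  },
  "overrides": {
    "colors": "1.4.0"
  }
}
```

With this in place, even if some-cli-tool declares a range like ^1.4.0 for colors, installs resolve to 1.4.0 rather than the sabotaged 1.4.1.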

[00:16:11] SIMON MAPLE: Yeah. I think it's also kind of scary when we think about our reliance, as an industry, on open source. It obviously has an incredibly positive impact on us and allows us to achieve much, much more. But given the reliance we have on it, when things go wrong, it can affect huge swathes of our industry. Is there such a thing as being too reliant on open source, whether because of intentional, malicious issues, or through other things, like bugs or vulnerabilities gaining access to large numbers of projects? Have we gone too far down the rabbit hole with open source, or are there ways in which we can be more responsible with it?

[00:16:55] LIRAN TAL: I honestly don't think we can look back at this point; that train has shipped. We have adopted open source to the extent where it's essential tooling. I can relate as a JavaScript developer, where we use open-source npm packages. Some Java developers I know enjoy opening the IDE, doing File, New, and starting to write abstractions and interfaces. We JavaScript developers are much simpler beings. We start off with skeleton projects, and we start our projects with an application middleware framework that, basically by default, brings in tens of dependencies. This is how we're used to building projects, and applications, and software: by relying on a bunch of already existing projects.

[00:17:48] SIMON MAPLE: Let's talk about ecosystems, actually. That's an interesting one. A lot of the examples that we've given so far are JavaScript examples. Are these kinds of issues heavily biased towards particular types of ecosystems? And, I guess, what is the role that registries have to play? Because while it's easy and great to say, "Well, people need to be more responsible about what they bring in," is there a responsibility on the various registries to perform some kind of security sanitization, or to look at what is being published? And do you see this differ from ecosystem to ecosystem?

[00:18:24] LIRAN TAL: Yes, I think we're seeing specific ecosystems getting targeted for malicious packages and other concerns. As some examples, I would put Java and .NET, with the Maven repositories and NuGet, in one bucket. I think those are more slow-paced, more conservative, with perhaps more responsible maintainers or developers who don't just install the bleeding edge. It's more of, someone heard about a library from a teammate; they've used it for 10 years; they know that this Maven project is vetted, and they use it. I think that's one of the buckets. The other one, where I think we're seeing much more of these security incidents, specifically around open-source security issues, is probably Python with PyPI, JavaScript with npm, and Ruby with RubyGems. They have been much more of a victim of open-source supply chain security concerns. Somewhat in the registries, but I think more in the actual ecosystem itself: the registry being polluted with more malicious packages, and the libraries themselves being compromised and attacked.

Maybe some of that is due to how flexible it is to just add a package to npm. It's very easy to submit packages, and there's potentially no scrutiny process. JavaScript is also very ubiquitous. It's on the front end, it's on the back end, it's running everywhere. There are a bunch of places it plays: you can even write GitHub Actions, which is like a CI authoring thing, with JavaScript. So it's very apparent that if you're able to compromise an npm package, its reach will probably be very, very high. For some of these reasons, research in those areas is also a lot easier to pull off. But if you look at the amount of progress those ecosystems have made, due to being the victim of a lot of attacks in this area, there's been a lot of progress too. npm has introduced and required two-factor authentication in a progressive manner, starting from the 100 most significant maintainers, then 500 of them, trying to help them out. They have even actively shipped out hardware security keys to maintainers. There's a bunch of work being done there to help secure the registries themselves and clamp down on supply chain security concerns. Like everything in the hacking scene, it's a cat-and-mouse game: more research, more security incidents, and out of that, new features and new security controls get prioritised and created. I think those are the two kinds of buckets in terms of different language ecosystems.
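
On the maintainer side, the 2FA rollout described above can also be opted into directly from the npm CLI. This is a sketch of the invocation on a recent npm client; check `npm help profile` for the exact syntax on your version:

```sh
# Require a second factor both for logging in and for publishing packages
npm profile enable-2fa auth-and-writes
```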

[00:21:17] SIMON MAPLE: Yeah, it's very interesting, actually, the ease that developers have in creating new libraries and new packages. Like you say, you can easily throw up anything that you want onto npm. That ease is also equally a catalyst for innovation, because the more barriers you put up, the harder it is for developers to want to actually put things out in the open. I feel like it's sometimes similar internally within enterprises. You do get greater innovation when you allow developers to pretty much use what they want and don't restrict what they can use. However, that isn't necessarily always the best thing from a security point of view. On the other hand, if you go to the other extreme, you'll see some organisations that pretty much say, here is the repository of open-source libraries that you're allowed to use; this is an internal repository of listed libraries and versions, and you can't go outside this box. There are obviously poor outcomes from that, in terms of the flexibility and the innovation that your development teams will have. What would you offer as good advice to organisations that are very security conscious, but don't necessarily want to restrict the innovation that comes out of their development teams for building new apps?

[00:22:40] LIRAN TAL: In terms of open-source supply chain security, like adopting best practices?

[00:22:44] SIMON MAPLE: I guess in terms of the usage, the consumption of open source. Not restricting developers in an enterprise organisation, for example by saying you can only use the open-source libraries we've listed in this internal repo, but still making security a core concern for them and making sure that development teams use open-source software responsibly. What would you say are maybe some of the core ways of achieving that within an organisation?

[00:23:13] LIRAN TAL: Yeah. I think, without putting specific blocks in place, like "only use these npm packages that we've listed," you definitely want to raise awareness of security concerns to begin with. This is like half of the solution, 50% of it. There's this meme I show when I give npm security talks, where a person is about to run the npm install command and they're sweating: should I do it, should I not do it? I totally get it, myself included. When I do npm install, I'm frightened about what's going to happen, what malicious package is going to trickle in. That concern is something you have to build in as a mindset first of all: we are consuming strangers' code and putting it into our projects. That initial awareness of what could potentially go wrong is something that you have to set as a culture, right? What are we bringing in, and how do we take more responsibility for it? I think the moment you do that, there are important and relatively easy questions to answer to get it right. For example, say a new team is trying to find a message-broker kind of npm package that they need, and they're looking at alternatives. They might go and say, "Hey, we're aware of the security concerns here."

Let's look at the maintainers. Who are the maintainers? Is there one, or are there five? Too many is kind of bad, because then the blast radius grows: an attacker only has to compromise one out of the 20 people who have access. One is also a hard thing. You have to find a balance. Does it have tests? Is it documented? Who are the contributors? What were the recent security vulnerabilities it had, and how long did it take them to respond and fix them? There's a bunch of stuff you can find out just by being more aware of the security of the packages that you use. Essentially, it's how healthy the package is, and how well maintained it is. Those are the things that I think are important to consider, as well as whether there are any existing security controls within the project itself. If you're specifically interested in whether this project is going to be a security liability, then maybe you should check if the project has a security.txt or a security readme kind of thing, where they say, "Hey, this is our responsible disclosure policy, this is the time it will take us, and this is how you submit a report." Are they aware of those kinds of concerns? Do they have DevSecOps kinds of pipelines, where they add Snyk or something else to check for new security issues that come up from their own code, or from third-party vulnerabilities? There are a bunch of things you can definitely use as pre-checks to make sure that maintainers have really evaluated and think highly of security for their projects.
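
For the "do they have a DevSecOps kind of pipeline" check, and for adding one to your own projects, a minimal GitHub Actions sketch might look like the following. The workflow name, file path, and the SNYK_TOKEN secret name are assumptions for illustration, not taken from the episode:

```yaml
# .github/workflows/dependency-checks.yml
name: dependency-checks
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      # Install strictly from the lock file and skip install scripts
      - run: npm ci --ignore-scripts
      # Fail the build if known vulnerable dependencies are found
      - run: npx snyk test
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```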

[00:26:13] SIMON MAPLE: I highly agree with everything you just said. I think one of the core pieces there, when you talk about pre-checks, is that it's so important to automate as much of that as possible, because you're going to come up against so many different levels of care and responsibility, not just across development teams, but across individual developers who will do things very differently. Having that automation, that consistency across your organisation, is one of the hardest things as well. You also talked a little bit there about controlling the blast radius. I think this is very interesting, because we always have to assume there is a chance of something malicious entering our production environment, and it's really important to be able to defend against that. Do you have any recommendations or thoughts on some of the best ways of defending in depth and controlling that blast radius, if something happens?

[00:27:07] LIRAN TAL: I mean, some of that might be very specific to certain ecosystems, just because of the way they work. But I would say, generally, as we said before, avoid blind updates; a lot of developers just blindly update to the latest versions. At this point, with the tooling that we have in the ecosystem, you shouldn't do that. There are enough automated systems and bots, Snyk included, as well as others, to go ahead and give you an automated dependency upgrade procedure with a pull request that you can review. There's no need to rush and blindly upgrade to the latest – that's not something I would recommend to anyone, unless there are very specific use cases.
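
As one example of the "reviewable pull request instead of blind updates" workflow, GitHub's Dependabot can be configured with a short file in the repository. This is a minimal sketch of its standard configuration format; the weekly cadence is just one illustration of not chasing the bleeding edge:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"   # batch upgrades into weekly, reviewable PRs
```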

The other thing, related to getting dependency updates, is: do not allow just anyone to update your dependency manifests. There's been security research on this called lock file injection, or lock file tampering, which means that, essentially, I can update the lock file in a way that will actually download my own controlled malicious package, even though the dependency manifest itself says a different name. I really just changed the origin, the source from which this tarball or whatever gets downloaded. You would probably miss that, because lock files are generally computer-generated, heavy JSON or whatever kind of format; you don't really review them. Because we have those dependency update bots, you probably shouldn't also accept humans manually updating the lock file for your project, whether internally or externally.
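
A lock file tampered with in this way keeps the package name and version intact but points the "resolved" field at an attacker-controlled host. One way to guard against that in CI is a lock file linter such as lockfile-lint; the flags shown below are illustrative, so check the tool's documentation for the current options:

```sh
# Reject lock file entries that resolve to anything other than the official
# npm registry, or that use plain HTTP.
npx lockfile-lint --path package-lock.json --type npm \
  --validate-https --allowed-hosts npm
```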

Related to ways of limiting the blast radius, some package managers, like npm and Ruby's RubyGems, actually allow any package down the tree of transitive dependencies to execute whatever commands it wants when it's installed. One easy way of limiting that is a flag called ignore-scripts; you just have to set it to true when you install packages. It doesn't work 100% for all use cases. The exclusion is if you install packages locally from a tar file, or from a git repository, but most cases are not that; people usually just install them from the registry. If you do that, then it disables any way for those malicious packages to take control of your user when you install them and run whatever commands they want. This is another really good way to ward off a ton of malicious packages.
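
The ignore-scripts setting described here can be applied per install or, more usefully, persisted for a whole project or CI environment. A minimal project-level sketch:

```ini
# .npmrc
# Never run packages' preinstall/install/postinstall lifecycle scripts.
# This blocks the common "run arbitrary commands at install time" vector,
# at the cost of breaking the few packages that genuinely need a build step.
ignore-scripts=true
```

The same effect for a one-off install is `npm install --ignore-scripts`.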

There are other specific practices that would really help you too, like avoiding dependency confusion. Dependency confusion is its own entire issue of managing dependencies that relates specifically to enterprises and the way they manage internal proxies and registries of private packages, and the way they host them. It's actually not as simple an attack as typosquatting. It's simple to execute, but the way it happens is due to a bunch of human-error-like configuration issues, the way that proxies are configured, and the default behaviour of package managers. That one actually cuts across .NET and a bunch of other package managers that are potentially vulnerable today. This has been a huge problem too.
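
For dependency confusion specifically, one commonly recommended mitigation is to publish internal packages under a scope and pin that scope to the internal registry, so the public registry is never consulted for those names. The scope and registry URL below are hypothetical placeholders:

```ini
# .npmrc
# All @acme/* packages must come from the internal registry, never from
# the public registry where a same-named package could be planted.
@acme:registry=https://npm.internal.acme.example/
```

Claiming the same scope on the public registry, so nobody else can publish under it, is a common belt-and-braces addition to this.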

[00:30:31] SIMON MAPLE: Okay. Thank you very much, Liran. That's absolutely great advice. Hopefully, there are a lot of little things that people can do right across the board to really improve their general security posture and make their organisations a little bit safer from malicious software. Thank you very much, Liran. It's been great to hear your insights, and thank you for taking the time to chat with us today on The Secure Developer.

[00:30:51] LIRAN TAL: It's been a pleasure, Simon. Thanks for having me.

[00:30:53] SIMON MAPLE: Thank you all for listening to The Secure Developer.

[END OF INTERVIEW]

[00:30:59] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye for now.

[END]
