
Open source security with O’Reilly author Guy Podjarny

Article by:
Hayley Denbraver



August 30, 2019


Last week, Snyk Co-founder Guy Podjarny sat down for a live chat to discuss his O’Reilly book Securing Open Source Libraries. This post summarizes a few of the interesting takeaways from the webinar; you can also check out the recording here if you haven’t had a chance to listen in yet. Snyk is also happy to make a copy of the book available to you for free.

Why open source security?

The discussion began with a very important question: why do we want to think about open source security? Guy explained that he believes the risk associated with open source libraries is underestimated in the market today. The vast majority of the code that a development team deploys is not code that they wrote themselves, but rather open source libraries. That is great news because it means teams are not reinventing the wheel, but it also means that you inherit the risk present in those open source components.

In part, this risk is a reflection of the volume of open source libraries compared to your original application code. The risk also derives from the fact that open source components are particularly compelling targets for attack. Attackers often go for low-hanging fruit first and open source provides a high return on investment because one vulnerability can be exploited against many victims.

How is the industry currently approaching this problem?

Part of the industry isn’t currently addressing this problem at all. People might hear about a particularly malicious exploit and address it on a one-off basis, but they don’t have an open source bill of materials: they aren’t tracking which components they’re using where, nor are they monitoring those components against a vulnerability database.
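As a concrete illustration of what that tracking could look like, here is a minimal sketch that builds a simple bill of materials from a pinned requirements.txt and compares it against a local advisory list. The advisories.json file and its fields are hypothetical and purely illustrative; in practice you would query a real vulnerability database or a tool such as Snyk.

```python
# Minimal sketch: build a bill of materials from requirements.txt and flag
# anything listed in a local advisory file. The advisories.json format is
# hypothetical, for illustration only.
import json
from pathlib import Path

def read_bill_of_materials(requirements_path: str) -> dict[str, str]:
    """Return {package_name: pinned_version} from a requirements.txt."""
    bom = {}
    for line in Path(requirements_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments and unpinned entries
        name, version = line.split("==", 1)
        bom[name.strip().lower()] = version.strip()
    return bom

def find_known_vulnerabilities(bom: dict[str, str], advisories_path: str) -> list[str]:
    """Compare the bill of materials against a (hypothetical) local advisory list."""
    advisories = json.loads(Path(advisories_path).read_text())
    findings = []
    # Each advisory entry is assumed to look like:
    # {"package": "foo", "affected_version": "1.2.0", "id": "CVE-..."}
    for advisory in advisories:
        name = advisory["package"].lower()
        if bom.get(name) == advisory["affected_version"]:
            findings.append(f'{name}=={bom[name]} is affected by {advisory["id"]}')
    return findings

if __name__ == "__main__":
    bom = read_bill_of_materials("requirements.txt")
    for finding in find_known_vulnerabilities(bom, "advisories.json"):
        print(finding)
```

Even a crude inventory like this answers the two questions above: which components are we using where, and are any of them known to be vulnerable right now?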

Others take security into consideration when making decisions about which open source libraries to use. This often happens before a library is brought in, but the library is not necessarily monitored as time progresses. New vulnerabilities may be found or an upgraded version may introduce new vulnerabilities, but there is no process in place to monitor these changing concerns.

Finally, some companies are investing in finding and fixing open source security vulnerabilities on an ongoing basis. Much of the content of the book explains how best to put this into practice. So let’s start by talking about what happens when a new vulnerability is disclosed in an open source library.

The race against time

When we consider a project, we should assume that it has bugs. We do not write perfect code, nor do we write or use code under optimal conditions. This is true for security bugs as well. It is therefore healthy to assume that any open source project you are bringing in has a security vulnerability in it. The community may just not have found it yet.

When a vulnerability is found and disclosed, a race begins. The community is racing to release a fix for the vulnerability and to get people to apply it. Malicious actors are racing to exploit the vulnerability wherever they can. The vulnerability is known, so malicious actors don’t even have to find it; they just need to mobilize. As soon as the vulnerability gets disclosed, the security risk associated with it rises substantially. Attackers rely on the lack of security hygiene. Teams want to minimize the time between when a vulnerability is disclosed and when it is remediated. You're never going to close that gap entirely, but there's still a difference between an hour and a day and a week and a month and a year.

So how can teams close this gap? And what roles do different people play?

DevSecOps in real life

DevSecOps is an aspirational term that we hear a lot in the industry. It is really about working together across disciplines towards a common goal: a functional and secure product. The steps individual contributors take towards this goal will look different depending on whether they work in security or development.

The security person's job is to keep their organization secure by consciously and purposely managing the potential risks. So a security person should have an understanding of their current risk posture and be able to prioritize which risks to tackle first. However, if a security professional is expected to be the person who directly performs remediation, that is not going to scale, and it could cause problems because that person does not know the codebase nearly as well as the development team does.

Security professionals are to DevSecOps as Site Reliability Engineers (SREs) are to DevOps. The security professional has a high-level view of the overall health of the system and sets policies. In addition to governance responsibilities, the security members of the team educate and empower the developers with whom they work to take ownership of security in their day-to-day work. The work they do, anything from governance to automation, enables developers to make the right security call and act on it.

If the workflow is set up well, most of that work can happen without security intervening. If a security professional has done good work setting up policies and giving the developers good tools and training, the daily job can continue with little to no interference from security.

Fixing is the goal

And what is the goal that a DevSecOps team is working towards? Fixing vulnerabilities is the goal.

Knowing that you have vulnerabilities and knowing where they are is useful information, but we want to work towards healthier systems overall, and that means remediation. If developers don’t have the tools they need, or the proper support from security or management, then sometimes we stop at finding the vulnerability instead of fixing it.

So how do we maintain our momentum and not only find, but also fix an issue? In many organizations, fixing doesn’t happen until triage is complete, and triage happens before development is engaged. Triaging means that we review the vulnerability’s severity and risk, but not necessarily how easy it is to fix. Once triaging is complete, the development team finds that some of these issues can be fixed trivially, while others are systemic and hard to fix.

For a number of vulnerabilities, fixing is easier than triaging. Triaging is necessary when fixing isn’t trivial, but if a fix is easy, cut to the chase and fix it. If you have invested in making remediation easy, then you can fix a lot of these vulnerabilities without ever triaging them.

Another obstacle to fixing vulnerabilities is scale. Teams generally use a lot of components and many of them are vulnerable, so it can be hard to address vulnerabilities at scale. A good governance plan can help in this situation, because it makes it easy to determine when the team needs to drop everything and address an emerging issue and when vulnerability remediation can be incorporated into the team’s typical schedule.
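As a rough sketch of what such a governance plan might encode, the snippet below maps a vulnerability's severity to a response window, so the team knows whether to drop everything or fold the fix into normal planning. The severity levels and thresholds are hypothetical examples, not a policy taken from the book.

```python
# Hypothetical governance policy: severity decides whether a fix interrupts
# current work or is scheduled into the normal backlog.
from datetime import timedelta

RESPONSE_WINDOWS = {
    "critical": timedelta(days=1),   # drop everything
    "high": timedelta(days=7),       # fix in the current sprint
    "medium": timedelta(days=30),    # fold into the normal backlog
    "low": timedelta(days=90),
}

def triage_decision(severity: str) -> str:
    window = RESPONSE_WINDOWS.get(severity.lower(), timedelta(days=90))
    if window <= timedelta(days=1):
        return "Interrupt current work and remediate immediately."
    return f"Schedule remediation within {window.days} days."

print(triage_decision("critical"))  # Interrupt current work and remediate immediately.
print(triage_decision("medium"))    # Schedule remediation within 30 days.
```

The specific numbers matter less than having agreed on them in advance, so that each new finding doesn't trigger a fresh debate about urgency.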

Guy summarizes the goal: You want to find and fix problems that are already in your project, and then prevent and respond to future issues. Find. Fix. Prevent. Respond. It boils down to four things, at scale.

Stop the bleeding

So let’s get started. But what do we approach first?

Triaging is a word that is often associated with emergency medicine. Imagine a patient being seen because they have been in a bad car accident. The patient may have a number of things wrong with them: maybe they have had a cold for a week, some seasonal allergies, or even a more serious chronic illness. All of these ailments are worth addressing with a doctor, but none of them matter in the first moments after the patient is brought in after a crash. What matters is to stop the bleeding. Doctors do what they can to keep such a patient from getting any worse, and once the patient is stabilized, other issues can be addressed.

When a team is getting started at addressing open source security, it can be very overwhelming. Your project may have a number of issues from the start. In this case, it is good to remember that you want to stop the bleeding first. You can address current problems after you prevent things from getting any worse. Work on the delta. If you know that your project has seven vulnerabilities, you open a PR with new work and then it comes back with eight vulnerabilities, you only need to prevent or fix that additional vulnerability in order to stabilize your project. Fix the security issues that show up in the delta between old and new code. Once this workflow is in place, you can set about addressing legacy security concerns.
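One way to put this delta-based approach into practice in CI, sketched here with hypothetical report file names and a made-up report format (a JSON list of issue IDs), is to compare the issues already known on the main branch against the issues found on the PR branch and fail the build only when new ones appear:

```python
# Hypothetical CI check: fail the build only when a PR introduces vulnerabilities
# that are not already in the accepted baseline from the main branch.
import json
import sys
from pathlib import Path

def load_ids(report_path: str) -> set[str]:
    """Read a scanner report (assumed format: a JSON list of issue IDs)."""
    return set(json.loads(Path(report_path).read_text()))

def main() -> int:
    baseline = load_ids("baseline-report.json")  # issues already known on main
    current = load_ids("pr-report.json")         # issues found on the PR branch
    new_issues = current - baseline
    if new_issues:
        print("New vulnerabilities introduced by this change:")
        for issue_id in sorted(new_issues):
            print(f"  - {issue_id}")
        return 1  # fail the build: the delta must stay clean
    print("No new vulnerabilities; existing issues are tracked separately.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A gate like this keeps the project from getting worse without blocking every merge on the legacy backlog, which can then be worked down on its own schedule.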

Stop the bleeding, but don’t stop at the bleeding. Prevent and respond. Find and fix. Address ongoing open source security concerns and work to keep new problems out. Use open source confidently, responsibly, and securely.

Watch the full interview here.

Get your free copy of Securing Open Source Libraries


Want to see Snyk in action?

Snyk interviewed 20+ security leaders who have successfully and unsuccessfully built security champions programs. Check out this playbook to learn how to run an effective developer-focused security champions program.