February 26, 2019
Welcome to Snyk's annual State of Open Source Security report 2019. This report is split into several posts:
Open source maintainers want to be secure, but 70% lack skills
Or download our lovely handcrafted PDF report, which contains all of this information and more in one place.
Most developers and maintainers will likely agree that security should play an important role when building products and writing code. However, there are no textbook rules for maintainers to follow when building open source projects, and as such their security standards can vary significantly.
Maintainers spend their time and effort on many different aspects of a project, often functional ones, which in turn can make security less of a priority in their process.
There’s a positive trend towards security engagement and awareness since the time of our previous report, released in 2017.
Open source maintainers stated their security knowledge is improving but not high enough, averaging 6.6/10
This year, the majority of users ranked their security know-how as medium, with an average of 6.6 out of 10. A small portion of them (7%) ranked themselves as low, whereas the medium know-how ranking, representing the majority of users, has actually grown to 63% from 56% last year.
The most movement is seen in the low and high rankings. Last year, security know-how was ranked as high by only 17%, while this year it has increased to almost 30%. Correspondingly, low-ranked security know-how has shrunk from 26% last year to only 7% this year.
70% of open source maintainers stated they don't have strong security knowledge
A security audit can take place as part of a code review, where peers ensure that secure coding best practices are followed, or by running different kinds of security audits, such as static or dynamic application security testing. Whether manual or automated, audits are a vital part of detecting and reducing vulnerabilities in your application, and should be executed as regularly and as early in the development phase as possible in order to reduce the risk of exposure and data breaches at a later stage.
One in four open source maintainers do not audit their code bases
Last year, 44% of respondents stated they had never run a security audit; this year the number is considerably lower, with 26% of users stating they do not audit their source code. Compared to last year's report, we are seeing positive trends toward repeated audits across all audit cycles, with on average 10% more users auditing their source code on quarterly and yearly cycles.
Security professionals often cite the shift-left mantra in support of handling security concerns and potential problems earlier in the application lifecycle. This approach can uncover many valuable insights for developers through automation and help security keep up with the fast pace of modern, continuous development.
Shifting left, especially in security, is key and at times even critical, to reducing the cost of security incidents that are only found in production. One way to address security earlier in the process and to increase the chances of developers adopting those practices is to select tools that are developer friendly and built to integrate with their existing workflows.
Maintainers are more likely to be alerted to a security concern by someone else than to discover it themselves. An industry-accepted best practice is a responsible disclosure policy, which details how security researchers and individuals should safely report security vulnerabilities to project maintainers.
From the survey data, we can conclude that almost half (48%) of respondents find out about a security vulnerability in their code from a public channel, such as someone else opening a public issue, or by being contacted over email.
72% of users said they find out about vulnerabilities in their code by reviewing their own code personally; however, 62% of users stated they have only medium-level security know-how, and only 30% stated their security expertise is high.
Furthermore, while the majority of users (72%) say they review their own code to find vulnerabilities, 48% of users still learn about vulnerabilities in their code only when someone else opens a public issue, demonstrating how hard it is to rely on just one maintainer reviewing code even if that maintainer is perceived to have good security knowledge.
One of the research questions we wanted to answer was how long it takes from the moment a vulnerability enters the code base until it is discovered and disclosed. To answer this, we set out to analyze several top libraries in the npm ecosystem and the vulnerabilities that were discovered in them during 2018.
As this is time-consuming and tricky to automate accurately, we looked at the top six npm libraries and analysed their code bases to compare the dates of the commits that introduced and fixed each vulnerability. Of course, these calculations are slightly biased because of the small sample size, but the range and order of the numbers are interesting all the same!
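Once the introducing and fixing commits are identified, the exposure window reduces to simple date arithmetic; a minimal sketch of that calculation (the dates below are illustrative, not taken from the actual analysis):

```python
from datetime import date

def exposure_days(introduced: date, fixed: date) -> int:
    """Days between the commit that introduced a vulnerability
    and the commit that fixed it."""
    return (fixed - introduced).days

# Illustrative commit dates only -- not real data from the analysis
print(exposure_days(date(2015, 3, 1), date(2017, 8, 15)))  # prints 898
```

In practice the two commit dates would come from inspecting the repository history (e.g. `git log` on the affected files).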
Of these libraries, we saw that the quickest time-to-fix from introduction was almost one year, or 289 days to be precise. The median time was almost 2.5 years, and the worst case we saw was 5.9 years.
A recent report released by the US government deemed the infamous Equifax breach as completely preventable, and demonstrated how important it is to shift security to the left by integrating it into the development workflow.
With a DevSecOps mindset and good practices employed, a development team could have prevented the Struts vulnerability from having such an impact if:
developers had found the issue by adopting open source dependency scanning tools that integrate with their workflow through IDE plugins or code linters.
every new build run by a CI server had automatically tested application dependencies via a CI server plugin or a CLI invocation as a task, immediately flagging the new vulnerability, breaking the CI job and forcing a remediation action before continuing.
a monitoring solution had been in place that notified developers of the new vulnerability in their dependencies.
Further monitoring and runtime insights into how the application behaves and which vulnerable functions it invokes could also have alerted the team to vulnerabilities in the Struts library.
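The CI step described above can be sketched as a pipeline fragment. The job name and YAML layout here are illustrative and vary by CI provider; `snyk test` is the real Snyk CLI command, and it exits non-zero when vulnerable dependencies are found, which fails the job:

```yaml
# Illustrative CI job -- adapt to your CI provider's syntax
test_dependencies:
  script:
    - npm ci                                # install locked dependencies
    - snyk test --severity-threshold=high   # non-zero exit on vulnerable deps breaks the build
```

Because the scan runs on every build, a newly disclosed vulnerability in an existing dependency is flagged the next time the pipeline runs, forcing remediation before the change ships.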
A crucial part of responsible security disclosure is the speed of the fix and its rollout. It's important to address the vulnerability as quickly as possible, thereby reducing the time it exists in the code, and to provide sufficient time for users to upgrade to a fixed version, preferably before the issue is common knowledge.
As open source communities revolve mostly around the volunteer work of developers (a BIG thank you to all the wonderful people who contribute to open source software – your work is very much appreciated and too rarely acknowledged publicly!), it is interesting to gauge how fast maintainers of open source software can react to a security vulnerability and provide a fix.
An overwhelming majority of users, totaling 84%, state they are likely to respond with a fix in less than a week. 56% are likely to address it within a day, while 22% state they can address a security issue within a few hours after the vulnerability has been reported – not all heroes wear capes!
A significant benefit of having a responsible disclosure policy is to keep users out of harm's way. When a vulnerability is reported and triaged in a confidential manner with the project maintainer, it allows the maintainer to prepare a fix before the information is disclosed to the general public. If maintainers can act quickly and release a fix, they provide a window of time during which their users can upgrade to the fixed version. This time window significantly decreases the number of users who consume the vulnerable versions.
We believe that having a responsible disclosure policy in place also communicates the maintainer's strong commitment to security. As good practice, we recommend using a badge on the project's homepage and including a SECURITY.md policy file in the project's repository.
In the last report we found that maintainers who have a public-facing disclosure policy in place are far more likely to receive disclosures from users in confidence, than those who do not.
About 21% of maintainers with no public disclosure policy have been notified privately about a vulnerability, as compared to 73% of maintainers with a disclosure policy in place.
Websites are susceptible to web security vulnerabilities and would benefit from clear guidelines about web security policies. An emerging proposal to aid with this is security.txt, a policy file served from the well-known URI path defined in RFC 5785, which has already seen early adoption. The purpose of such a policy file is to effectively communicate to security researchers the relevant contacts, preferred languages, exact policy, and ways of communication, including public keys, so they can disclose security vulnerabilities securely and efficiently.
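A minimal security.txt might look as follows; all addresses and URLs below are placeholders, and the file is typically served from the site's /.well-known/ path:

```
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Policy: https://example.com/security-policy
```

Each field answers one of the questions a researcher has at disclosure time: whom to contact, how to encrypt the report, which languages are understood, and where the full policy lives.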