Security in context: When is a CVE not a CVE?

Written by:
Matt Jarvis
Asaf Biton

December 17, 2021


At Snyk we have some general points of principle that we use to help guide our security thinking and decision making.

Firstly, it is always important to understand whom we are protecting against, as that has implications for how we need to act. For example, if our artefact is a web server, then we need to protect it against untrusted users, whereas if our artefact is encryption software, we clearly need to protect it even from users with physical access to the system. In each case, we clearly distinguish where the risk boundary lies and where we need to focus our attention.

Secondly, configuration is part of your codebase. If a malicious actor has access to your configuration, then it’s fundamentally the same as if that malicious actor has access to your codebase.

In light of the tsunami of vulnerable systems caused by the recent Log4j 2.x vulnerability (Log4Shell), we as a community might be forgiven for searching ever more closely for other potential vulnerabilities. However, perhaps we ought also to take this opportunity to pause and reflect before rushing to judgment about every potential security issue we find in our code. Calmer heads might also suggest that not everything that can cause a security issue should be classified as equal.

As an example, we could look at the recent CVE-2021-4104 assignment for Log4j 1.x through this lens. Exploiting it would require direct access to the configuration files in order to manipulate settings. Similar threads are now emerging around the Logback project, as well as examples in the Node community.
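To illustrate why configuration access is the crux here, the sketch below shows (programmatically, rather than via log4j.properties or log4j.xml, where these settings would normally live) how Log4j 1.x's JMSAppender gets the names it looks up over JNDI. The class name, broker provider, and URL are hypothetical placeholders; the point is that the lookup only ever targets values supplied by the logging configuration, so abusing it requires the ability to change that configuration.

```java
import org.apache.log4j.Logger;
import org.apache.log4j.net.JMSAppender;

public class Log4j1JmsSketch {
    public static void main(String[] args) {
        // These settings would normally come from log4j.properties or
        // log4j.xml; they are set in code here purely for illustration.
        JMSAppender appender = new JMSAppender();
        // Example JMS provider and broker address -- placeholders only.
        appender.setInitialContextFactoryName("org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        appender.setProviderURL("tcp://broker.internal.example:61616");
        appender.setTopicConnectionFactoryBindingName("ConnectionFactory");
        appender.setTopicBindingName("logTopic");

        // activateOptions() performs the JNDI lookups, but only against the
        // names supplied above. An attacker would need to rewrite this
        // configuration to point it at a hostile endpoint.
        appender.activateOptions();
        Logger.getRootLogger().addAppender(appender);
    }
}
```

In other words, the appender does exactly what its (trusted) configuration tells it to do, which is why the question of who can write that configuration matters more than the lookup itself.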

Let’s be clear: exposing potential vulnerabilities is still a Good Thing. But in the midst of media panic, it’s also appropriate to exercise our own critical thinking, and to re-establish our community baselines on what we should be focusing on. Creating another tsunami of potential vulnerabilities can do more harm than good, further overwhelming already stressed-out security teams and muddying the waters for the ongoing industry efforts to secure open source software.

Thinking critically about CVEs

The discussions in the threads linked above raise some very interesting questions.

For example, if a vulnerability requires privileged access to files inside a system in order to be exploited, then is it really a vulnerability? Almost any piece of non-trivial modern software can be configured to operate insecurely. By analogy, it's perfectly possible to configure sshd to allow login with empty passwords. Should we consider that a novel vulnerability in sshd?

One of the mechanisms our security team uses is to consider expected vs unexpected behavior. If a piece of software can be configured to execute remote code, and that feature is well documented, does it constitute a weakness that could be exploited? The argument could be made that this is expected behavior, and not actually a security vulnerability per se.

Should we design software to have no configurable modes which are insecure? This could be perceived as a laudable goal, but it somewhat conflicts with how we have designed open source software for the last 30 years or more. Typical design goals for open source software have been to aim for maximum levels of configurability, to allow for all use cases. Sensible security defaults have become a secondary goal over recent years, but the principle has always been to empower the user with options to configure the software in any way they see fit, secure or not.

We also currently live in a world where CVEs are both easy to raise, by individuals and companies alike, and valuable, in the sense that there is credibility and cachet attached to them. This combination can plausibly result in questionable assignments. On the other hand, having frictionless ways to report potential security issues cannot be anything other than a positive thing, and so we are presented with something of a paradox.

As an industry, we are getting much better at identifying vulnerabilities while at the same time creating vastly more software, so it's easy to see the potential for overload.

Correctly identifying vulnerabilities in software, assessing them, and providing mitigations is a deeply resource-intensive process, and one that is predominantly manual, particularly for complex cases. Whilst ML approaches are becoming more useful, and AI may yet prove even more worthwhile, in many cases (ironically) computers are not that useful to us in diagnosis.

Editor's note (19 Dec 2021): Since publication, the Logback project has assigned CVE-2021-42550 to the issue linked to above. Based on the rationale we outlined in this post, Snyk is not going to add an advisory for this issue at this time. We’ll continue to engage with the Logback community and hope that a consensus can be reached through open discussion.

Looking ahead

The discussion points outlined here aren't aimed at changing any specific CVE; rather, they are intended to open a conversation about the future of vulnerability assessment, and about what we consider to be insecurities now and going forward. Building powerful and versatile open source software that is fit for generalised use cases inevitably produces configurable modes that provide more or less security. If we want to continue doing that, then there is also a responsibility on the part of the user to be aware of the pitfalls in use. Perhaps the answer is better documentation and education as much as classifying potential vulnerabilities in normal use cases.

We’d love to hear the community’s thoughts here. Reach out to us on social (@snyksec) and let’s discuss.


Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.
