
Understanding Responsible Disclosures

Author:
Tim Kadlec

January 31, 2017


Addressing security vulnerabilities is a constant battle. It’s a race between attackers and the organizations trying to keep them out. Unfortunately, the organizations lose frequently. As former FBI Director Robert Mueller put it, “…there are only two types of companies: those that have been hacked and those that will be.”

To defend your system, you need to be able to lock down the entire application. An attacker, on the other hand, needs only to find a single opening. Organizations need a head start, and how they learn about security vulnerabilities in their code plays a significant role in that.

At first glance, the issue seems straightforward. If you find a security issue, contact the organization and let them know. They fix it, and we all get to move on, happy and more secure. The reality, however, is that vulnerability disclosure has long been one of the most widely debated topics in security.

Minimizing the window of exposure

Bruce Schneier popularized the idea of the “window of exposure”: the window of time during which a vulnerability is at risk of being exploited.

The window of exposure begins the moment a vulnerability enters a production environment. The risk of attack at this point is relatively low. The vulnerability exists but has not yet been discovered by anyone.

As soon as someone discovers the vulnerability, the risk increases. From there, the risk continues to grow as the vulnerability becomes more widely known. At some point, a patch or upgrade gets released, and the risk slowly decreases as users start to install the fix — something that rarely happens quickly.

Schneier likes to use a graph to visualize the amount of risk, with the area under the graph representing the window of exposure. The goal, then, is to reduce that window of exposure.

[Figure: the window of exposure, showing the risk of exploitation over a vulnerability’s lifecycle]
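One rough way to make the idea concrete (an illustrative formalization, not notation Schneier himself uses): treat risk as a function of time, and the window of exposure as the area under that curve.

```latex
% Illustrative formalization of the window of exposure (not Schneier's own notation).
% r(t)       : risk of exploitation at time t
% t_release  : the vulnerability enters production
% t_patched  : the fix has been rolled out to effectively all affected users
\[
  \text{Exposure} \;=\; \int_{t_{\mathrm{release}}}^{t_{\mathrm{patched}}} r(t)\,dt
\]
% Reducing exposure means lowering r(t) -- keeping the vulnerability less widely
% known -- or shortening the interval: faster discovery, fixing, and rollout.
```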

How that is best accomplished is what the debate revolves around.

If a vulnerability falls in a forest and no one hacks it, did it exist?

One argument is that if you discover a vulnerability, you should report it to the organization and never go public with it. This approach, the argument goes, helps to keep the vulnerability safely in the unpopular phase of its lifecycle, minimizing the risk.

This line of thinking has a few flaws. For one, it makes the dangerous assumption that the good actor is the only one who found the issue. If bad actors have also identified the issue, they’re not going to sit around and wait for the organization to figure out it’s a problem; they’re going to take advantage of the fact they have access to an unpatched vulnerability.

The other flaw in this approach is that it assumes the company cares enough about the vulnerability to do something about it. The reality is that there is a problem of incentive. Early on, this was the most common way vulnerabilities were disclosed, and in many cases organizations took years to patch a vulnerability, choosing to believe that because an issue was not publicized, they enjoyed some level of security. Ignorance as a layer of security, essentially.

In some cases, fearing the bad publicity that can come with some security issues, organizations would even threaten the researcher who tried to report the issue. Sadly, this still happens from time to time, which is why some security researchers prefer to disclose to an intermediary organization or individual who will then disclose to the company. It’s something we are willing to do, and have done, here at Snyk. This helps protect the researcher from an overly aggressive owner or organization.

Full public disclosure

On the exact opposite end of the spectrum is the idea of full public disclosure. Instead of reporting to an organization, a researcher could go public with the disclosure in its entirety. The vulnerability skips right past the slow ramp up of popularity and jumps to peak awareness. From here, it’s a race between the organization and the attackers.

This approach does create an obvious incentive for the organization to address the issue quickly. The flaw with this approach is relatively obvious, though: the same publicity that creates the incentive also puts organizations—and their users—at significant risk, since attackers are immediately aware of the vulnerability and how to exploit it.

Responsible disclosure

There’s a middle ground that the majority of the industry has settled on, called responsible disclosure. Responsible disclosure involves a few basic steps (sketched in code after the list).

  1. The vulnerability is privately disclosed to the owner or organization.

  2. A fix for the vulnerability is created, typically by the owner or organization (though the reporter often assists).

  3. The fix is published and rolled out to users.

  4. The vulnerability is publicly disclosed. This disclosure includes information about the vulnerability, how the exploit works, and how to fix the issue.
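To make the ordering explicit, here is a minimal sketch of those stages as they might be tracked in code; the names are purely illustrative and not part of any standard or of Snyk’s tooling.

```python
from enum import Enum, auto

class DisclosureStage(Enum):
    """Illustrative stages of a responsible disclosure, in order."""
    REPORTED_PRIVATELY = auto()   # 1. vulnerability disclosed privately to the owner
    FIX_DEVELOPED = auto()        # 2. a fix is created, often with the reporter's help
    FIX_RELEASED = auto()         # 3. the fix is published and rolled out to users
    DISCLOSED_PUBLICLY = auto()   # 4. full details are published so users can respond

# The defining property: public disclosure comes last, once a fix is already available.
assert DisclosureStage.DISCLOSED_PUBLICLY.value == max(s.value for s in DisclosureStage)
```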

This may sound similar to the idea of keeping vulnerabilities a secret, but there’s one important difference: a responsible disclosure typically operates within some sort of reasonable time limit.

For example, here at Snyk we initially give the owner of a package 30 days to respond to our disclosure—a fairly standard duration for open-source development, where authors often have a different day job. In some cases, the owner responds, and at that point we can help point them in the direction of a fix and work with them to agree on a reasonable time to publicly disclose the vulnerability so that people can take steps to protect themselves.

In a perfect world, this is how every vulnerability disclosure would proceed. The reality is a little different, though, and sometimes the owner is unresponsive. Following responsible disclosure means that if an organization does not respond within the specified time limit, the researcher can choose to go public with the vulnerability. This lets users become aware of the risk so that they can decide whether it’s one they want to carry in their own systems. It also provides an incentive for owners or organizations to prioritize addressing the issue, since it is now out in the open.

In the case of Snyk, we do our best to ensure owners or organizations have a head start. If we do not hear from them within 30 days, we re-contact them and give them 10 more days. If we still do not hear from them, we repeat this one more time. All in all, we give them 50 business days to respond. Only if there is no response during that time, or the owner indicates they don’t want to coordinate the disclosure, do we go public.
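For illustration, that cadence could be computed roughly as follows. The 30/10/10-day windows are the figures described in this post, but the helper itself is a hypothetical sketch (it treats the windows as plain calendar days for simplicity) rather than Snyk’s actual tooling.

```python
from datetime import date, timedelta

# The contact cadence described above: an initial 30-day window,
# followed by two 10-day follow-up windows. (Hypothetical sketch;
# windows are treated as plain calendar days for simplicity.)
CONTACT_WINDOWS_DAYS = [30, 10, 10]

def disclosure_deadlines(first_contact: date) -> list:
    """Return the date on which each response window expires."""
    deadlines = []
    current = first_contact
    for window in CONTACT_WINDOWS_DAYS:
        current += timedelta(days=window)
        deadlines.append(current)
    return deadlines

for i, deadline in enumerate(disclosure_deadlines(date(2017, 1, 31)), start=1):
    print(f"Response window {i} expires on {deadline}")
# Only if every window passes with no response (or the owner declines to
# coordinate) would the vulnerability then be disclosed publicly.
```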

There is one notable exception in this process. Our enterprise users get early notifications about the vulnerability under a non-disclosure agreement (NDA) that ensures the vulnerability will not yet be made public.

The ethics of disclosure

In 2016 a cybersecurity firm discovered vulnerabilities in St. Jude Medical’s equipment (pacemakers, defibrillators, etc.). Instead of going through a responsible disclosure process, the firm released incomplete data about the vulnerabilities and then proceeded to partner with another organization to short-sell St. Jude Medical.

It brought the responsible disclosure debate back in full force. The CEO of the firm argued that responsible disclosure would not have been a practical approach, given their prior experiences with St. Jude.

Even if we set aside the questionable ethics of short-selling after publishing this incomplete data, the damage of the approach is significant. Incomplete data is enough to hurt St. Jude’s reputation without helping them get any closer to fixing the issues at hand. Instead, it puts St. Jude and attackers on a level playing field, with both now aware an issue exists and both racing against each other to see who discovers it first. Whether St. Jude would have responded within a reasonable amount of time, we’ll never know—St. Jude never even had that option.

Security is an interesting field in that doing it well requires a certain level of distrust of the people who will be accessing your servers and applications. This makes it all the more critical that we act in a trustworthy and ethical way when we work to improve the state of security online.

It is critical to disclose vulnerabilities in a way that is ethical and responsible, inflicting as little damage as possible on the organization or owner while still protecting users. We firmly believe that a responsible disclosure process provides the right balance. It gives organizations a head start: they can privately be made aware of a vulnerability and have time to address it before the window of risk grows too large.

And it does this without putting users at risk. If an organization fails to prioritize a vulnerability or does not respond, the vulnerability does not get swept under the rug — it can be publicly disclosed so that unsuspecting users can be made aware of the issue and take appropriate steps.

Security is too important to sweep under a rug, and there’s far too much to be learned from vulnerabilities to allow this to happen. But organizations do deserve the chance to address these vulnerabilities before they’re made public. Security is challenging enough without giving attackers an unnecessary head start.

If you would like to learn more about how we handle vulnerability disclosures at Snyk, our entire policy is available online. If you discover a vulnerability that you would like us to help disclose, we’re happy to help with that as well.
