November 8, 2019
This is the final part of a four part series about building your Kubernetes AppSec strategy.
The previous parts are available here:
One topic we haven’t touched on in our discussion of application security for Kubernetes is the differences between first party and third party applications.
From the point of view of your production systems, there really isn’t much of a difference between the two. Whether you wrote the application or someone else did, the similarities dominate: either way, the app has some access to production data, and vulnerabilities in the application could result in some level of compromise of that data or associated systems. From a classic operations perspective it’s easy to treat first and third party apps the same. But what happens as we shift some of the operations responsibilities into development teams?
In Kubernetes, it’s common to install third party applications using a tool like Helm. The Helm Charts repository contains configuration for hundreds of applications, from aerospike to zetcd, which is a huge time saver. Software vendors are also increasingly packaging the applications they distribute as Helm charts. A Helm chart contains the configuration required to install the application, which in turn usually references a set of third party images. More recently we see work on Cloud Native Application Bundles (CNAB) following a similar pattern.
Let’s take a quick look at a sample Helm chart, in this case for Linkerd.
linkerd
├── Chart.yaml
├── README.md
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── config.yaml
│   ├── daemonset.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml

1 directory, 9 files
What are some of the contents of the chart that might impact security?

In the values.yaml file we find details of a few container images, which may contain vulnerabilities:
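As an illustrative sketch (the image names and tags here are hypothetical, not the chart’s actual values), those references typically look something like this:

```yaml
# Hypothetical excerpt from a chart's values.yaml -- real charts vary.
controller:
  image: gcr.io/linkerd-io/controller:stable-2.6.0
proxy:
  image: gcr.io/linkerd-io/proxy:stable-2.6.0
```

Each of these images is pulled from a public registry at install time, so each is a potential source of known vulnerabilities.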
The values file also sets sensible resource limits.
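A sketch of what such limits look like in a values file (the numbers here are invented for illustration, not the chart’s real defaults):

```yaml
# Hypothetical values.yaml excerpt -- sensible requests and limits,
# which bound how much CPU and memory a misbehaving container can consume.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```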
The templates, in particular the daemonset, contain a container spec which can have various security properties configured, including whether containers run as root, have a read-only filesystem, only request the capabilities they need, set a pod security policy, etc.
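As a sketch of the security-relevant fields a container spec can set (the container name and image below are hypothetical):

```yaml
# Sketch of a hardened container spec inside a daemonset template.
spec:
  containers:
    - name: app
      image: example.com/app:1.0.0   # hypothetical image
      securityContext:
        runAsNonRoot: true              # refuse to run as root
        readOnlyRootFilesystem: true    # no writes to the container filesystem
        allowPrivilegeEscalation: false # block setuid-style escalation
        capabilities:
          drop: ["ALL"]                 # start from zero capabilities
          add: ["NET_BIND_SERVICE"]     # add back only what's needed
```

Whether a third party chart sets any of these is entirely up to its author, which is exactly why you want to inspect it before install.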
Third party applications throughout the SDLC
How do these packaging considerations map to our SDLC, and how do we enforce security throughout the lifecycle?
Local - if you’re consuming third party charts, none of your development teams may be working on them locally, so local checks may never run.
CI/CD - depending on how you install the charts, you may reference them in some configuration file (a Helmfile or Terraform config, for instance), but you most likely don’t have testing in place there beyond, potentially, some smoke tests.
Registries - Helm charts typically refer to images in public repositories. It’s generally possible to override those and use internal images instead, but you will need a process for keeping your internal copies of those images up-to-date.
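For example, many charts expose the image reference through values, so an override file pointing at an internal mirror might look like this (the registry hostname is hypothetical, and not every chart follows this `image.repository`/`image.tag` convention):

```yaml
# internal-values.yaml -- hypothetical override, assuming the chart
# parameterizes its image via image.repository and image.tag.
image:
  repository: registry.internal.example.com/mirrors/linkerd-proxy
  tag: stable-2.6.0
```

You would then pass this at install time with something like `helm install -f internal-values.yaml`, keeping production pulls inside your own registry.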
For a standalone application, the earliest stage at which you can check for vulnerabilities today may well be when you deploy it into production. This means we have a disconnect between the reality of pushing decisions about third party content down to developers, and the capabilities of our tools to provide tight feedback about whether those third party applications are safe to use.
Types of third party applications
Generalizing a little, we can classify third party applications into two groups:
Standalone applications that provide some specific discrete value, like Wordpress or Jenkins.
Direct dependencies of some first party application, for instance Redis or PostgreSQL.
When we look at how third party applications are introduced into our environments, we’ll see why this distinction is useful.

Standalone applications are often more of a concern for operations or platform teams: they are installed and managed by a central team that provides the application as a service to other development teams. Direct dependencies are more often in the hands of the individual development teams. When thinking about third party supply chain security we need to consider both of these perspectives.
What can we do about the problem? The answer is to push for more automation, and to design pipelines that make it easy to validate and test third party content earlier in the pipeline. This pipeline should work for standalone applications (which might have no existing pipeline) and for testing dependencies of existing applications.
Solving this requires us to think about the full software supply chain. We need local tools that can sanity check a Helm chart, Operator, or similar bundle of configuration and image references. We need CI/CD pipelines that we can quickly stand up on demand, so that as third party content changes we understand how those changes affect the risk of using it. We need streamlined pipelines for bringing external images into our own trusted registries. It would also be nice to see standards emerge for sharing trusted vulnerability data.
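As a sketch of what such a local sanity check could look like (a toy, not any existing tool: the parsing is deliberately naive and the sample values are hypothetical), a script might pull image references out of a chart’s values file and flag any that are unpinned or use a mutable tag:

```python
import re


def find_images(values_text):
    """Extract single-line `image: repo:tag` references from values.yaml text.

    A naive sketch: real charts may split repository and tag across keys,
    which this deliberately does not handle.
    """
    images = []
    for line in values_text.splitlines():
        m = re.match(r"\s*image:\s*[\"']?([\w./-]+(?::[\w.-]+)?)[\"']?\s*$", line)
        if m:
            images.append(m.group(1))
    return images


def mutable_tags(images):
    """Flag images with no tag at all, or pinned to the mutable ':latest'."""
    return [img for img in images if ":" not in img or img.endswith(":latest")]


# Hypothetical values.yaml content for demonstration.
sample = """\
controller:
  image: gcr.io/linkerd-io/controller:stable-2.6.0
proxy:
  image: gcr.io/linkerd-io/proxy:latest
sidecar:
  image: busybox
"""

images = find_images(sample)
print("images found:", images)
# flags proxy:latest and the untagged busybox
print("risky tags:", mutable_tags(images))
```

Even something this crude gives a development team feedback before install, rather than discovering an unpinned `:latest` image in production.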
It is possible to safely use third party applications in an environment that is rapidly shifting responsibilities to developers. But it means designing the process for doing so, and then embedding as much of that process as possible into the Secure SDLC.