If you’ve ever scanned a container image for vulnerabilities, you’ve likely found more than a few issues. Up to hundreds, even thousands. This new guide “Container Security For Development Teams,” co-authored by Snyk and Docker, focuses on the container image and the software packaged up inside.
It begins with a look at why container security is important. Containers are increasingly popular, but they do present security risks that can expose a business to millions of dollars in fines, lost productivity, and reduced sales.
Part two outlines a clear, 3-step process to create a secure container image. Want to get started on this? Jump right to this section and sign up for a free Snyk account to easily find and fix vulnerabilities in Docker images and open source libraries.
Container security can be defined as the holistic application of processes, tools, and other relevant resources, with the aim of providing strong information security for any container-based system or workload.
There is a broad set of considerations under the banner of container security because containers include both the container image and the running container, plus all the linkages required to go from creating that image to getting it running somewhere.
One analogy is to compare container security to flight safety: in order to feel safe and secure on an airline flight, we want to know that the airplane we’re sitting in has been well-constructed, following all required safety regulations. But we also want to know that there are in-flight systems that keep us safe as we cruise miles above the earth at hundreds of miles per hour. And having only one of these is not sufficient: we really want both of those guardrails in place to feel secure as we travel.
Container security is very much the same. The container image is our airplane: it needs to be well-constructed following all the security guidelines and eliminating any potential issues. After all, these are the components that are eventually going to be running your application. If there are security problems packed into the container image as it’s built, you increase the risk and potential severity of issues that will happen in production. To that end, you want to monitor production as well. You can create images with no vulnerabilities and no elevated privileges, but that does not absolve you of monitoring the things going on in production.
In this guide, we’re going to focus most of our attention on container image security – building the airplane in our analogy. Containers combine operating system elements with application code; typically there is also a bit of code called a Dockerfile that provides the instructions on how a particular container image gets built. As containers are often defined and built by developers and DevOps teams, the maintenance of operating system components and packages, previously managed by dedicated system administrators and virtual machine managers, is shifted to developer and DevOps teams.
This shift in responsibility, combined with the fact that containers can be updated and deployed in a matter of seconds, requires a new security methodology. There are any number of container security guides available that will tell you to “scan your containers in CI” and “follow container image best practices,” but few that get into the details of what to do when your scan results show hundreds of vulnerabilities.
Merely providing developers with a long list of detected vulnerabilities is not adequate for fixing issues inside containers.
On the other hand, most organizations do not have the security headcount to be able to scale to meet the demands of faster deployments. So in this guide we will show how DevSecOps practices can be used, together with the right tools, to make container image security actionable and efficient for developers.
Why is container security important?
Container security is important for the same reason traditional infrastructure security is important: a security compromise puts customer data at risk and potentially exposes a business to millions of dollars in fines, lost productivity, reduced sales, and potential dissolution.
Docker popularized and simplified the use of container technologies and made it simple for developers all over the world to publish their container images on Docker Hub. In the past several years, the use of containers has exploded and the number of container images that are publicly available has grown exponentially. On Docker Hub alone there are more than 7 million repositories, from which more than 11 billion images were pulled in the last month! As more organizations shift to a container-first development and operations model, these numbers will continue to grow.
With so many images to choose from, selecting a good starting point to build upon seems like it would be easy. The nature of containers makes it possible to take any existing container image and add your own software and tools to create a new, unique image. Unfortunately, just picking any image is not a good practice. Some images you find may be old, which opens the door to long-exposed vulnerabilities. Some images are poorly constructed, not due to ill intent but simply due to poor practices. Other images can be malicious, created by bad actors to look like something useful while harboring crypto-miners and backdoors.
As you and your teams build upon the parent images you choose, you need to make sure you’re keeping up-to-date with the latest versions of those images plus maintain all the packages and code you add to the containers as well. This is the responsibility of developers and DevSecOps teams and we will take a look at not just the technical tooling that will help, but some DevSecOps practices that organizations are using to “build a safer airplane.”
Three steps to creating secure container images
As we noted earlier, container security is not a single area of concern – it spans developers, security, and operations teams. There are an array of security layers that apply to containers:
the container image itself and the software inside,
the interaction between a container, the host operating system, and other containers on the same host,
the host operating system itself,
container networking and storage concerns,
and security at runtime, often in Kubernetes clusters.
Each of these bullets deserves a guide of its own to do it justice, and all but the first bullet do have one or more guides available. This guide focuses on the container image and the software packaged up inside.
At a high level there are three key steps to creating a secure container image:
Secure your code and its dependencies
Build up with a minimal base image from a trusted source
Manage the tools and packages you add to images throughout the development lifecycle
We will consider each of these in a bit more detail to see how this approach can create secure container images.
1. Secure your code and its dependencies
Delivering your applications faster is likely one of the key reasons you’re creating containers in the first place, and those applications are the lifeblood of your organization. In the not-too-distant past, application security began and ended with the code, and while containers and other modern development practices have expanded the meaning of “application code,” this particular area of concern still remains.
Fortunately, this is the portion of container images that’s most directly controlled by developers and, hopefully, the best understood. Nonetheless, tracking down all your code dependencies and figuring out how to fix security issues isn’t trivial. Assuming you have access to the source code itself, you should use purpose-built tools – like Snyk Open Source – to do software composition analysis (SCA) and static application security testing (SAST) to analyze your code and its dependencies. In modern applications, it’s not unusual for the 3rd party open source dependencies to make up the majority of the lines of code in an application.
Spotting issues early in development and integrating tools with your source code opens up the possibility to automate this process, independent of the containerization process. It is possible to scan a container and analyze some types of code, however, catching these issues directly in your git commits and repositories will likely fit a developer’s process better.
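As a sketch of what this looks like in a developer workflow, the Snyk CLI can be run directly against a project checkout. This assumes the CLI is installed and authenticated (snyk auth); the severity threshold shown is just one example of a policy you might set:

```shell
# Scan the project's open source dependencies for known vulnerabilities
snyk test

# Or fail only on issues at or above a chosen severity
snyk test --severity-threshold=high

# Keep monitoring the project for newly disclosed vulnerabilities
snyk monitor
```

The same commands can run as a CI step, so new dependency issues are caught on every commit rather than at containerization time.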
2. Start with a minimal base image from a trusted source
What’s the big deal about small images?
The base image – the FROM line in your Dockerfile – is one of the most important considerations when it comes to security. Fortunately, many trustworthy vendors provide content you can easily use. Docker Hub is by far the most popular starting point for sourcing container base images.
Docker Hub has more than 3.8 million available images and more than 7 million repositories. Docker Hub is very active and sees about 11 billion pulls per month. Some of these images are Official Images, which are published by Docker as a curated set of Docker open source and “drop-in” solution repositories.
Docker also offers images that are published by Verified Publishers. These high-quality images are published and maintained directly by a commercial entity whom Docker verifies as a Verified Publisher. Docker’s guidelines for these verified publishers to follow are a great starting point for defining your own internal container image best practices.
It’s easy to go to Docker Hub and find a publicly available image that matches your use case, but you need to pay attention to the provenance of the images you choose. Just like you wouldn’t download and install software from an untrusted website, you likely would not want to use images pushed to Hub by users you don’t know and trust.
By using images that are part of Docker’s Official program, or if you know and can verify the source and contents of 3rd party images – perhaps using something like Notary to check digital signatures – then you have some level of assurance of quality. But to further reduce the number of vulnerabilities and add more control to what is packaged inside your containers you should go a step further and choose minimal base images matched to your needs.
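One concrete way to verify signatures is Docker Content Trust, which is built into the Docker CLI; the image name below is just an example:

```shell
# With content trust enabled, docker only pulls images
# that carry a valid, trusted signature
export DOCKER_CONTENT_TRUST=1
docker pull python:3-slim-buster

# Inspect the signing data for an image
docker trust inspect --pretty python:3-slim-buster
```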
As an example, figure 5 above shows a Python repository, and you can certainly build your Python app on it and it will almost certainly work. That’s because the image tagged latest on Docker Hub is designed to be easy to work with across many different use cases and it’s well maintained. But there are more than 1000 other images in this repository.
Should you just use whatever comes with the easy-to-remember python tag, or are there smaller images that would suit your needs and also reduce your security footprint? The answer, as you might surmise, is that there are almost certainly better choices from a security perspective.
Container image size matters and not just for portability and fast downloads. The image tagged python is simple to use because it comes with a fairly large footprint of pre-installed operating system libraries and developer packages. That means it will likely work great with a range of projects and will have everything you might need to compile code and dependencies; but vulnerability scanners might produce a long list of issues to track down, too.
Container image security best practices
In addition to selecting images from trusted, verified sources, you should consider selecting images with minimal operating system footprints.
This reduces the number of “moving parts” that can potentially harbor vulnerabilities. In an ideal world, each container image would have your code and just enough other packages to enable an app to run and nothing else. But in practical terms, you are going to have a large number of apps and need to find some common ground to make container images manageable.
Going back to the earlier Python example, there is another image in the python repository tagged 3-slim-buster, and the difference between the python and python:3-slim-buster images is pretty stark when we look at the size and number of detected vulnerabilities. The slim image is just 12% of the size of the fuller Python image, and a vulnerability scan of both images shows far fewer vulnerabilities and dependencies as a result, as shown in figure 6 below.
You might wonder about the fact that both images do still have vulnerabilities and, in particular, that they both have high severity vulnerabilities. What you find from looking at these particular vulnerabilities is that they are part of the underlying OS packages; none of them have fixes available, and none have known exploits in the wild. In addition, as part of Docker’s verified publisher process, both images have been updated with the latest versions of all their packages within the past few days, so they are, in fact, well maintained.
We should also consider some context; often the vulnerabilities that do pop up are in development tools that you would likely want removed from the production version of your image: tools like curl, development libraries, or even from shells and package managers. But over time, the chances of new vulnerability discoveries affecting the larger Python image are much greater than with the slimmer image.
Putting container image security into practice example: base image selection
As we stated earlier, “start with slim images” is advice you can get almost anywhere. But one of the reasons Docker and Snyk have partnered is to enable you to go from advice to action. The integrated vulnerability scanning functionality in Docker Desktop can actually handle some of the work of base image selection for you!
We’ll continue with our python example to see how we might go from the python image to the python:3-slim-buster image using Docker’s vulnerability scanning feature, powered by Snyk. You can follow along with these steps on your own if you choose.
First, we’ll start with our simple case and use the python image to build a very simple container image. Here’s our Dockerfile to do this:
FROM python
WORKDIR /app
COPY hello.py /app
CMD ["python3", "hello.py"]
It doesn’t get much simpler than that. The hello.py file is a very simple one-liner with a print("Hello, World!") statement.
Next, we’ll build the image and then run a scan on it:
$> docker build -t hello-python .
[+] Building 67.4s (5/5) FINISHED
 => [internal] load build definition from Dockerfile              0.4s
 => => transferring dockerfile: 36B                               0.1s
 => [internal] load .dockerignore                                 0.4s
 => => transferring context: 2B                                   0.1s
 => [internal] load metadata for docker.io/library/python:latest  1.6s
 => [1/1] FROM docker.io/library/python                          65.1s
 => exporting to image                                            0.0s
 => => exporting layers                                           0.0s
 => => writing image sha256:3a92e9...                             0.0s
 => => naming to docker.io/library/hello-python                   0.0s
$> docker run hello-python
Hello, World!
$> docker scan hello-python -f Dockerfile
/ Analyzing docker dependencies for hello-python/Dockerfile

Organization:      snyk-pmm
Package manager:   deb
Target file:       Dockerfile
Project name:      docker-image|hello-python
Docker image:      hello-python
Base image:        python:latest

Tested 431 dependencies for known issues, found 268 issues.

Base Image       Vulnerabilities  Severity
python:latest    268              6 high, 34 medium, 228 low

Recommendations for base image upgrade:

Alternative image types
Base Image                 Vulnerabilities  Severity
python:3-slim-buster       75               1 high, 10 medium, 64 low
python:3.9-rc-slim-buster  75               1 high, 10 medium, 64 low
First, note in the result the 431 dependencies and 268 issues found in the image. We have cut all the individual vulnerabilities for brevity – we’ll get to those in a bit. We didn’t really add anything interesting to the base python image so all 268 vulnerabilities come from the base, which you can see in the output as well.
But then at the end of the output, we get base image recommendations that can help us improve our security stance. Specifically, the python:3-slim-buster image shown earlier is listed. In fact, this is exactly how we arrived at our original comparison. We can already see that this new image will remove over 70% of the vulnerabilities we started with from the python image and get us down to one high severity vulnerability, but we will go ahead and build and scan again just to prove this out.
The Dockerfile is a simple change to the FROM line. We’ll save a separate copy called Dockerfile.slim.

FROM python:3-slim-buster
WORKDIR /app
COPY hello.py /app
CMD ["python3", "hello.py"]
And then we can build and scan again with a slim tag so we can keep our images separate:
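The commands might look like this (the tag name is arbitrary):

```shell
docker build -t hello-python:slim -f Dockerfile.slim .
docker scan hello-python:slim -f Dockerfile.slim
```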
This time you can see there are just 94 dependencies in the full scan results and only one high severity vulnerability. That’s how Docker and Snyk can guide you to better base images. We don’t provide coverage for every image in Docker Hub, as you might imagine, but we do cover most of the popular official base images.
3. Manage all the layers in between the base image and your code
We went into some depth on the base images because they require some special considerations. You inherit whatever comes in the base image as you build your own image on top of it, while a slim image reduces the security burden. But what about all those layers you add to the container? If you start with a slim image, chances are you’ll need to add tools and libraries, plus you’ll have your code and various things to install to make things work, and all of those need to be monitored for vulnerabilities.
The good news is you directly control these middle layers, which we consider to be everything after the first FROM line and the final Dockerfile lines where you set up your code to run. More specifically, we’re interested in the RUN, COPY and ADD commands in Dockerfiles as these are the ones that will install things. Technically, your code might be somewhere in these middle layers, too, but philosophically we’re going to call your code the final layer, mainly because we already dealt with the code in Step 1.
One of the most difficult things about managing vulnerabilities in these middle layers is prioritizing what to pay attention to in the various stages of the lifecycle.
At each stage you might need different sets of tools, but as images head to production you should remove everything that isn’t absolutely necessary. Customizing your images by starting with a minimal base and then adding your tools makes it very easy to remove these tools later by simply taking them out of the Dockerfile and rebuilding, or even better, by using multi-stage builds to capture all these stages in a single, automated build process.
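As a sketch of what that looks like, here is a hypothetical multi-stage Dockerfile: the build stage carries the full toolchain, while the final stage starts from a slim base and copies in only the built artifacts. Stage names and paths are illustrative:

```dockerfile
# Build stage: full toolchain for compiling code and dependencies
FROM python:3 AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Production stage: slim base plus only what the app needs
FROM python:3-slim-buster
WORKDIR /app
COPY --from=build /install /usr/local
COPY . .
CMD ["python3", "app.py"]
```

Only the final stage ends up in the image you ship, so the compilers and development packages from the build stage never reach production.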
Prioritizing security vulnerability fixes in containers
With that said, you are still going to find security vulnerabilities and need to determine how to handle them. Getting to zero vulnerabilities is great in theory, but in practice it’s often not achievable or worth the time spent.
Here is a suggested starting point for the traditional working stages of development, testing, and production. Your software production processes are likely to be more complicated, but you can adjust accordingly.
Development images: these will likely have the most vulnerabilities in the middle layers because they likely need the most tooling and support packages. The good news is that IF you’re building images in stages and your production images do not include all these extras, you may be safe ignoring many of the vulnerabilities at this stage. Part of making this decision requires being able to track dependencies installed in the container and match that up with what you know is required for your inner loop development work. It’s quite normal to have a vulnerability in a library that gets installed as a dependency of a dependency of a dependency…you need to be able to determine if simply removing one of your development packages will clear up the vulnerability.
In the example below, we have a Ruby application and it is common to have SQLite bundled with Ruby to simplify development; but you probably wouldn’t use that same SQLite database in production. Knowing this, and armed with the right details from the container vulnerability scan, you can make the decision to ignore vulnerabilities in the libraries that get installed with SQLite in development. We’ll see how the Docker scan provides this information and additional details that make this task much simpler.
Test images: Test images aren’t much different in practice than development images, at least in terms of how to consider vulnerabilities. If you know a vulnerability is part of a test package that will not be in the production image you may elect to ignore it. This is a good point to do a comparison of scan results with the development stage, especially if you elected to ignore any severe vulnerabilities in the development stage. Are they well and truly gone in test? If so, your process is working. If not, it might be time to go back and adjust your development image or build steps.
Production images: These are the critical images as they’ll actually be running somewhere. Still, getting to 0 vulnerabilities, even when you slim down and remove as much as possible, may be a challenge. The goal, in many cases, is to automate the release process, so you certainly want to address high-risk vulnerabilities, especially those with known exploits. But part of the reason you also want to scan the development and test stage images is to reduce the number of surprises you have when you are ready to release. If you’ve handled the risk early, then perhaps the main function of production scanning is just to find new, late-breaking vulnerabilities.
Let us take a look at another example to see how you can use Docker and Snyk to help with your middle layers.
Putting it into practice example: Prioritizing user-introduced vulnerability fixes
In this example, we want to show some practical “middle layer” techniques you can use with the Docker vulnerability scanning capabilities powered by Snyk. First, we’re using a slightly more interesting application this time, available from https://github.com/jimcodified/dockercon2020, if you want to try things out for yourself. Within that repo you’ll find an alpha-blog directory and this is the app we’ll use. It’s a Ruby app but that’s not terribly important for these exercises.
1. This sample app is from a DockerCon 2020 demo and there is a full step-by-step walkthrough available in the root directory of this repo.
Here’s our Dockerfile:
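A rough reconstruction of that Dockerfile, for illustration only (see the repository for the exact file):

```dockerfile
FROM ruby
# Utilities for local development inside the image
# (see the note below -- don't actually do this)
RUN apt-get update && apt-get install -y git vim
# Get the core ruby components updated and ready to go
RUN gem update --system && gem install bundler
WORKDIR /app
# Copy the application code into the image
COPY . /app
# Install all of the app's dependencies
RUN bundle install
```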
It’s still a pretty simple Dockerfile where we add several layers on top of our Ruby parent image:
The first RUN line adds in a couple of utilities to do some local development inside the image
Then we follow that up by getting our core ruby components updated and ready to go
Next, our code is copied over
Followed by the installation of all our dependencies
2. If you’re thinking “Installing git and vim in an image seems like a strange choice,” you are correct. Don’t do this. The full lab mentioned in the previous note goes into the history of this image and why these are here.
While it’s not terribly complex to see what’s happening, as you can imagine each of these Dockerfile lines ends up installing quite a bit, potentially adding new vulnerabilities to our image.
We can build this image and test it the same way we did before:
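The build and scan commands are the same shape as before (the image name is arbitrary):

```shell
docker build -t alpha-blog .
docker scan alpha-blog -f Dockerfile
```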
Clearly, whoever created this Dockerfile neglected to follow our advice in the previous section: 871 vulnerabilities, and our parent image starts out with 867! We’ll have to track that person down and give them a copy of this guide. But our immediate focus is to figure out whether we added vulnerabilities in any of our own Dockerfile commands. The difference between the total issues and the base image issues indicates there are at least four vulnerabilities for us to address.
The docker scan command can help us narrow this down pretty quickly by ignoring all the vulnerabilities from the base image via the --exclude-base option. Here’s another scan and a snippet of the output when we exclude the base image vulnerabilities:
3. The --exclude-base option requires the inclusion of the Dockerfile as part of the scan (the -f Dockerfile option we’ve been using)
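The scan invocation might look like this; the output then lists only the issues introduced by your own Dockerfile commands:

```shell
docker scan alpha-blog -f Dockerfile --exclude-base
```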
This is a bit more manageable: 70 issues vs 871. If you scroll through the list of detected vulnerabilities, they also have a line that begins with Introduced in your Dockerfile by. We also have the dependency path that allows us to trace a particular vulnerability back to its source, but having the actual Dockerfile command gets us directly to the point of introduction.
Still, 70 vulnerabilities is a bit much to deal with all at once.
Quite often, what security and development teams want to focus on first is all the fixable, high-severity vulnerabilities.
We can get to that level of detail pretty easily as well by taking advantage of the JSON output option with a bit of filtering:
There we go! In our final view, there are 36 vulnerabilities listed, all of which are high severity with a fix available, along with our Dockerfile command and the fix version. From here we should be able to fix these vulnerabilities. If you’re not familiar with the jq command, this may look complex, so here’s a quick overview of what we’ve done; jq is pretty powerful and worth taking a little time to learn:
First, we added the --json output option to our docker scan command
Next, we get the vulnerabilities from the output
Then we select only vulnerabilities that have a fix available and, similarly, only vulnerabilities with a “high” severity
And finally, we tidy up the output a bit by only showing a handful of the vulnerability fields
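To see how such a filter behaves, here it is applied to a tiny hand-made JSON sample that mimics the shape of docker scan --json output. The field names used here (vulnerabilities, severity, nearestFixedInVersion, dockerfileInstruction) are illustrative assumptions; check your own scan output for the exact schema:

```shell
# Hand-made sample mimicking `docker scan --json` output (illustrative fields)
cat > scan-sample.json <<'EOF'
{
  "vulnerabilities": [
    {"id": "VULN-1", "packageName": "curl", "severity": "high",
     "nearestFixedInVersion": "7.64.0-4+deb10u2",
     "dockerfileInstruction": "RUN apt-get install -y curl"},
    {"id": "VULN-2", "packageName": "vim", "severity": "low",
     "nearestFixedInVersion": null,
     "dockerfileInstruction": "RUN apt-get install -y vim"}
  ]
}
EOF

# Keep only fixable, high-severity issues and a handful of useful fields
jq '[.vulnerabilities[]
     | select(.nearestFixedInVersion != null and .severity == "high")
     | {id, packageName, severity, nearestFixedInVersion, dockerfileInstruction}]' \
   scan-sample.json
```

Against a real image, the same filter is simply fed from the scan itself, e.g. docker scan hello-python -f Dockerfile --json piped into jq.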
Container security is a wide-ranging topic, and even narrowing the scope to just image security presents several security vectors to examine. But when it comes to securing your images, here are the key points to reflect upon:
Start with base images from a provider you trust. Use digital signatures to verify authenticity.
When possible, opt for minimal base images that have only the basic operating system packages, your framework version of choice, and then build up from there.
Check your images for vulnerabilities early and often. Create your own approved base images that are actively maintained and pass all your security checks but scan again as new images get created.
Scan in multiple places in the software lifecycle: the desktop, in CI, stored images in registries, and the containers / pods actively running in your clusters.
When you are choosing tools to perform your scanning, look beyond the vulnerability list they provide:
Will the tool go beyond just reporting vulnerabilities and alert you to the fact that there may be a newer or better base image to use?
If a build fails due to vulnerability detections, will the tool provide developers and DevOps teams with enough information to fix the issues?
Does the tool provide the flexibility you need to set your security gates?
One size doesn’t always fit all – your developers will likely need more tools in an image than you’d allow in production, so you might have different images at each stage of the lifecycle. Automation, CI, and Dockerfiles that support these stages, enforce appropriate security gates, and strike the right balance of security and productivity will provide the most benefit.
Can containers be secure?
Absolutely. But, like any infrastructure, they rarely come that way by default. Security is a holistic process, and containers are no different. However, the same processes and tools that were once used on traditional infrastructure might not be adequate to provide strong container security. Containers have changed the landscape of distributed systems, and new methods must be employed to secure them. There is a broad spectrum of container security solutions that can and should be employed to help provide the best possible security when it comes to containerized workloads.
How do I fix security vulnerabilities in containers?
For container images it’s a four-step process: take care of the vulnerabilities in your code and dependencies; choose base images with only what you need – start slim and add rather than deducing what to remove; evaluate the extra tools and packages you add – as containers progress closer to production the number of extras should be zero; and ensure the container is configured to run with as few privileges as possible.
How to secure a Docker container image?
Securing a Docker container image is a multi-faceted process. During the planning phase of an application, ensure that the software doesn’t require any security anti-patterns to function correctly, such as running as root. Docker’s own documentation also provides a great starting point for how to secure an image, specifically addressing the matter of trust. Trust controls can be implemented even on private registries, and doing so is highly recommended. From there, ensuring base images maintain only a minimal profile of packages and dependencies, and utilizing analysis tools to continuously monitor for vulnerabilities, will help to secure your Docker images.
What is container scanning?
In the context of security, container scanning refers to any tool or process that scans a container or container artifact for potential security compromises, misconfigurations, or vulnerabilities. Scanning tools can encompass code, code dependencies, container configuration, and container runtime configuration, as well as other potential domains.