The use of containers has grown exponentially over the past several years. While container technologies have existed for decades, it was the launch of Docker in 2013 that made it more practical for organizations to adopt a container-first development and operations model.
Along with this growth come security risks. With millions of available images to choose from, securing containers has become a dedicated discipline. Many layers of security apply to containers, such as:
The container image and software inside
The interaction between the container, host operating system, and other containers on the host
The host operating system
Container networking and storage repositories
The runtime environment, often in Kubernetes clusters
As each layer deserves a guide of its own, this guide focuses on the first aspect: the image and your code. A single container image can contain hundreds or thousands of vulnerabilities, which can expose your organization to security incidents (like a breach) and lost productivity (from the amount of time it takes to triage and assess vulnerabilities as the number of images in use grows).
Application security was traditionally the responsibility of dedicated security teams, but the way containers are defined and built means that responsibility increasingly falls into the hands of developers and DevSecOps teams.
This shift in responsibility, along with the speed at which containers can be updated and deployed, requires a practical security methodology that goes beyond scanning your containers in CI and following image best practices: one that helps you act when a scan reveals hundreds of vulnerabilities.
What is container security?
Container security is the process of implementing security tools and processes to provide strong information security for any container-based system or workload — including the container image, the running container, and all the steps required to create that image and get it running somewhere.
We’ve previously created a guide for container security with Docker. Check out our 3 practical steps to secure a container image for more hands-on guidance. In this post, we’ll give an overview of the DevSecOps practices organizations are using to build safer container images and running containers, and introduce the technical tooling — such as Snyk Container, Snyk IaC, and our partnership with Sysdig — that provides comprehensive container security from development to runtime.
Container security is important because the container image contains all the components that will, eventually, be running your application. If there are vulnerabilities lurking in the container image, the risk and potential severity of security issues during production increases. To that end, you want to monitor production as well. You can create images with no vulnerabilities or elevated privileges, but you still need to monitor what’s happening in runtime.
Docker container security
Docker's enormous user base — with tens of millions of users and hundreds of billions of image pulls — shows that containerization is changing how applications are built. The responsibility for security is increasingly shifting to developers. It's important to scan Docker images before pushing them to Docker Hub or other registries in order to find and fix vulnerabilities in Linux packages, user permissions, network configurations, open source tools, or access management. Such a scan can help you uncover and remediate vulnerability issues in your application and infrastructure before you ship.
Kubernetes container security
Kubernetes offers a myriad of security controls to help make your clusters, workloads, and containers safer. It's important to note that none of these controls are configured when you deploy Kubernetes, and the default security configurations often aren’t enough. Deploying workloads securely requires expertise in Kubernetes. Check out our page on Kubernetes security best practices to learn more.
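As an illustration, many of these controls are opted into per workload through the pod spec. The sketch below (pod name and image are hypothetical) shows a few commonly recommended settings; it is a starting point, not an exhaustive hardening guide:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app               # hypothetical pod name
spec:
  securityContext:
    runAsNonRoot: true             # refuse to start containers running as root
    seccompProfile:
      type: RuntimeDefault         # apply the runtime's default seccomp filter
  containers:
    - name: app
      image: myorg/app:1.0.0       # placeholder image
      securityContext:
        allowPrivilegeEscalation: false  # block setuid-style privilege gains
        readOnlyRootFilesystem: true     # make the container filesystem immutable
        capabilities:
          drop: ["ALL"]                  # drop all Linux capabilities
```

None of these settings are applied by default, which is exactly why reviewing your workload configurations matters.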
GKE container security
Google Kubernetes Engine (GKE) provides many tools to secure workloads. It’s good to take a layered approach to GKE security by configuring security features for access controls, workloads, and other security aspects. GKE can run in Standard mode, where you manage the underlying infrastructure, or in Autopilot mode, where GKE provisions and manages the infrastructure for you. The Snyk Container Kubernetes integration allows customers to secure workloads on GKE in either mode, uncover vulnerabilities in both container images and application code, and scan Kubernetes configurations for issues.
AKS container security
Microsoft Azure Kubernetes Service (AKS), like GKE, comes with robust security features, such as integration with Azure Policy and consistently fast updates/patches. However, it requires a semi-manual process to upgrade cluster components to newer versions, and requires network policies to be enabled when creating the cluster. As with GKE, Snyk can scan your Kubernetes configurations and containers, and enable automatic monitoring as you deploy AKS resources.
EKS container security
Amazon Elastic Kubernetes Service (Amazon EKS) has a strong set of security features by default, and operates on the AWS shared responsibility model — which defines who is responsible for the different elements of container security. Usually, AWS is responsible for security "of" the cloud whereas you, the customer, are responsible for security "in" the cloud. As with other Kubernetes options mentioned above, Snyk integrates with Amazon EKS and Amazon Elastic Container Registry (Amazon ECR) easily, to scan your Kubernetes configurations and containers, and enable automated monitoring as you deploy to Amazon EKS.
At a high level, there are five key steps to creating a secure container image. Let’s consider each in more detail to see how this approach produces secure container images.
Containerization is a way to deliver cloud native applications faster, which is likely one of the reasons you’re creating containers in the first place. Containers have expanded the meaning of application code, but code remains the area that’s most directly controlled by developers. Open source dependencies can easily dwarf the amount of proprietary code, so it’s important to implement integrated scanning with software composition analysis (SCA) and static application security testing (SAST) tools to automate the process of analyzing code and dependencies. It’s also possible to scan containers to catch issues directly in Git commits and repositories, which likely better fits the development process.
While size matters for portability and fast downloads, it also reduces the number of moving parts that can potentially harbor vulnerabilities. Ideally, each container image would have your code and the minimum number of additional packages required to enable an application to run. In practical terms, however, you’re going to have a large number of applications and need to find common ground to make container images manageable.
When selecting a base image, there are many trustworthy vendors that host container base images. Docker Hub is by far the most popular, with more than 3.8 million available images, more than 7 million repositories, and about 11 billion pulls per month. Some of these are Docker Official Images, a curated set of open source and “drop-in” solution repositories published by Docker. Docker also offers high-quality images that are directly maintained by Verified Publishers. Docker’s guidelines for these Verified Publishers are a great starting point for defining your own container image best practices.
It’s easy to go to Docker Hub and find publicly available images that match your use case, but it’s important to pay attention to their provenance. Check whether images come from Docker’s Official Images program, or verify the source and contents with a tool like Notary, which checks digital signatures, so you have some level of quality assurance.
Base images require special considerations: you inherit whatever comes in the base image as you build up your own image on top of it. Even if you start with a slim image, chances are you’ll need to add tools and libraries, in addition to your code and the necessary installations to make things work. All of these need to be monitored for vulnerabilities.
The good news is that you can directly control these middle layers. But, it’s important to prioritize where you focus your attention during development, testing, and deployment to production. You might need different tools at each stage, but as images head to production, you should remove everything that isn’t absolutely necessary.
Starting with a minimal base and only adding the necessary tools makes it easy to remove these tools later by simply taking them out of the Dockerfile and rebuilding, or by using multi-stage builds to capture all these stages in a single, automated build process. You may also discover vulnerabilities in tooling and support packages that are installed at the middle layer, but can be safely ignored if production images won’t include all those extras. Check out our blog post for some more best practices for multi-stage builds.
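As a sketch of the multi-stage approach, the Dockerfile below assumes a Go application (the image tags and paths are illustrative); the same pattern applies to any toolchain where build tooling is heavier than the runtime artifact:

```dockerfile
# Build stage: compilers and build tools live here and never ship to production
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /app ./...

# Final stage: a minimal base containing only the compiled binary
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Because the final image copies only the binary out of the build stage, vulnerabilities in the compiler image never reach production, and there is nothing extra to remove later.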
In the context of containers, access means the ability for a given user to execute a specific operation over a given container resource. Typical activities fall under the general umbrella of create, read, update, or delete (CRUD) operations. The specifics of access management depend on the container platform. For example, in Kubernetes, users live outside the cluster, which means administrators need to manage identities outside the cluster using TLS certificates, OAuth2, or other methods of authentication.
Secrets and network access should operate on the principle of least privilege. Administrator access should be limited to building infrastructure. To prevent a container from having complete access to all your resources, you should assign specific roles and responsibilities to containers, then use tools to facilitate, enforce, and monitor these roles.
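In Kubernetes, for example, least privilege can be expressed with RBAC. The sketch below (namespace and service account names are hypothetical) grants a service account read-only access to pods in a single namespace and nothing more:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team              # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # read-only; no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app-team
subjects:
  - kind: ServiceAccount
    name: app-sa                   # hypothetical service account
    namespace: app-team
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Scoping the Role to one namespace and a short list of verbs is the RBAC equivalent of the least-privilege principle described above.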
In addition to securing the container image or running container, it’s also necessary to secure the infrastructure stack used to run containers. This stack spans everything from a container registry, like Docker Hub, through production orchestration with Kubernetes.
Because container registries are designed to foster collaboration by creating a secure place to store and share containers, they have the potential to introduce vulnerabilities, malware, and exposed secrets. They often come with built-in security features, and a security protocol such as TLS should always be used when connecting with a registry. Likewise, Kubernetes includes tools for creating and enforcing security controls at both the cluster and network level. Check out our article on container registry security for more information.
Containers should always run on a secure system or cloud service. In the case of a service, role-based access controls (RBAC) should be used for accessing the registry.
One other point: attackers are focusing their attention earlier in the CI/CD pipeline, so don’t overlook securing those early development stages. As you build applications and containers, it’s important to scan code before its use in an application and before deployment. Furthermore, it’s key to use the principle of least privilege (POLP) and automate security checks and controls within the development pipeline.
With millions of container vulnerabilities in the wild, finding, prioritizing, and remediating vulnerabilities can be overwhelming to developers. Snyk Container cuts through the noise of typical vulnerability reports by detecting and fixing application and container vulnerabilities together, even if you don’t have access to the original source code running in your containers.
Snyk Container continuously scans for new vulnerabilities, prioritizes fixes based on context and exploitability, uncovers issues in open source dependencies, and matches vulnerabilities to Dockerfile commands to make it easier for developers to introduce fixes. Snyk Container was used to conduct over 130 million container tests in 2021, and 56 million vulnerabilities were fixed by Snyk Container users.
Working in tandem with Snyk Infrastructure as Code to secure configuration for containers, Snyk Container integrates with many Kubernetes platforms including AKS and GKE, container registries such as Docker Hub, GCR, and Quay, and container base operating systems including Amazon Linux and Ubuntu, and many more. Integrated vulnerability scanning helps developers identify and use suitable minimal base images and automates the update process to quickly eliminate vulnerabilities.
Snyk Container, like the rest of the Snyk platform, is built with a developer-first approach and supports the DevSecOps culture. It integrates into the IDE, scans pull requests before merging, gives guidance for fixes, and applies automated tests to CI/CD pipelines. Once containers are running, it continuously monitors deployments for exposure to existing or newly disclosed vulnerabilities. Alerts are then sent via Slack, Jira, email, or other methods, to help DevSecOps quickly identify and remediate vulnerabilities.
Snyk Container’s early image feedback can help cut out 70% or more of vulnerabilities — but potentially 30% of runtime vulnerabilities remain unaccounted for. These can add up to hundreds of vulnerabilities across thousands of containers and numerous clusters. Finding and fixing these vulnerabilities can be daunting. It’s difficult to tell which packages are used in a running container, and which vulnerabilities affect the packages actually executed at runtime. The security and operations teams responsible for managing live environments have to uncover vulnerabilities before involving development teams to fix them. And since developers may lack systems expertise, legacy vulnerability tools can take months to find and fix issues.
Snyk partnered with Sysdig to help resolve issues in runtime environments by giving more context on what is affecting the runtime environment. This partnership creates a security solution that spans the entire DevOps process. The combination of Snyk and Sysdig platforms secures everything from code in the developer environment to the infrastructure running the cluster.
Check out this announcement post to learn more about how the Snyk-Sysdig partnership extends container security to the runtime environment.
Container security is a broad topic, and even limiting the scope to base image security presents numerous challenges to consider. When it comes to securing your images, here are a few key points to keep in mind:
Start with base images from a trusted provider, and use digital signatures to verify authenticity.
Opt for minimal base images with only the basic operating system packages and your framework of choice, then build up from there.
Check images for vulnerabilities early and often.
Scan throughout the software lifecycle: the desktop, in CI, stored images in registries, and the containers running in your clusters.
Choose scanning tools that go beyond basic spreadsheet-style vulnerability reporting to provide mitigation advice, recommend base images, give developers the information they need to fix issues, and offer the flexibility to set your own security gates.
If you’re interested in securing your container images across their lifecycle, Snyk Container automates container security in a developer-first manner, providing the right balance of security and productivity so you can build more secure images and running containers.
Container scanning: The process of finding vulnerabilities in containers by scanning packages and dependencies in a container image
Container monitoring: The collection of metrics and tracking of the health of containerized applications and architectures
Kubernetes: An open source system originally developed at Google for orchestrating containerized applications across a cluster.
K8s: An abbreviation for Kubernetes.
Docker: The most popular container platform in the world. Docker democratized the tooling and processes around container creation and execution, empowering developers to use these technologies easily.
Dockerfile: A text file containing the configurations needed to build a Docker image.
Google Kubernetes Engine (GKE): GKE is Google's managed service offering for running Kubernetes workloads on Google Cloud.
AKS: Microsoft Azure's managed Kubernetes service. It began as the more generic Azure Container Service and evolved into AKS when Kubernetes became the dominant container orchestration platform.
AWS EKS: Amazon Web Service’s managed Kubernetes service for running Kubernetes workloads on AWS. AWS EKS can be used on its own, or with other services like AWS Fargate. AWS Fargate essentially hides all Kubernetes infrastructure, allowing users to focus only on their own Kubernetes pods.
Container image: A static, immutable file that packages the code, runtime, libraries, and settings needed to run a container
Container Registry: A container image repository and management tool that makes it easy to store and share container images.
Container runtime: Software that creates, runs, and manages containers on a host operating system.
Shift left: A culture and set of tools that incorporates security into developer workflows
DevSecOps: Short for development, security, and operations, DevSecOps is an approach that automates security in the software delivery lifecycle.
Are containers secure?
They can be, but they rarely come that way by default. The processes and tools that were once used on traditional infrastructure might not be adequate to provide strong container security. Containers have changed the landscape of distributed systems, and new methods must be employed to secure them. There is a broad spectrum of container security solutions that can, and should, be employed to provide the best possible security for containerized workloads.
How do I fix security vulnerabilities in containers?
Fixing security vulnerabilities in containers is a four-step process. First, take care of the vulnerabilities in your code and dependencies. Second, choose a minimal base image for what you need — start slim and build up. Next, evaluate the extra tools and packages you add; as containers progress closer to production, the number of extras should approach zero. Finally, ensure the container is configured to run with as few privileges as possible.
How to secure a Docker container image?
To secure a Docker container image, steps should be taken to ensure it doesn’t require any security anti-patterns to run correctly, such as running as root. Docker’s documentation provides a great starting point, specifically addressing trust. Trust controls can be implemented even on private registries. Ensuring that base images maintain a minimum profile of packages and dependencies, and utilizing scanning tools to monitor for vulnerabilities, will further help secure Docker images.
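As a sketch of avoiding the run-as-root anti-pattern, the Dockerfile below creates an unprivileged user and switches to it (the base image, user name, and entry point are illustrative, assuming a Node.js application):

```dockerfile
FROM node:20-alpine          # a slim base image; swap in your stack of choice
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # install only production dependencies
COPY . .
# Create an unprivileged user and hand ownership of the app directory to it
RUN addgroup -S app && adduser -S app -G app && chown -R app:app /app
USER app                     # the runtime process now runs without root privileges
CMD ["node", "server.js"]
```

With `USER app` in place, a compromise of the application process no longer grants root inside the container, which substantially limits the blast radius of an exploit.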
What is container scanning?
Container scanning is the use of tools and processes to scan containers for potential security compromises. It’s a fundamental step towards securing containerized packages. Scanning tools can encompass code, transitive dependencies, container configuration, and container runtime configuration, among others.