5 Best Practices for Container Security
July 19, 2022
Container security is crucial because the container image includes all the components that will eventually run your application. If vulnerabilities are hidden within the image, the risk and potential impact of security issues in production increase. It's also essential to monitor your production environment: even if you build images without vulnerabilities or elevated privileges, keeping an eye on runtime activity remains necessary.
At a high level, there are five key steps to creating a secure container image:
1. Secure your code and its dependencies
Containerization is a way to deliver cloud native applications faster, which is likely one of the reasons you’re creating containers in the first place. Containers have expanded the meaning of application code, but code remains the area that’s most directly controlled by developers. Open source dependencies can easily dwarf the amount of proprietary code, so it’s important to implement integrated scanning with software composition analysis (SCA) and static application security testing (SAST) tools to automate the process of analyzing code and dependencies. It’s also possible to scan containers to catch issues directly in Git commits and repositories, which likely better fits the development process.
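As a sketch of what that automation can look like, the snippet below writes a hypothetical CI step that runs an open source scanner against the working tree and fails the build on high-severity findings. Trivy is used here purely as an example; the file name and flags are assumptions to adapt to your own SCA/SAST tooling.

```shell
# Sketch of a CI scan step (illustrative): fail the pipeline when the
# repository's code and dependencies contain high-severity vulnerabilities.
# Trivy is an example scanner; verify flags against your tool's docs.
cat > ci-scan.sh <<'EOF'
#!/bin/sh
set -e
# Scan the repository filesystem for known vulnerabilities and return a
# non-zero exit code on HIGH/CRITICAL findings, so the CI job fails early.
trivy fs --severity HIGH,CRITICAL --exit-code 1 .
EOF
chmod +x ci-scan.sh
```

Wiring a step like this into each Git push means issues are caught in commits and pull requests, before an image is ever built.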
2. Start with a minimal base image from a trusted source
While size matters for portability and fast downloads, it also reduces the number of moving parts that can potentially harbor vulnerabilities. Ideally, each container image would have your code and the minimum number of additional packages required to enable an application to run. In practical terms, however, you’re going to have a large number of applications and need to find common ground to make container images manageable.
For selecting a base image, there are many trustworthy vendors that host container base images. Docker Hub is by far the most popular, with more than 3.8 million available images, more than 7 million repositories, and about 11 billion pulls per month. Some of these are Docker Official Images, a curated set of open source and “drop-in” solution repositories published by Docker. Docker also offers high-quality images that are directly maintained by Verified Publishers. Docker’s guidelines for these Verified Publishers are a great starting point for defining your own container image best practices.
It’s easy to go to Docker Hub and find publicly available images that match your use case, but it’s important to pay attention to their provenance: whether they come from Docker’s Official Images program, or whether you can verify the source and contents with a tool like Notary that checks digital signatures, so you have some level of quality assurance.
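One concrete way to lock down provenance is to pin the base image to a specific digest in your Dockerfile, so every build uses exactly the bytes you vetted. The sketch below writes such a Dockerfile; the digest shown is a placeholder, not a real one, and the application paths are assumptions.

```shell
# Pin the base image by digest so builds are reproducible and verifiable.
# The digest below is a PLACEHOLDER -- after pulling and vetting the image,
# obtain the real digest with:  docker images --digests alpine
cat > Dockerfile <<'EOF'
# Placeholder digest: replace with the digest you verified.
FROM alpine:3.16@sha256:0000000000000000000000000000000000000000000000000000000000000000
COPY ./app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
EOF
# For images signed with Docker Content Trust, signatures can be checked with:
#   docker trust inspect --pretty alpine:3.16
```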
3. Manage all the layers in between the base image and your code
Base images require special considerations: you inherit whatever comes in the base image as you build up your own image on top of it. Even if you start with a slim image, chances are you’ll need to add tools and libraries, in addition to your code and the necessary installations to make things work. All of these need to be monitored for vulnerabilities.
The good news is that you can directly control these middle layers. But, it’s important to prioritize where you focus your attention during development, testing, and deployment to production. You might need different tools at each stage, but as images head to production, you should remove everything that isn’t absolutely necessary.
Starting with a minimal base and only adding the necessary tools makes it easy to remove these tools later by simply taking them out of the Dockerfile and rebuilding, or by using multi-stage builds to capture all these stages in a single, automated build process. You may also discover vulnerabilities in tooling and support packages that are installed at the middle layer, but can be safely ignored if production images won’t include all those extras. Check out our blog post for some more best practices for multi-stage builds.
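As a minimal sketch of that multi-stage pattern (the Go toolchain and file names here are illustrative assumptions), the first stage compiles with a full toolchain while the final stage copies only the compiled binary onto a slim base, so build tools never reach production:

```shell
# Multi-stage build sketch: build tools stay in the first stage and are
# discarded; only the compiled artifact ships in the final image.
cat > Dockerfile <<'EOF'
# Build stage: full toolchain, never shipped to production.
FROM golang:1.18 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: only the compiled binary on a minimal base.
FROM alpine:3.16
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
EOF
```

Vulnerabilities reported against the `golang` toolchain image in the build stage can often be deprioritized, because none of that tooling ends up in the final image.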
4. Use access management
In the context of containers, access means the ability for a given user to execute a specific operation over a given container resource. Typical activities fall under the general umbrella of create, read, update, or delete (CRUD) operations. The specifics of access management depend on the container platform. For example, in Kubernetes, users live outside the cluster, which means administrators need to manage identities outside the cluster using TLS certificates, OAuth2, or other methods of authentication.
Secrets and network access should operate on the principle of least privilege. Administrator access should be limited to building infrastructure. To prevent a container from having complete access to all your resources, you should assign specific roles and responsibilities to containers, then use tools to facilitate, enforce, and monitor these roles.
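As an illustrative sketch of least privilege in Kubernetes (the namespace, role, and service account names are assumptions), a Role and RoleBinding can grant a workload read-only access to pods in a single namespace instead of cluster-wide rights:

```shell
# Least-privilege RBAC sketch: a Role that can only read pods in one
# namespace, bound to a single service account. All names are illustrative.
cat > rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app-team
  name: read-pods
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: app-team
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
# Apply with: kubectl apply -f rbac.yaml
```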
5. Secure your container infrastructure
In addition to securing the container image or running container, it’s also necessary to manage the infrastructure stack needed to run containers. This stack extends from the container registry, such as Docker Hub, through to production orchestration with Kubernetes.
Because container registries are designed to foster collaboration by creating a secure place to store and share containers, they have the potential to introduce vulnerabilities, malware, and exposed secrets. They often come with built-in security features, and a security protocol such as TLS should always be used when connecting with a registry. Likewise, Kubernetes includes tools for creating and enforcing security controls at both the cluster and network level. Check out our article on container registry security for more information.
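As one example of those network-level controls (the namespace name is illustrative), a default-deny NetworkPolicy blocks all ingress to pods in a namespace until traffic is explicitly allowed by other policies:

```shell
# Default-deny ingress sketch: the empty podSelector matches every pod in
# the namespace, and with no ingress rules listed, no inbound traffic is
# allowed until a more specific NetworkPolicy permits it.
cat > deny-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app-team
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
# Apply with: kubectl apply -f deny-ingress.yaml
```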
Containers should always run on a secure system or cloud service. In the case of a service, role-based access control (RBAC) should be used to govern access to the registry.
One other point: attackers are focusing their attention earlier in the CI/CD pipeline, so don’t overlook securing those early development stages. As you build applications and containers, it’s important to scan code before its use in an application and before deployment. Furthermore, it’s key to use the principle of least privilege (POLP) and automate security checks and controls within the development pipeline.