Container security throughout the SDLC

October 16, 2019

Containers are increasingly becoming the standard unit of software. The container image, technically defined in the OCI image specification, is a key component of modern tooling, from Docker to Kubernetes to platforms like AWS Fargate and Google Cloud Run. What does this mean for application security?

Where we use container images

One of the interesting things about container images is that they span the software development life cycle (SDLC).

  • Images are built locally by developers, and provide a useful way of easily (sometimes too easily) distributing software.

  • Images are also built as part of continuous integration and continuous delivery pipelines. Metadata about the application can be attached to the image for better asset management.

  • Images are uploaded to registries, both public and private, from where they can be shared for deployment.

  • And finally, images are deployed to clusters, from dev/test to production, increasingly managed by Kubernetes or similar tools.
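The lifecycle above can be sketched with a few commands; the image name, registry host, and deployment details here are placeholders, not a prescribed setup:

```shell
# Build locally or in CI -- myapp and registry.example.com are hypothetical
docker build -t registry.example.com/team/myapp:1.0.0 .

# Upload to a registry, public or private, for sharing
docker push registry.example.com/team/myapp:1.0.0

# Deploy to a cluster, here managed by Kubernetes
kubectl create deployment myapp --image=registry.example.com/team/myapp:1.0.0
```

Each of these commands marks a point where a security test could be inserted.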

Each of these stages offers an opportunity to test the image for security vulnerabilities. But which stage is the best place to do so?

Where to test our images?

When it comes to securing our container images, it’s tempting to think we need to test them at only one point along the SDLC. For instance, if you only have secure images in your registry, then you’re secure, right? The reality is more interesting and, as a general rule, involves trade-offs.

The closer to production you test, the more confidence you have that you understand the risks in your running applications. Unless you’re a software vendor building tools for use by others, you’re probably mainly interested in securing the applications you are running in production right now. Testing late in the cycle is also useful when a new vulnerability is disclosed in a dependency you use. That way, you can quickly assess which production applications are affected and act with the appropriate urgency.
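As a concrete example of that late-stage assessment, the set of images actually running in a cluster can be inventoried directly. This is a standard kubectl recipe (assuming a Kubernetes cluster); when a new CVE drops, this deduplicated list is what you would check against:

```shell
# List every container image running across all namespaces,
# then deduplicate and count -- the images you'd triage first
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr ' ' '\n' | sort | uniq -c | sort -rn
```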

However, testing only at the end of the SDLC likely means slow feedback cycles for developers. The developer who chose to use or modify an image built on top of it and has since moved on to other work, making it disruptive and expensive to make the changes required to close the security hole. Remember, the aim isn’t just to know what vulnerabilities you might have (which is important), but to fix them. Testing locally offers the fastest feedback cycle, but depends on every developer remembering to scan each time, which is hardly comprehensive or realistic.

Before you conclude that the pipeline is, therefore, the right place to test, it’s also worth considering the implementation cost. You may have one or two centralized container registries, allowing you to assess all the images you build, but you probably have far more continuous integration pipelines. Depending on your level of automation, and who owns the various tools in your organization, you may find it more expedient to start in one place or the other.
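To make the pipeline option concrete, here is one possible shape for a CI gate, sketched as a GitHub Actions workflow. The scanner invocation (`snyk container test`, which exits non-zero on findings) and the `SNYK_TOKEN` secret name are assumptions; any image scanner that fails the job on findings would slot in the same way:

```yaml
# .github/workflows/container-scan.yml -- illustrative sketch only
name: build-and-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan image (a non-zero exit fails the build)
        run: |
          npm install -g snyk
          snyk container test myapp:${{ github.sha }}
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

Note that this must be replicated (or templated) across every pipeline, which is exactly the implementation cost discussed above.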

It’s notable as well that we have different levels of context at different stages of the SDLC. Testing locally, or in CI, likely means we have access to both the source code and the version control information. That might make detecting certain types of issues easier; for instance, those resulting from the compiler used, or unsigned commits from a malicious actor. It also helps with understanding how a library was introduced. However, testing in production means we know exactly which images are in use and where, as well as how they are configured, which might help us prioritize the issues we have discovered.


The reality is that testing in only one place may address the needs of one function (be it developers, operators or security), but probably won’t completely address the overall business goal of finding and fixing vulnerabilities as quickly as possible. As a useful summary:

Testing for vulnerabilities at different stages of the SDLC:

  • Locally, during development: great for debugging and building up knowledge among developers, but requires individual developer action, with no way to enforce.

  • In the CI/CD pipeline: great as a gate, with fast feedback for developers, but requires per-pipeline implementation, which will depend on how standardized pipeline management is in your organization. Breaking the build on low-severity issues can be counterproductive, so other feedback cycles are needed as well.

  • In the registry: often a single owner, so easy to integrate, and covers all first-party images, no matter how they were built. Potentially noisy, because some images may be unused.

  • In production: an accurate picture of what you’re running, including third-party content, but potentially slow feedback cycles to development teams, and the risk that vulnerabilities can be exploited in running applications.

Which option you start with depends on the specifics of your organization. But the ideal to aim for is to have thorough testing of container-based applications throughout the SDLC. Relying on a single gate is too simplistic and is likely to lead to friction, either between developers, operators and security teams, or in how quickly you deploy applications.

Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.
