A container is a lightweight, isolated executable unit holding an application or its components. Developers can configure a container to run in any environment and on any machine that has a container runtime engine.
Containers allow us to make application code portable. We can package the code, operating system (OS) libraries, environment variables, dependencies, and configuration into a single unit. Then, we can deploy and run the code in different cloud or on-premises environments. Popular containerization tools include Docker Engine, containerd, and CRI-O.
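For instance, a minimal Dockerfile sketch shows how code, dependencies, and configuration get packaged into one portable unit (the application name and files here are hypothetical placeholders):

```dockerfile
# Hypothetical example: package a small Python web app into a container image.
FROM python:3.12-slim

# Install the app's dependencies into the image.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY app.py .

# Configuration can be supplied as environment variables...
ENV APP_ENV=production

# ...and the same image now runs on any host with a container runtime.
CMD ["python", "app.py"]
```

Once built, this image behaves identically whether it runs on a developer laptop, an on-premises server, or a cloud VM.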
What is Container Orchestration?
Container orchestration refers to automating container deployment, operations, and lifecycle management. This approach automates how we provision, deploy, scale, monitor, replace, and manage storage for our running containers. Kubernetes, Docker Swarm, Nomad, and Mesos are popular container orchestrators.
It’s challenging to manage containers at scale, especially when there are many containerized applications. Monitoring and managing applications using bespoke solutions is often a time-consuming and error-prone endeavor. Using a well-tested container orchestration platform mitigates both of these concerns through standardization and automation.
In this post, we’ll review why orchestration is necessary, explore some container orchestrators in detail, and discuss the benefits and security risks that come with container orchestration.
Before orchestrators became mature, containerized applications were often deployed using custom scripts and other deployment tools that were usually specialized for specific platforms and/or application packages. The standardization that containers provided enabled general-purpose orchestrators to exist by putting all applications behind a common set of APIs and packaging specifications.
Before orchestrators, managing which containers were running and where to run them was a difficult task. This was especially true during the move to microservice architectures where dozens, if not hundreds, of containers need to be running at any given moment and may be upgraded multiple times a day.
Orchestrators take on the heavy lifting of running these containers across a cluster of hosts, and they provide standardized constructs for scaling, upgrades, network provisioning, and many other concerns. One of the key benefits they provide is a declarative deployment model, with which developers and operators specify the desired application state. This model frees authors from worrying about how and where to run the process, because the orchestrator does all of the work of creating and destroying the containers — and other entities — to satisfy the requested state wherever it makes sense in the cluster at any given moment. Orchestrators also continue to monitor what they have deployed and can automatically attempt to restore the desired state if something like an application crash or hardware failure causes the deployment to drift out of spec.
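To make the declarative model concrete, here is a minimal Kubernetes Deployment sketch (the image name is a hypothetical placeholder). Rather than scripting how to start each container, you declare how many replicas should exist, and the orchestrator converges the cluster toward that state:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired state: three copies, always
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.4.2   # hypothetical image
          ports:
            - containerPort: 8080
```

If a node fails and takes one replica with it, the orchestrator notices the drift from the declared state and schedules a replacement container elsewhere in the cluster.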
Kubernetes: the dominant container orchestrator
Google introduced the open source Kubernetes container orchestration platform in 2014, basing it largely on work done on its internal container workload manager, “Borg.” With the Kubernetes 1.0 release in 2015, Google and other founding members contributed the project to the newly formed Cloud Native Computing Foundation (CNCF) under the Linux Foundation, and by 2017 community support had made Kubernetes the most popular choice. The CNCF is also home to containerd, Helm, Linkerd, Prometheus, gRPC, and other successful open source projects.
Kubernetes is platform-agnostic, supporting any operating system that has a compatible container runtime engine. Most major cloud providers — including Azure, AWS, and Google Cloud — now offer Kubernetes-as-a-service solutions. In 2018, CNCF reported that 40 percent of global container management survey respondents used Kubernetes. By 2021, this percentage had increased to more than 80 percent.
Companies like Pinterest affirm that Kubernetes improves efficiency, creates faster project deployments, and saves time and money by managing hundreds of containers. According to a CNCF case study on Spotify, container CPU use improved two to threefold after adopting Kubernetes.
With many functions and an active open source community, Kubernetes handles extremely complex use cases and architectures. However, it might be more than you need if you’re working with a few small, simple applications and infrequent deployments.
Other popular container orchestrators
Docker Swarm: Swarm is an open source orchestrator built into the Docker runtime engine. Like other clustering platforms, it consists of one or more manager nodes and worker nodes, and it uses the Raft consensus algorithm to share state among the managers. Swarm doesn't have as much functionality as Kubernetes, but it is easier to set up and install because it is built into the docker binary. It also integrates well with the Docker CLI and Docker Compose.
Apache Mesos: Mesos is a cluster manager that originated at UC Berkeley and is now an Apache Software Foundation project. Airbnb, Apple, Twitter, and other large companies have used it to run containerized workloads. Mesos is lightweight and allocates resources efficiently, offering scalability and flexibility. However, it requires a great deal of technical know-how to use.
Red Hat OpenShift: OpenShift is a platform-as-a-service (PaaS) built on Kubernetes. It doesn't run on all operating systems; it requires a Red Hat-family OS such as CentOS, Fedora, or Red Hat Enterprise Linux. OpenShift ships with integrated authentication and authorization features that many Kubernetes distributions leave to the operator, and its ease of use is also attractive to developers.
HashiCorp Nomad: Nomad is a lightweight orchestrator focused on workload scheduling and cluster management. It offers fewer built-in features than Kubernetes, but it can scale to larger clusters.
Rancher: Rancher is an open source platform from Rancher Labs that offers “Kubernetes-as-a-Service.” The platform provides a manageable, observable, and scalable solution for managing multiple Kubernetes clusters. It enables using a single deployment to provide consistency and security across a multi-cluster infrastructure, regardless of whether clusters run on-premises, at the edge, in private or public cloud environments, or in any combination thereof.
The benefits of container orchestration
Automated deployment and management: As the number of containers increases, manual management becomes increasingly difficult and error prone. Orchestration platforms standardize and automate these tasks, and because they function based on well-documented APIs and processes, operators can more easily monitor and troubleshoot these processes.
Controlled state: Container orchestrators like Docker Swarm and Kubernetes enable us to pre-define a desired state for the containers. The orchestrator’s job is to create and maintain the declared components in that desired state. This declarative model is much easier for developers to write than the imperative instructions that would exist if you were to write scripting for your deployments.
Installation and provisioning: Container orchestrators can handle the setup of new containers and application instances when traffic increases, eliminating the need to manually install and provision each new instance as application use grows. Scaling is typically exposed through a standard API, and features like the Kubernetes Horizontal Pod Autoscaler can adjust the number of running instances automatically.
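As an illustration, a Horizontal Pod Autoscaler manifest can declare a scaling policy for a Deployment (the Deployment name `web` and the thresholds here are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:              # the workload to scale (assumed to exist)
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

With this in place, the orchestrator provisions new instances during traffic spikes and scales back down afterward, with no manual intervention.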
Standardized release and rollback: Many container orchestration deployments standardize the release and rollback processes, including testing of various application versions, canary deployments, and blue/green releases.
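In Kubernetes, for example, a rolling release can be tuned with a short strategy fragment on a Deployment spec (the values below are illustrative, not a recommendation):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
```

If a release misbehaves, a command like `kubectl rollout undo deployment/web` reverts the Deployment to its previous revision, making rollback a standardized operation rather than a bespoke script.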
Container orchestration security risks

Access control and authentication: Most orchestration platforms do not enforce stringent access control and authentication policies out of the box, which leaves deployed containers vulnerable to authentication and authorization threats. However, we can take steps to prevent such incidents, such as restricting access to the Kubernetes API.
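One common mitigation in Kubernetes is role-based access control (RBAC). The sketch below grants a single user read-only access to Pods in one namespace; the user name and namespace are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
  - apiGroups: [""]              # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
  - kind: User
    name: jane                   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting narrowly scoped roles like this, rather than cluster-wide admin access, limits what a compromised credential can do.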
Network and communication control: By default, all containers within a cluster can communicate with each other. While this makes developing distributed applications easier, it also allows malicious containers to attack other containers. Implementing the right network policies helps reduce this risk.
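A Kubernetes NetworkPolicy is one way to express such a restriction. This sketch (with hypothetical labels and namespace) allows only Pods labeled `app: frontend` to reach the API Pods, blocking all other ingress traffic to them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: production
spec:
  podSelector:               # the Pods this policy protects
    matchLabels:
      app: api
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:       # only frontend Pods may connect
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicy objects are enforced by the cluster's network plugin, so they only take effect when a plugin that supports them is installed.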
Additionally, tools like Snyk Infrastructure as Code (Snyk IaC) help developers scan and fix configurations before reaching production, helping keep the infrastructure secure.
Container orchestrators have become an industry standard. They help automate container deployment, installation, provisioning, and management.
Although container orchestration poses some security risks, the proper safeguards allow organizations to reap the benefits of containerization: saving time, ensuring consistency, and scaling automatically.
Keep learning about containers with these resources:
Everything you need to know about Container Runtime Security
In this article you will find everything you need to know about container runtime security, including how to keep your container images secure.