Kubernetes Security: Common Issues and Best Practices
Kubernetes security doesn’t disappoint. Offering a host of security controls, Kubernetes (K8s) can help make your clusters, workloads, and containers safer. By following Kubernetes security best practices and knowing how to address Kubernetes security issues, you’ll be able to take full advantage of the many benefits K8s offers while still keeping your environment secure.
Kubernetes Security Issues
While a number of Kubernetes security issues exist, the three most important to consider are:
Requires self-configuration: When deploying Kubernetes yourself from open source, none of the security controls are configured. Figuring out how they work and how to configure them is entirely the operator’s responsibility.
Deploying workloads securely requires expertise: Whether using a Kubernetes distribution with pre-configured security controls or building it yourself, developers and application teams that may not be familiar with all the ins and outs of Kubernetes may struggle to properly secure their workloads.
Lack of built-in security: While Kubernetes offers access controls and features to help create a secure cluster, it lacks built-in security to ensure the containers and code running on the cluster are safe.
While Kubernetes’ built-in security features do not cover all issues, there is no shortage of choices within the K8s security solutions ecosystem. Some areas to consider include:
Workload security: The majority of Kubernetes workloads are containers running on Docker engines. In some cases you might be using other container runtimes (e.g., CRI-O or containerd) in parallel, but no matter which engine is running in the back end, you’d still be running containers. The code and other packages in those containers must be free from vulnerabilities.
Workload configuration: Whether using Kubernetes YAML, Helm Charts, or templating tools, the configuration for deploying your applications in Kubernetes is typically done in code. This code affects the Kubernetes security controls that determine how a workload runs and what can or cannot happen in the event of a breach. For example, limiting each workload’s CPU, memory, and networking to the maximum expected use will help to contain any breaches to the affected workload and ensure other services would not be compromised.
Cluster configuration: There are a number of Kubernetes security assessment tools available for your running clusters. Among other features, these tools check for adherence to Kubernetes security best practices and CIS and other relevant benchmarks.
Kubernetes networking: Securing the network plays a major role when it comes to Kubernetes. Pod communications, ingress, egress, service discovery, and—if needed—service meshes (e.g., Istio) should all be taken into account. Once a cluster has been breached, every service and machine in the network is at risk. It is therefore important to ensure your services, and the communication between them, are isolated to only what is needed. Combined with the use of cryptography to keep your machines and services private, this can help contain the threat and prevent a major network-wide breach.
Infrastructure security: As a distributed application running across many servers (using physical or virtual networking and storage), securing your Kubernetes infrastructure—particularly the master nodes, databases, and certificates—is crucial. If a malicious actor successfully breaches your infrastructure, they could gain access to everything needed to reach your cluster and applications as well.
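As a sketch of the workload-configuration point above, resource requests and limits can be set directly in a Deployment so each container is capped at its maximum expected use. All names and the image here are illustrative, not from any real cluster:

```yaml
# Illustrative Deployment snippet: capping CPU and memory per container
# helps contain a breach to the affected workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api            # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: example.com/api:1.4.2   # placeholder image
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "500m"      # cap at half a core
              memory: "512Mi"  # container is OOM-killed rather than starving neighbors
```

A container that exceeds its memory limit is terminated, while one that exceeds its CPU limit is throttled, so a compromised or misbehaving workload cannot exhaust the node for every other service.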
Cloud-Native Security: Kubernetes Container Security
In Kubernetes, the pod serves as the fundamental work unit. In most cases, a Kubernetes pod is just a container, though it could also be multiple containers. While Kubernetes security is able to control how the pod operates, it does not inspect the containers to ensure they are safe and approved to run. Rather, this task—including adding tooling for this purpose—falls solely on the user.
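For illustration, here is a minimal multi-container Pod (names and images are hypothetical). Kubernetes schedules and runs the Pod as a unit, but it does not vet what is inside either container image:

```yaml
# Minimal Pod with a main container and a sidecar sharing the
# Pod's network namespace and lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar       # hypothetical Pod name
spec:
  containers:
    - name: web                # main application container
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-shipper        # sidecar container in the same Pod
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]   # stand-in for a real log agent
```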
Container Security Best Practices
Following these key container security best practices will help ensure the security of your clusters and workloads:
1. Secure Your Container-Based Images
Your workload inherits everything that comes with the base image you choose to build upon. You should therefore select minimal base images, adding only what is needed.
2. Secure Your Code and Dependencies
Secure your code and dependencies through continuous scanning. Snyk Open Source performs a full dependency analysis for code to uncover open-source dependencies and any vulnerabilities they contain and helps developers fix them automatically. Ideally, the tool should be aware of your base image and alert you when there is an updated version or an alternative that can reduce vulnerabilities.
3. Secure Your Workload Configuration
The Docker Engine generally comes with sensible defaults. Moreover, if you’re using a Kubernetes platform distribution (e.g., OpenShift, VMware Tanzu/PKS, AKS, EKS, or GKE), the container runtime will already be locked down. But Kubernetes security for the workload configuration is the responsibility of the user.
At a minimum, you should have policies for workload security and resource controls that are agreed upon by developers, operators, and security teams. Ideally, since the workload configuration is code-based, it should be tested in your continuous integration pipelines, just like any other code.
With so many Kubernetes security considerations, it can be difficult to know how to get started and stay secure.
These four tips will help to ensure Kubernetes security:
1. People and Process Are Critical
While the technical aspect of security is critical, your people and processes are just as important. Running containers and Kubernetes impacts the entire IT and development chain—developers, security, infrastructure, and operations teams.
For this reason, it’s best to start small and build your knowledge base and core experts across disciplines. But don’t try to do it all on your own. Take advantage of the vast Kubernetes community, third-party tools, and Kubernetes service providers with expertise in rolling out K8s. These partners can also provide ongoing Kubernetes security assessments to ensure you’re keeping up with the latest best practices.
2. Use a Supported Kubernetes Distribution Service
Using a supported Kubernetes distribution from a vendor you trust is almost always preferable to attempting to set it up for your production environment on your own. There are over 90 certified conformant Kubernetes distributions, and they ensure built-in platform security for role-based access control and more.
But even the best distribution will leave gaps in network security, admission controllers, and pod security policies for workloads. While choosing the right distribution for your needs is critical for Kubernetes security, it does not eliminate the need to check for Kubernetes and container security vulnerabilities or misconfigurations.
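To illustrate the role-based access control such distributions ship with, here is a minimal Role/RoleBinding pair (namespace, role name, and user are all hypothetical) granting one user read-only access to Pods in a single namespace:

```yaml
# Grant read-only Pod access in the "staging" namespace (illustrative names).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
  - apiGroups: [""]            # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
  - kind: User
    name: jane                 # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Scoping a Role to one namespace and to read-only verbs follows least privilege: even if the user’s credentials leak, they cannot modify workloads or reach other namespaces.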
3. Kubernetes Security Tools for Monitoring Workloads
Kubernetes is an orchestrator and set of APIs that can be used to build and run diverse workloads, but it cannot serve as a standalone solution for most production environments. Rather, it relies on configurations and third-party tools to reach optimal security standards. Layering the tools can help complete this picture.
Consider using the following Kubernetes security tools to monitor running workloads:
Behavioral analysis and network monitoring tools: Every application follows a pattern. But changes (e.g., a new version, marketing campaign, your tool going viral, or a security breach) can cause it to deviate from this pattern. Understanding these anomalies and their origins is essential for quickly mitigating any security breaches. The downside is that experienced operators are needed to actively monitor your tools, so this can be difficult or expensive to implement. Even an advanced application that detects anomalies requires specialized personnel to be able to decipher these alerts and decide whether action is needed.
Logging and monitoring tools: These tools also fall under the behavioral analysis category. By nature, a microservice platform uses many different services that split your container logs. A request can also move between several services until its completion. Without specialized tools that capture and store all your logs in a centralized environment, it is therefore difficult to achieve a holistic view of single requests and detect if something is off. Logging and monitoring tools work differently in containers, and especially in Kubernetes, and as such often require tools and processes built for these environments.
Networking and storage tools: These are handled by plugins rather than built in. Your distribution will provide defaults, though in most cases you will be able to choose other options. For example, you may at some point discover the need for a service mesh.
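As an example of the network isolation discussed earlier, a NetworkPolicy can restrict which pods may reach a service. All labels, names, and the port below are illustrative, and enforcement depends on the cluster running a network plugin that supports NetworkPolicy:

```yaml
# Only pods labeled app=frontend may reach app=api pods, and only on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend-only   # hypothetical policy name
  namespace: production           # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: api                    # the pods this policy protects
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend       # allowed callers
      ports:
        - protocol: TCP
          port: 8080
```

Once any Ingress policy selects a pod, all other inbound traffic to it is denied by default, which limits how far an attacker can move after breaching a neighboring service.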
4. Find & Fix Vulnerabilities and Secure Running Workloads
Secure your running workloads to reduce your blast radius by eliminating security vulnerabilities in your application code, dependencies, and containers. Because many of these security issues relate back to code of some sort—applications, container build files, or workload configurations—a list of vulnerabilities and issues alone isn’t enough. Ensure the developer and DevOps teams responsible for fixing these issues know how to handle this.
Here are a few best practices for handling and avoiding such issues when using Kubernetes:
Don’t run containers with root and avoid running privileged pods: If somebody gets into the container, you want to limit access to the rest of the system.
Set limits on container resources: This can prevent denial-of-service (DoS) attacks if a container is breached.
Secure pod configuration YAML and use Kubernetes pod security policies: This may seem redundant: If your pod security policy says containers cannot run as root, why bother explicitly setting this in the workload configuration? There are two key reasons:
Kubernetes pod security policies are applied at runtime: when a developer deploys a workload and it gets all the way to production only to fail the pod security policy, this can be extremely frustrating. Enforcing the same settings in the workload configuration means the policy is applied anywhere the workload runs—even a single-node setup on the developer’s workstation.
You may grow to the point in your Kubernetes usage where you’re using multiple Kubernetes distributions, each with its own built-in settings. Hopefully you’ll be able to set pod security policies everywhere, but having the security built into the configuration of the workload provides a layer of insurance.
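The first two practices above can be expressed directly in the pod spec itself. A minimal sketch, with a placeholder name and image, and limits chosen purely for illustration:

```yaml
# Non-root enforcement and resource caps written into the workload
# configuration, so they travel with the workload to any cluster.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example          # hypothetical Pod name
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start if the image would run as root
    runAsUser: 10001              # arbitrary non-root UID
  containers:
    - name: app
      image: example.com/app:2.0  # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
      resources:
        limits:
          cpu: "500m"
          memory: "256Mi"         # caps blunt DoS impact if the container is breached
```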
Why Kubernetes Security Is a Big Deal
While every application and platform must be properly secured, there’s far more security buzz around Kubernetes than around other software platforms. Why is this the case? First, Kubernetes can be used both by small applications (even those running on a local development machine) and by ones with huge clusters containing up to 5,000 nodes—each requiring different security controls and policies. Second, unlike software platforms that ship with security pre-configured, Kubernetes leaves the configuration of its many security controls to the operator.