Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available across the CNCF landscape.
Kubernetes is a complex system, and getting started entails a significant learning curve. The following best practices can help you get more out of Kubernetes, and they also provide a framework for running a well-managed, stable cluster.
1. Use the Latest Version
Always run the latest stable version of Kubernetes in all production clusters. New releases include updates, additional features, and, most importantly, patches for bugs and security issues found in previous versions. Staying current helps keep production clusters free from known security vulnerabilities. Older versions also receive limited support: the Kubernetes project maintains only the most recent minor releases, and older versions get little attention from providers or the open-source community. It is therefore better to keep all clusters on a recent, supported version of Kubernetes.
2. Version Control All Manifest Files and Use GitOps
All manifest files related to deployments, ingress, services, and custom resource definitions (CRDs) should be stored in a version control system before being pushed to a cluster. Doing so allows the team to track who made each change and to implement a change-approval process, improving the cluster's stability and security.
GitOps is a set of practices for managing infrastructure and application configurations using Git, an open-source version control system. GitOps treats Git as the single source of truth for declarative infrastructure, applications, and microservices. The Git repository contains the entire desired state of the system, so the trail of changes to that state is visible and auditable. GitOps is built around the software engineering experience and helps teams manage infrastructure with the same tools and processes they already use for application code. Beyond Git itself, GitOps does not prescribe specific tools; teams can choose whatever tooling fits their workflow.
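As one illustration, a GitOps tool such as Argo CD can be pointed at a Git repository of manifests and continuously reconcile the cluster against it. The sketch below is a minimal example, and the application name, repository URL, and path are hypothetical:

```yaml
# Hypothetical Argo CD Application: continuously syncs manifests
# from a Git repository into the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                 # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/team/manifests.git  # hypothetical repo
    targetRevision: main
    path: apps/my-app          # directory of manifests in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated: {}              # keep cluster state in sync with Git
```

With this in place, a merged pull request in the manifests repository becomes the change-approval step, and the tool applies the change to the cluster automatically.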
3. Use Namespaces
By default, Kubernetes starts with a few initial namespaces: default, kube-public, and kube-system (newer versions also include kube-node-lease). Namespaces are very important for organizing a Kubernetes cluster and for isolating teams that share the same cluster. If a cluster is large, with many nodes and multiple teams working on it, each team should have its own namespace. For example, there could be separate namespaces for development, testing, and production. This way, a software engineer has access only to the development namespace and cannot make changes in the production namespace, even by mistake. Without this separation, there is a high chance of accidental changes by well-meaning team members.
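A minimal sketch of this separation, assuming hypothetical team and namespace names: create a namespace per environment and bind a team's group to a role only in its own namespace, so its members have no access elsewhere.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
---
# Grant the (hypothetical) dev-team group edit rights only in the
# "development" namespace; it gets no access to production.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: development
subjects:
- kind: Group
  name: dev-team             # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in ClusterRole shipped with Kubernetes
  apiGroup: rbac.authorization.k8s.io
```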
4. Use Labels
A Kubernetes cluster includes multiple elements like services, pods, containers, networks, ingress, and service meshes. Maintaining all these resources and keeping track of how they interact with each other in a cluster is cumbersome and error-prone. This is where labels can be used. Kubernetes labels are key-value pairs that organize the various cluster resources.
For example, suppose two instances of the same application or microservice run in a cluster. Both are similarly named, but each is used by a different team, say development and testing. Defining a label that carries each team's identifier or name demonstrates ownership and helps the teams tell the similar workloads apart.
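Labels like these are attached in a resource's metadata; the label keys and values below are illustrative, as are the workload name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-service        # hypothetical workload
  labels:
    app: payment-service
    team: development          # ownership label
    environment: dev
spec:
  containers:
  - name: payment-service
    image: example.com/payment-service:1.0   # hypothetical image
```

Resources can then be filtered by label selector, for example `kubectl get pods -l team=development`.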
5. Set Resource Requests and Limits
Occasionally, deploying an application or microservice to a production cluster fails because the cluster's resources are exhausted. This is a common challenge when working with Kubernetes, and it happens when resource requests and limits are not set. Without them, pods in a cluster can start consuming more resources than they need. If a pod consumes too much CPU or memory on a node, the scheduler may not be able to place new pods, and the node itself may fail.
- Resource requests specify the minimum amount of resources a container is guaranteed; the scheduler uses them to decide where the pod can run.
- Resource limits specify the maximum amount of resources a container can use.
For both requests and limits, it is typical to define CPU in millicores and memory in megabytes or mebibytes. Kubernetes rejects a pod whose containers request more resources than their limits allow.
For example, you could set a CPU limit of 800 millicores and a memory limit of 256 mebibytes, with the container requesting 400 millicores of CPU and 128 mebibytes of memory.
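Those numbers map onto a container spec like this (the pod name, container name, and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo          # hypothetical pod
spec:
  containers:
  - name: app
    image: example.com/app:1.0 # hypothetical image
    resources:
      requests:
        cpu: 400m              # 400 millicores reserved when scheduling
        memory: 128Mi
      limits:
        cpu: 800m              # CPU usage is throttled above 800 millicores
        memory: 256Mi          # exceeding 256Mi gets the container OOM-killed
```

Note the asymmetry in enforcement: CPU over the limit is throttled, while memory over the limit terminates the container.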
Hopefully, these best practices will help with the deployment, management, and stability of your Kubernetes cluster. Apply the practices outlined in this post and see the impact they have on the cohesion and functionality of your cluster.