This is the third part of my Kubernetes Concepts Crash Course. See part one here: https://www.taos.com/resources/blog/kubernetes-crash-course-part-1/ and part two here: https://www.taos.com/resources/blog/kubernetes-concepts-crash-course-part-2/.
States of a pod:
- Pending: The scheduler is still figuring out where to place the pod
- ContainerCreating: The pod has been scheduled on a node; images are being pulled and its containers are starting
- Running: The pod's containers are running
Pod Conditions are an array of true/false values that describe the status of a pod: PodScheduled, Initialized, ContainersReady, and Ready.
By default, Kubernetes assumes that a pod is ready once its containers have been created. This can be a problem if the application inside the pod isn't actually ready yet: Kubernetes will send traffic to it even though it can't accept it.
A readiness probe is a custom test for when a pod is ready to accept traffic. Kubernetes only forwards traffic once the probe succeeds. The probe can take the form of an HTTP GET request, a check that a TCP port is listening, or the execution of a custom script in the container.
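As a sketch, the HTTP GET form of a readiness probe might look like the following (the image, path, and port here are assumptions for illustration, not values from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx            # hypothetical image
      readinessProbe:
        httpGet:
          path: /healthz      # hypothetical health endpoint
          port: 80
        initialDelaySeconds: 5   # wait before the first check
        periodSeconds: 10        # check every 10 seconds
```

Until this probe returns a success response, the pod is excluded from Service endpoints and receives no traffic.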
A liveness probe is a test sent to a container periodically to determine whether it is healthy. If a container is considered unhealthy, it is destroyed and recreated. Liveness probes are configured the same way readiness probes are.
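For example, a liveness probe using the TCP socket form could be added to a container spec like this (the port is a hypothetical value):

```yaml
      livenessProbe:
        tcpSocket:
          port: 3306             # hypothetical port the app listens on
        initialDelaySeconds: 15  # give the app time to start
        periodSeconds: 20        # re-check every 20 seconds
```

If the port stops accepting connections, Kubernetes restarts the container.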
Labels in Kubernetes are simply user-defined key-value pairs. They can be used to filter views, to select pods for Deployments and Services, and so on. Using the -l flag on the kubectl get command lets you view objects with the specified label.
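A minimal sketch of labels in a pod's metadata (the key-value pairs are hypothetical):

```yaml
metadata:
  name: web
  labels:
    app: web          # hypothetical label
    tier: frontend    # hypothetical label
```

With labels like these in place, `kubectl get pods -l app=web` would list only the pods carrying that label.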
The default behavior of a pod in Kubernetes is to try to keep a container running, such as a web server or a database. If a container exits, Kubernetes will try to bring it back up repeatedly until the failure threshold is met. This behavior is controlled by the restartPolicy key in the pod definition file. For creating multiple pods that run a task to completion, Kubernetes has Jobs: they are like replica sets for pods that don't need to run indefinitely.
A Job will keep creating pods until it has had the number of successes specified in the completions key. The number of pods running at the same time can be set with the parallelism key.
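Putting those two keys together, a minimal Job definition might look like this (the name, image, and command are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-task         # hypothetical name
spec:
  completions: 5           # run until 5 pods have succeeded
  parallelism: 2           # at most 2 pods at a time
  template:
    spec:
      restartPolicy: Never # don't restart finished containers
      containers:
        - name: task
          image: busybox   # hypothetical image
          command: ["sh", "-c", "echo done"]
```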
Ingress
An Ingress is like a layer 7 load balancer built into the Kubernetes cluster. An Ingress controller is a reverse proxy that can forward requests from outside the cluster to the right pods, cluster IPs, etc. You also need a NodePort Service to expose the Ingress controller (such as Nginx) to the external web.
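A sketch of an Ingress resource routing a host to a backend Service (the host and Service name are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com          # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # hypothetical Service
                port:
                  number: 80
```

The Ingress controller reads rules like this and proxies matching requests to the named Service.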
Volumes can be defined and mounted into pods for data that needs to persist. A hostPath volume maps a directory on the host node into the pod. This is a poor fit for multi-node clusters, because each node will have different contents in the defined directory.
There are several other volume types that can be defined, such as NFS, AWS EBS, etc.
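As an illustration, a pod mounting a hostPath volume might look like this (the names, image, and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
    - name: app
      image: busybox          # hypothetical image
      volumeMounts:
        - mountPath: /data    # where the volume appears in the container
          name: data-volume
  volumes:
    - name: data-volume
      hostPath:
        path: /mnt/data       # hypothetical directory on the host node
        type: Directory
```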
Persistent Volumes
A persistent volume is part of a cluster-wide pool of volumes used by resources across the cluster. This is advantageous over volumes at the pod level because you can manage your volumes more centrally: if a volume defined at the pod level changes, every pod definition must be updated, whereas with a persistent volume any change is made in the volume object itself.
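A minimal persistent volume definition might look like the following (the name, size, and hostPath backing are hypothetical choices for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example           # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce          # mountable read-write by a single node
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data          # hypothetical backing directory
```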
Persistent Volume Claims
A Persistent Volume Claim is a request to bind to a persistent volume. There is a one-to-one relationship between persistent volumes and persistent volume claims. By default, once a persistent volume claim has been deleted, the persistent volume it was bound to is retained: it stays, but cannot be re-used by other claims. The other reclaim policies are delete, which deletes the volume when the claim is deleted, and recycle, which clears out the persistent volume and makes it available for re-use.
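A claim requesting storage from the pool might be sketched like this (the name and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example          # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi         # request half a gigabyte
```

A pod then references the claim by name under `volumes` with a `persistentVolumeClaim.claimName` entry, rather than defining the storage details itself.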
A storage class lets you define a type of storage that can be referenced in a persistent volume claim. This will automatically create a persistent volume of the requested size from the type of resource (such as an EBS volume in AWS) specified in the storage class object.
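As a sketch using the in-tree AWS EBS provisioner (the class name and parameters are hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2                      # hypothetical class name
provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner
parameters:
  type: gp2                          # EBS volume type
```

A persistent volume claim that sets `storageClassName: ebs-gp2` would then trigger creation of an EBS-backed persistent volume of the requested size.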
StatefulSets
StatefulSets deploy a group of pods like Deployments do, but with some differences:
- They deploy pods sequentially instead of simultaneously
- The pods have stable, predictable names rather than the randomized ones Deployments generate. Each name is the name of the StatefulSet plus the index of the pod: name-0, name-1, etc.
- Even if a pod fails and is recreated, it comes back up with the same name.
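The naming behavior above can be seen in a minimal StatefulSet sketch (the names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                     # pods will be named db-0, db-1, db-2
spec:
  serviceName: db-headless     # hypothetical headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: mysql:8       # hypothetical image
```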
Headless Services
A headless service is like a ClusterIP service, but instead of load-balancing it gives a DNS address to each pod behind the service. With a normal service, you access the service's IP address/DNS name and the request is sent to a random pod behind it; a headless service lets you address each specific pod, even after pods fail and are recreated. This can be useful for things like MySQL databases that have separate read and write pods.
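A headless service is simply a Service with `clusterIP` set to `None`; as a hypothetical sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless      # hypothetical name
spec:
  clusterIP: None        # this is what makes the Service headless
  selector:
    app: db              # hypothetical pod label
  ports:
    - port: 3306
```

Each pod selected by the service then gets its own stable DNS entry of the form pod-name.service-name.namespace.svc.cluster.local, so clients can target a specific pod.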