Day 18/40 Days of K8s: Liveness vs Readiness vs Startup Probes in K8's !! ☸️

🔍 What is a Probe?

A probe is a mechanism that periodically inspects or monitors something and takes action based on the result.

💁 Health Probes in Kubernetes

Health probes are used to inspect/monitor the health of pods to ensure:

  1. The application remains healthy.

  2. Pods are restarted if necessary (self-healing).

  3. Applications are always running with minimal user impact.

🌟 Types of Health Probes

✅ Liveness Probe

  • Checks whether the application is healthy. It monitors the application periodically and restarts the container if it is unhealthy.

  • After multiple failed restart attempts, the pod enters the CrashLoopBackOff state.

✅ Readiness Probe

  • Even if the application is healthy, that doesn’t mean it is ready to accept traffic immediately.

  • The readiness probe checks whether the application is ready to serve requests. Unlike the liveness probe, it does not restart the pod; a pod that fails its readiness check is simply removed from the Service endpoints so it receives no traffic.
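
    As a quick sketch, a readiness probe in a container spec might look like the snippet below. The `/ready` path and port are illustrative assumptions, not part of any specific application:

    ```yaml
    # Illustrative readiness probe: a pod failing this check is marked NotReady
    # and removed from Service endpoints, but the container is NOT restarted.
    readinessProbe:
      httpGet:
        path: /ready   # hypothetical readiness endpoint exposed by the app
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    ```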

✅ Startup Probe

  • This probe is typically used for slow-starting or legacy applications. It ensures the application has enough time to start up before the liveness probe begins checking its health. Once the startup probe succeeds, the liveness probe takes over to ensure the application remains live and healthy.
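
    For illustration, a startup probe is usually paired with a liveness probe as sketched below; the `/healthz` endpoint and the threshold values are assumptions for a hypothetical slow-starting app:

    ```yaml
    # Illustrative snippet: the liveness probe is held off until the startup
    # probe succeeds. Here the app gets up to failureThreshold * periodSeconds
    # = 30 * 10 = 300 seconds to start before the container is restarted.
    startupProbe:
      httpGet:
        path: /healthz   # assumed health endpoint
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    ```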

✳ Health Check Methods

All three health probes can perform health checks using:

  1. HTTP: The probe periodically sends an HTTP request to the specified endpoint; a response code between 200 and 399 means the check is successful.

  2. TCP: The probe attempts to open a TCP connection to the specified port on the container; if the connection is established, the check is successful.

  3. Command: The probe executes a command inside the container; an exit code of 0 indicates success.

In practice we use a combination of these health probes, most commonly liveness and readiness probes.

Configuring Liveness, Readiness, and Startup Probes: Examples

  1. Command Health Check

    1. Create a Pod that runs a container based on the registry.k8s.io/busybox image.

       # The liveness probe command based health check will be performed against the container, and if 
       # it receives exit code 0, it means the check is successful.
       apiVersion: v1
       kind: Pod
       metadata:
         labels:
           test: liveness
         name: liveness-exec
       spec:
         containers:
         - name: liveness
           image: registry.k8s.io/busybox
           args:
           - /bin/sh
           - -c
           - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
           livenessProbe:
             exec:
               command:
               - cat
               - /tmp/healthy
             initialDelaySeconds: 5 # wait 5 seconds before performing the initial health check
             periodSeconds: 5 # perform the health check every 5 seconds
      
    2. Create the pod and view the pod events

       kubectl describe pod liveness-exec
      

The output indicates that no liveness probes have failed yet.

    3. Wait for 35 seconds, then view the Pod events again:

    This indicates that the liveness probe has failed, causing the container to be killed and recreated. The cycle then repeats: the container restarts, creates the file, and deletes it after 30 seconds, at which point the liveness checks fail again. After multiple restarts, the pod enters the CrashLoopBackOff state.

  2. HTTP Health check

    1. Create a pod using this yaml file

       apiVersion: v1
       kind: Pod
       metadata:
         labels:
           test: liveness
         name: liveness-http
       spec:
         containers:
         - name: liveness
           image: registry.k8s.io/e2e-test-images/agnhost:2.40
           args:
           - liveness
           livenessProbe:
             httpGet:
               path: /healthz
               port: 8080
               httpHeaders:
               - name: Custom-Header
                 value: Awesome
             initialDelaySeconds: 3 # wait 3 seconds before performing the initial health check
             periodSeconds: 3 # perform the HTTP health check against the container every 3 seconds
      

      Here the liveness probe performs its health check by sending an HTTP GET request to the specified path on the container.

    2. View the pod events

       kubectl describe pod liveness-http
      

      The Pod events indicate that the liveness probe has failed and the container has been restarted: the agnhost liveness server returns HTTP 200 on /healthz only for the first 10 seconds, then returns HTTP 500, which causes the probe to fail.

  3. TCP Health check

  1. Use the config file to create a pod

     apiVersion: v1
     kind: Pod
     metadata:
       name: goproxy
       labels:
         app: goproxy
     spec:
       containers:
       - name: goproxy
         image: registry.k8s.io/goproxy:0.1
         ports:
         - containerPort: 8080
         readinessProbe: # readiness check: open a TCP connection to port 8080 on the container
           tcpSocket:
             port: 8080
           initialDelaySeconds: 10
           periodSeconds: 5
         livenessProbe: # liveness check: open a TCP connection to port 8080 on the container
           tcpSocket:
             port: 8080
           initialDelaySeconds: 10 # wait 10 seconds before the first health check
           periodSeconds: 5 # perform the health check every 5 seconds
           # failureThreshold: the number of consecutive failed probe attempts before Kubernetes considers the container unhealthy and restarts it (default 3)
    

    This example uses both readiness and liveness probes; each performs its health check by opening a TCP connection to port 8080 on the container.
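
    Beyond initialDelaySeconds and periodSeconds, every probe type accepts further tuning fields. The snippet below is an illustrative sketch; the defaults noted in the comments are the Kubernetes API defaults:

    ```yaml
    # Illustrative tuning fields common to all probe types:
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      timeoutSeconds: 1    # seconds before a single probe attempt times out (default 1)
      failureThreshold: 3  # consecutive failures before the container is considered unhealthy (default 3)
      successThreshold: 1  # consecutive successes to be considered healthy again (default 1; must be 1 for liveness)
    ```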

  2. View the pod events

     kubectl describe pod goproxy
    

    Since the container is listening on port 8080, both health checks succeed and the pod is in the Running state.

  3. Now change the liveness probe port to 3000, then either forcefully recreate the pod using --force in the command or delete the existing pod and redeploy it.

     kubectl apply -f liveness-tcp.yaml --force
     pod/goproxy configured
    
     kubectl describe pod goproxy
    

    As you can see, the liveness probe failed with a connection refused error: it tried to connect to port 3000, but the container is listening on port 8080.

NOTE: The kubelet monitors readiness and liveness probes continuously and independently of each other.

#Kubernetes #ReadinessProbe #LivenessProbe #StartupProbe #40DaysofKubernetes #CKASeries