Kubectl Restart Pod: A Foolproof Guide to Pod Restarts

By Eyal Katz August 28, 2023

Containerization is the most popular approach to modern cloud deployment. This technology makes it possible to encapsulate application workloads as OS-agnostic containers. The result is a new breed of cloud-native applications that run on any infrastructure and scale up or down based on demand. Consequently, there is a need to orchestrate these containers to deliver maximum efficiency in workload execution.

Kubernetes is the de facto platform for container orchestration, with over 60% of organizations worldwide having adopted it. However, Kubernetes offers much more than orchestration middleware. It packages containers as a cohesive set of deployment objects to deliver application-specific services. Behind the scenes, everything is managed via kubectl.

What is Kubectl and What is a Kubectl Restart Pod?

kubectl is a CLI (Command Line Interface) tool for administering and managing a Kubernetes deployment. A Kubernetes deployment consists of one or more Kubernetes clusters. kubectl leverages the Kubernetes API to interact with the clusters.

At a broad level, kubectl can deploy Kubernetes applications across one or more clusters, including creating, updating, and deleting cluster resources. It can also inspect and view those resources. One of the fundamental resources of a Kubernetes cluster is the Pod. Understanding the concept of a Kubernetes Pod is imperative to maintaining an efficient Kubernetes-based system.
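For example, a few everyday kubectl operations look like this (the resource names and manifest file are placeholders):

# Create or update resources declared in a manifest
kubectl apply -f deployment.yaml

# Inspect resources in the current namespace
kubectl get pods
kubectl describe deployment my-app

# Delete a resource that is no longer needed
kubectl delete deployment my-app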

Let’s look at one such maintenance activity involving a Kubernetes Pod. More specifically, it involves administering the restart operations for the Pod, which is a crucial aspect of debugging a cluster. However, kubectl does not have a command for restarting a Pod, something like “kubectl restart pod”. That’s because the application’s functionality depends on the Pod’s health. Having an explicit command to restart a Pod increases the likelihood of unexpected application downtime caused by human oversight or error while executing the kubectl command.

But there is a workaround to perform restarts more efficiently and appropriately. Before getting into the specifics, let’s first understand the concept of a Kubernetes Pod in a little more detail.

What is a Kubernetes Pod? 

A Kubernetes Pod is the smallest deployable unit of a Kubernetes cluster. It encapsulates the application workload and is launched as part of a Kubernetes deployment. Internally, it contains one or more containers that execute the business logic of an application service. Pods run on Nodes, which represent the underlying computing environment.

Kubernetes Pods

Pods are typically deployed as replicated sets, allowing multiple instances of an application service to run simultaneously. This arrangement makes it easy to manage scale and ensures the service’s availability at all times. Replication of Pods and their instances is managed via the control plane of the Kubernetes cluster.
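As an illustrative sketch, a replicated set of Pods can be created and listed with kubectl (the my-app name and nginx image are placeholders; the --replicas flag requires a reasonably recent kubectl):

# Create a Deployment that manages three replicated Pods
kubectl create deployment my-app --image=nginx --replicas=3

# List the Pods created by the Deployment's ReplicaSet
kubectl get pods -l app=my-app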

Kubernetes Pods follow a specific set of lifecycle phases. These phases represent the functional state of the pod.

Kubernetes Pods Lifecycle

Under normal circumstances, a Pod starts in the “Pending” phase, moves to the “Running” phase, and continues in this phase as long as it does not transition to the “Succeeded,” “Failed,” or “Unknown” phase.

While the lifecycle phases dictate the progression of the Pod’s acceptance into the cluster so that it can start serving requests, there are also a set of granular statuses for each Pod based on the state of its containers.

Kubernetes Pod Conditions

The conditions “Ready” and “ContainersReady” are most relevant during the “Running” phase. Others like “PodScheduled” and “Initialized” are relevant in the “Pending” phase of the Pod’s lifecycle.    

Overall, the lifecycle phases and the condition statuses provide a good indication of the Pod’s health and whether it can serve the application.
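To check where a Pod currently stands, you can query its phase and conditions directly (the Pod name is a placeholder):

# Print only the Pod's lifecycle phase
kubectl get pod my-app-pod -o jsonpath='{.status.phase}'

# Show the full condition list (PodScheduled, Initialized, ContainersReady, Ready) and recent events
kubectl describe pod my-app-pod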

Why Should You Restart a Kubernetes Pod?

There are certain situations where a Kubernetes Pod has to be restarted. Broadly, these can be categorized as normal and abnormal scenarios.

Normal scenarios for Pod restart

Normal scenarios correspond to incidents in a Kubernetes cluster’s usual operations and maintenance.

  1. New release: Every release of the application entails the creation of a new container image. To ensure the Pod uses the new release, you need to restart the Pod after pulling the image corresponding to that release (see the example after this list).
  2. Configuration changes: Configuration changes are part of ongoing optimization and are sometimes mandated as part of a new release. These changes are part of the YAML specification for the Pod deployment, defined in the form of ConfigMaps. Additionally, there are environment variables and Secrets you’ll need to manage.
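For the new-release scenario, updating the image reference is what triggers the Pods to be replaced. A minimal sketch, assuming a Deployment named my-app with a container of the same name (both names and the image tag are hypothetical):

# Point the container at the new release; Kubernetes rolls out fresh Pods with it
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2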

Abnormal scenarios for Pod restart

Abnormal scenarios arise from incidents that have the potential to impair application functionality.

  1. Debugging: During debugging, developers often have to reset the application to bring it to the initial steady state. Such circumstances mean you’ll need to restart the underlying Pods.
  2. Pod getting hung: Sometimes, Pods do not transition to the normal state where they can start serving requests (see the diagnostic commands after this list). Some of the possible scenarios are:
    1. The Pod is stuck in the “Pending” phase due to incidents, such as delays in Init container execution, underlying resource crunch, or network issues.
    2. The Pod is stuck because containers are in the “Waiting” state due to a problem in the container image.
  3. Undesirable termination: During termination, the Pod undergoes deletion, and Kubernetes waits for a certain time for the Pod’s containers to exit naturally. However, sometimes the Pod gets stuck in a terminating state due to issues in the underlying cluster node despite all containers exiting.
  4. Resource hogging: All containers within Pods have defined resource limits for CPU and memory. In case of abnormal resource utilization within the application, these limits are breached, resulting in issues such as “out of memory”. 
  5. Abnormal state transition: There is always a possibility of unforeseen situations arising from mistaken deployments or errors in the underlying hardware and storage volumes, which cause the Pod to transition from the “Running” phase to the “Failed” or “Unknown” phase. Such scenarios lead to the Pod becoming unresponsive or crashing frequently.
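When a Pod hangs or gets stuck terminating, a few kubectl commands help pinpoint the cause before choosing a restart approach (the Pod name is a placeholder):

# Review the stuck Pod's phase, conditions, and recent events
kubectl describe pod my-app-pod

# Inspect container logs, including those of a previously crashed container
kubectl logs my-app-pod --previous

# Last resort for a Pod stuck terminating: force-delete it
kubectl delete pod my-app-pod --grace-period=0 --force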

How to Restart a Pod Using Kubectl?

As stated earlier, kubectl does not offer a direct restart command. Instead, it offers a few different approaches for different requirements. Here are some of the approaches to restarting a Pod.

Rollout restart

The kubectl rollout restart command triggers a fresh rollout of a Deployment’s Pods. As part of this procedure, Kubernetes updates all the Pods, one by one, in a controlled manner, ensuring that the application remains available and thereby minimizing the risk of downtime. This approach is best suited for production environments, where deployments are done with replica sets.
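A minimal sketch, assuming a Deployment named my-app (the name is hypothetical):

# Trigger a rolling restart of all Pods in the Deployment
kubectl rollout restart deployment/my-app

# Watch the rollout until every replacement Pod is ready
kubectl rollout status deployment/my-app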

Scaled restart

The kubectl scale command scales the number of Pods. It starts new Pods or gracefully terminates existing Pods based on the number of replicas specified in the command.

For example:

kubectl scale --replicas=3 deployment/my-app

This command sets the number of replicas for the my-app Deployment to three. Kubernetes then ensures that three replicas of the Pod are running for the my-app Deployment: if there are currently fewer than three, it starts new Pods, and if there are more than three, it terminates the extra Pods.

On its own, this command does not restart the existing Pods. It simply adjusts the number of replica Pods for a specific Deployment, StatefulSet, ReplicaSet, or ReplicationController.
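To use scaling as an actual restart, you can briefly scale a Deployment down to zero and back up. A minimal sketch using the hypothetical my-app Deployment; note that this causes complete downtime while no replicas are running, so it is better suited to non-production environments or abnormal scenarios where a hard reset is acceptable:

# Terminate all Pods, then recreate them from scratch
kubectl scale --replicas=0 deployment/my-app
kubectl scale --replicas=3 deployment/my-app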

Pod deletion

It is possible to restart a Pod by deleting it with the kubectl delete pod command. Deleting the Pod prompts Kubernetes to apply the current deployment configuration and recreate the Pod. This approach is suitable during the development phase, when developers are unit testing the containerized application logic running inside the Pod and want to restart it to debug issues.
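A minimal sketch, assuming the Pod is managed by a Deployment (the Pod name and label are hypothetical):

# Delete a single Pod; its controller recreates it automatically
kubectl delete pod my-app-pod

# Or delete every Pod carrying a given label in one go
kubectl delete pod -l app=my-app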

Environment update

The kubectl set env command updates the environment variables for a Kubernetes resource, such as a Deployment, ReplicaSet, or StatefulSet. While this command doesn’t explicitly restart any Pods, it updates the deployment configuration to include the environment variables. This, in turn, triggers a rollout resulting in the Pods being restarted with the new environment variables.

This approach is similar to the rollout restart since, internally, all the Pods are restarted in a controlled manner by Kubernetes. It can also be used as a hackish way to force a restart in an unusual situation where a restart is necessary to mitigate a security incident.  
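A common trick is to set a throwaway variable whose only purpose is to change the Pod template and trigger the rollout. A minimal sketch, assuming the hypothetical my-app Deployment (the variable name is arbitrary):

# Changing any environment variable updates the Pod template, forcing a rolling restart
kubectl set env deployment/my-app RESTARTED_AT="$(date)"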


Unlocking a Holistic Strategy for Pod Restarts

As the various kubectl options show, there are several ways of restarting a Pod. But you still need a strategy for performing Pod restarts, and it can be based on a few important considerations:

  1. Service impact: Whether the restart will impact the service availability in partial or complete downtime for a given duration.
  2. Project environment: Whether the restart is part of the development, integration, staging, or production environment within the project.
  3. Urgency for restart: Whether the restart is required to safeguard a critical function of the application from being compromised.

Service impact is always applicable in the case of a production deployment, where downtime is not an option. A rollout restart is the best option for such cases. However, a scaled restart or Pod deletion can be employed if the restart is required due to an abnormal scenario; either may cause partial downtime, depending on the incoming user traffic to the application.

As for the project environment: development, integration, and staging do not serve actual users, so you can use any of the kubectl restart options there. In the production environment, the choice of restart option is primarily governed by the service impact.

The urgency of restart depends on the criticality of an exceptional situation that mandates some code or configuration changes in the Kubernetes deployment. Security incidents and the discovery of vulnerabilities are major causes of such situations. This is mostly applicable to production environments. However, to unearth such issues earlier in the development cycle, it is important to have a set of security strategies and tooling for Kubernetes in place.

With Spectral, it is easy to apply security checks for Kubernetes across all environments, starting with continuous monitoring of source code and static application security testing, to secure code from day zero of development to production deployment. 
Try Spectral for free today.
