Kubernetes

Master Kubernetes Node Commands: The Ultimate kubectl Commands and Cheat Sheet for Node Operations

Explore essential Kubernetes commands for efficient cluster management, including kubectl get, exec, and rollout, vital for system admins and devs.
December 22, 2023

Welcome to the dynamic and evolving landscape of Kubernetes, an orchestration powerhouse that has revolutionized how we deploy, manage, and scale applications in the cloud. As Kubernetes continues to be the backbone of containerized environments, understanding and mastering its commands is critical for developers, system administrators, and IT professionals.

In this comprehensive guide, we delve deep into the world of Kubernetes, focusing on the most crucial commands you need to effectively manage Kubernetes clusters. From basic node management to advanced deployment strategies, our article serves as both a learning resource for beginners and a quick reference for experienced professionals.

Whether you are troubleshooting a live system, scaling your services, or ensuring the seamless rollout of applications, the command-line interface (CLI) of Kubernetes – kubectl – is your go-to tool. Each command, with its unique functionality and purpose, plays a pivotal role in the broader context of Kubernetes cluster management. Join us as we explore these commands, unravel their syntax, and demonstrate their real-world applications. Our goal is to equip you with the knowledge and confidence to navigate the Kubernetes ecosystem, ensuring that your containerized applications run smoothly and efficiently.

kubectl get nodes

kubectl get nodes

The kubectl get nodes command is used to list all nodes in a Kubernetes cluster. Each node represents a server in the cluster. This command provides a summary of each node, including its name, status (such as Ready, NotReady, or SchedulingDisabled for a cordoned node), how long it has been part of the cluster (age), and the version of Kubernetes running on the node.

Example Use in a Real System

Imagine you're managing a Kubernetes cluster that hosts a web application. Your application suddenly experiences performance issues, and you need to quickly check the health and status of your cluster's underlying hardware.

By running kubectl get nodes, you can immediately see if any nodes are in a NotReady state, which might indicate hardware failures, connectivity issues, or other problems preventing the node from functioning correctly. This command helps you quickly identify and address such issues, ensuring minimal downtime for your application.
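A few standard kubectl flags extend this basic check and are worth keeping at hand:

```shell
# List all nodes with the default summary columns
kubectl get nodes

# Include extra columns: internal/external IPs, OS image, kernel version
kubectl get nodes -o wide

# Watch node status continuously, useful while diagnosing a NotReady node
kubectl get nodes --watch
```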

kubectl describe node

kubectl describe node "node_name"

The kubectl describe node [node-name] command provides detailed information about a specific node in a Kubernetes cluster. This command outputs a wealth of data, including the node's labels, annotations, conditions (like memory pressure, disk pressure, and readiness), addresses (like internal IP), capacity (such as CPU and memory resources), and the status of the pods running on the node. It's an invaluable tool for diagnosing issues and understanding the state of a node in depth.

Example Use in a Real System:

Suppose you're administering a Kubernetes cluster that's running several critical applications. You notice that one of the nodes is showing irregular resource usage. To investigate further, you use kubectl describe node to get detailed information about this particular node.

By running kubectl describe node [node-name], you can see a comprehensive view of the node's status, including its resource allocation (CPU, memory), conditions that might be affecting its performance, and any events related to the node. This detailed view could reveal issues like high memory usage or network problems, helping you to pinpoint the cause of the irregularities and take appropriate corrective actions.

kubectl cordon

kubectl cordon "node_name"

The kubectl cordon [node-name] command is used in Kubernetes to mark a node as unschedulable. This means that while the existing pods on the node continue to run, the scheduler won't schedule any new pods onto that node. It's important to note that cordoning a node does not affect existing pods on the node; it only prevents new pods from being scheduled onto it.

Example Use in a Real System:

Consider a scenario where you need to perform maintenance on a server that's part of your Kubernetes cluster. This might involve hardware upgrades, software updates, or other tasks requiring the node to be stable and not take on additional workload.

Before starting the maintenance, you would use kubectl cordon [node-name] to ensure that no new Kubernetes workloads are assigned to that node. By doing this, you can safely perform the maintenance without impacting new deployments or services. Once the maintenance is complete, you can then use kubectl uncordon [node-name] to allow the node to start accepting new pods again.
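The maintenance workflow described above can be sketched as a short sequence (node-1 is a hypothetical node name):

```shell
# Mark the node unschedulable; existing pods keep running
kubectl cordon node-1

# Verify: the node's status now shows SchedulingDisabled
kubectl get nodes

# ... perform the maintenance on the server ...

# Allow the scheduler to place new pods on the node again
kubectl uncordon node-1
```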

kubectl uncordon

kubectl uncordon "node_name"

The kubectl uncordon [node-name] command reverses the effect of the cordon command on a Kubernetes node. It marks a previously cordoned node as schedulable again, allowing the Kubernetes scheduler to place new pods onto this node. This command does not affect existing pods on the node but ensures that the node can accept new workloads.

Example Use in a Real System:

Imagine you have a Kubernetes cluster running critical applications, and one of the nodes needed maintenance or an upgrade, for which you used kubectl cordon to prevent new pods from being scheduled on it. After completing the maintenance and ensuring the node is fully operational and ready to handle workloads again, you would use kubectl uncordon [node-name].

This action is crucial in a production environment where you need to balance and distribute workloads efficiently across all available resources. By uncordoning the node, you effectively reintegrate it into the cluster’s resources, ensuring optimal utilization and performance of your Kubernetes environment.

kubectl drain

kubectl drain "node_name"

The kubectl drain [node-name] command is used to prepare a node for maintenance or removal from a Kubernetes cluster. When this command is executed, it performs the following actions:

  1. Cordons the Node: It marks the node as unschedulable, preventing new pods from being scheduled onto it (similar to kubectl cordon).
  2. Evicts Pods: It safely evicts all pods from the node, respecting any PodDisruptionBudgets. Evicted pods that are managed by a controller (such as a Deployment or ReplicaSet) are recreated on other nodes if the cluster has capacity. DaemonSet-managed pods are not evicted and will block the drain unless the --ignore-daemonsets flag is used, and pods using emptyDir storage require the --delete-emptydir-data flag (called --delete-local-data in older kubectl versions) before they can be evicted.

This command is critical for ensuring that node maintenance (like software updates or hardware upgrades) does not disrupt the running applications.

Example Use in a Real System:

In a real-world scenario, suppose your Kubernetes cluster hosts an e-commerce application, and one of the nodes requires a hardware upgrade to increase its capacity. Before you can perform this upgrade, you need to ensure that the services running on this node are safely relocated to other nodes to maintain the availability and functionality of your application.

By using kubectl drain [node-name], you safely evacuate all pods from the target node to other nodes in the cluster. This ensures that your e-commerce application remains available to users, without any downtime or disruption, while the node is being upgraded.
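A typical drain for the upgrade scenario above might look like this (node-1 is a hypothetical node name; the flags shown are commonly needed in real clusters):

```shell
# Evict all pods, skipping DaemonSet-managed pods and allowing
# pods with emptyDir volumes to be deleted along with their local data
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# ... perform the hardware upgrade ...

# Return the node to service once it is healthy again
kubectl uncordon node-1
```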

kubectl label nodes

kubectl label nodes "node_name" "label_key"="label_value"

The kubectl label nodes [node-name] [label-key]=[label-value] command is used to add or update labels on a specific node in a Kubernetes cluster. Labels are key/value pairs that are attached to objects, such as nodes, and can be used for organizing and selecting subsets of these objects. They are particularly useful for applying policies, viewing stats, or even routing network traffic.

Example Use in a Real System:

Imagine you're managing a Kubernetes cluster that serves a multi-tier application. This application has different components, such as front-end, back-end, and database services, which you want to run on different types of nodes based on their resource requirements. For instance, database services might require nodes with more memory and storage.

You can use kubectl label nodes to assign labels to your nodes, like tier=frontend, tier=backend, and tier=database. Then, you can configure your pods to be scheduled on nodes with specific labels using node selectors. This way, your database services will always run on nodes optimized for storage and memory, while front-end services can run on nodes optimized for network performance.
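The tier-based setup described above could be applied like this (node names and label values are illustrative):

```shell
# Label nodes by the tier they should serve
kubectl label nodes node-1 tier=frontend
kubectl label nodes node-2 tier=backend
kubectl label nodes node-3 tier=database

# Confirm which nodes carry a given label
kubectl get nodes -l tier=database

# Pods then target these nodes via spec.nodeSelector, e.g.
#   nodeSelector:
#     tier: database
```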

kubectl taint nodes

kubectl taint nodes "node_name" "key=value:taint-effect"

The kubectl taint nodes [node-name] [key=value:taint-effect] command is used to apply a taint to a specific node in a Kubernetes cluster. Taints are a way to mark a node so that no pods will be scheduled onto it unless they explicitly tolerate the taint. This command is useful for ensuring that certain nodes are reserved for specific purposes or to temporarily prevent any pods from being scheduled on a node, for example, during maintenance.

A taint consists of a key, value, and effect. The effect can be one of three types:

  • NoSchedule: Pods that do not tolerate this taint are not scheduled on the node.
  • PreferNoSchedule: Kubernetes will try to avoid placing a pod that does not tolerate this taint on the node, but this is not guaranteed.
  • NoExecute: Pods that do not tolerate this taint are evicted if they are already running on the node, and are not scheduled on it in the future.

Example Use in a Real System:

Consider a scenario where you have a Kubernetes cluster that includes a mix of high-performance nodes and regular nodes. You want to ensure that your most critical applications run exclusively on the high-performance nodes.

To achieve this, you can apply a taint to the high-performance nodes using kubectl taint nodes. For example, you could add a taint like performance=high:NoSchedule. This taint prevents regular pods from being scheduled on these nodes unless they have a corresponding toleration. Your critical applications can be configured with a toleration for this taint (typically combined with a node selector or affinity so they are actually drawn to these nodes), ensuring that they are the only workloads scheduled on the high-performance nodes, thereby optimizing their performance.
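The high-performance reservation above can be sketched as follows (the node name and taint key are illustrative):

```shell
# Repel pods that do not tolerate the taint
kubectl taint nodes hp-node-1 performance=high:NoSchedule

# Critical pods then declare a matching toleration in their spec, e.g.
#   tolerations:
#   - key: "performance"
#     operator: "Equal"
#     value: "high"
#     effect: "NoSchedule"

# Remove the taint later by appending a trailing dash to the effect
kubectl taint nodes hp-node-1 performance=high:NoSchedule-
```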

kubectl top node

kubectl top node "node_name"

The kubectl top node [node-name] command displays the current CPU and memory usage for a specific node, or all nodes if no node name is specified, in a Kubernetes cluster. This command fetches metrics from a metrics server in the cluster and presents them in a concise format. It's a vital tool for monitoring resource usage and performance of nodes, helping administrators to make informed decisions about resource management, scaling, and troubleshooting.

Example Use in a Real System:

Suppose you're monitoring a Kubernetes cluster that runs various applications, including web servers, databases, and background processing jobs. You observe that the cluster's performance is fluctuating, and you need to quickly assess if any nodes are overutilized.

By executing kubectl top node, you can get an instant overview of each node's CPU and memory usage. If a particular node shows high resource utilization, it could indicate that the node is handling more load than it can manage efficiently. This insight allows you to take corrective actions, such as scaling up your cluster by adding more nodes, or redistributing workloads to balance the load more evenly across the existing nodes.

kubectl get pods -o wide

kubectl get pods -o wide

The command kubectl get pods -o wide is used to list all the pods in the current namespace of a Kubernetes cluster, providing additional details compared to the standard kubectl get pods. The -o wide option extends the output to include more information such as the node on which each pod is running, the pod's IP address, and the age of the pod. This command is essential for gaining a quick and detailed overview of the pods' status and their placement in the cluster.

Example Use in a Real System:

Imagine you're managing a Kubernetes cluster that hosts a multi-component application. Suddenly, you receive reports of intermittent application issues from users. To diagnose the problem, you need to quickly understand the distribution and status of your pods across the cluster.

By running kubectl get pods -o wide, you can see not only which pods are running, down, or in a state of transition, but also where they are running. This information helps in identifying if the issues are isolated to pods on a specific node, which could indicate a node-level problem, or if they are more widespread, suggesting an application-level issue. For instance, if you notice that all problematic pods are on the same node, you might investigate that node for resource constraints or connectivity issues.
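To focus the investigation on a single suspect node, the listing can be filtered (node-1 is a hypothetical node name; --field-selector is a standard kubectl flag):

```shell
# Show all pods with their node placement and pod IPs
kubectl get pods -o wide

# Only pods scheduled on the suspect node, across all namespaces
kubectl get pods -A -o wide --field-selector spec.nodeName=node-1
```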

kubectl annotate node

kubectl annotate node "node_name" "annotation"

The kubectl annotate node [node-name] [key]=[value] command is used to add or update annotations on a specific node in a Kubernetes cluster. Annotations are key/value pairs that attach metadata to Kubernetes objects. Unlike labels, annotations are not used to identify and select objects, but they can store additional, non-identifying information for use by tools and libraries.

Example Use in a Real System:

Suppose you are managing a Kubernetes cluster that is used for both development and production workloads. You want to mark certain nodes with information regarding their maintenance schedules, special hardware details, or custom policies for cluster management tools.

By using kubectl annotate node, you can add this kind of metadata to your nodes. For example, you can annotate a node with its maintenance schedule or a flag indicating it's designated for development workloads only. This information can then be used by your automated scripts or cluster management tools to make decisions, like scheduling maintenance jobs during non-peak hours or ensuring that production workloads do not get scheduled on development nodes.
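The maintenance-schedule idea above might look like this (the annotation key and values are purely illustrative):

```shell
# Attach a maintenance window as metadata
kubectl annotate node node-1 maintenance-window="sunday-02:00-04:00"

# Updating an existing annotation requires --overwrite
kubectl annotate node node-1 maintenance-window="saturday-01:00-03:00" --overwrite

# Remove the annotation by appending a trailing dash to the key
kubectl annotate node node-1 maintenance-window-
```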

kubectl delete node

kubectl delete node "node_name"

The kubectl delete node [node-name] command is used to remove a node from a Kubernetes cluster. This action is typically taken when a node is decommissioned, becomes irreparably faulty, or needs to be replaced. It's important to note that this command does not physically shut down or delete the node itself from the infrastructure; it merely removes the node from Kubernetes' cluster management. Before running this command, it's crucial to ensure that any workloads running on the node are safely drained and relocated to other nodes in the cluster.

Example Use in a Real System:

Imagine you're managing a Kubernetes cluster, and one of your nodes experiences a critical hardware failure that cannot be resolved. To remove this node from your cluster, you would first evacuate any running pods from the node using kubectl drain [node-name]. This step ensures that the services and applications running on the faulty node are safely rescheduled onto other nodes in the cluster.

Once the node is drained, you can use kubectl delete node [node-name] to remove the node from the cluster. This action updates the cluster state, acknowledging that the node is no longer part of the cluster and should not be considered for scheduling future workloads.

kubectl edit node

kubectl edit node "node_name"

The kubectl edit node [node-name] command opens the configuration of a specific node in a Kubernetes cluster in an editor. This command allows you to directly modify the configuration of the node, which is represented in YAML format. This can include changes to labels, annotations, or other specific settings that are part of the node's configuration. It's important to use this command with caution, as improper changes can affect the node's behavior and the cluster's functionality.

Example Use in a Real System:

Suppose you're running a Kubernetes cluster and need to update the configuration of a node to add a custom label or modify its taints for better workload management. For instance, you might want to label a node with specific hardware capabilities like an SSD or GPU to ensure that certain workloads are scheduled on nodes with these resources.

Using kubectl edit node [node-name], you can directly edit the node's configuration. Upon saving and exiting the editor, Kubernetes will apply the changes. This ability is particularly useful for quickly applying configuration changes without having to write and apply a separate YAML file.

kubectl patch node

kubectl patch node "node_name"

The kubectl patch node [node-name] command is used to make quick and specific changes to the configuration of a node in a Kubernetes cluster. This command is particularly useful for updating certain aspects of a node's configuration without the need for a full edit. The patch-type can be one of several formats such as json, merge, or strategic, and the patch-content is the actual content of the update in the specified format.

Example Use in a Real System:

Imagine you have a Kubernetes cluster and want to update the labels or annotations of a node dynamically, based on changing requirements or to reflect a change in the node's role or capabilities. For instance, you might need to add a label to a node to indicate that it now has SSD storage.

Using kubectl patch node, you can quickly apply this change. For example, you could use a command like kubectl patch node [node-name] --type='merge' --patch '{"metadata": {"labels": {"storage": "ssd"}}}'. This command would update the node's labels to include storage: ssd, which could then be used by the scheduler to place appropriate workloads on this node.
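For comparison, the same label change can be expressed with different patch types (node-1 is a hypothetical node name):

```shell
# Merge patch: supply only the fields to change
kubectl patch node node-1 --type=merge \
  --patch '{"metadata": {"labels": {"storage": "ssd"}}}'

# JSON patch: an explicit list of operations (RFC 6902)
kubectl patch node node-1 --type=json \
  --patch '[{"op": "add", "path": "/metadata/labels/storage", "value": "ssd"}]'
```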

kubectl replace -f

kubectl replace -f "node_name.yaml"

The kubectl replace -f [file.yaml] command is used to replace a Kubernetes resource with a new definition specified in a YAML or JSON file. This command completely replaces the existing resource specification with the new one provided in the file. It's commonly used when you need to update resources like deployments, services, or pods with a new configuration. Unlike kubectl apply, which merges changes, kubectl replace will overwrite the existing specification.

Example Use in a Real System:

Consider a situation where you have a deployment in your Kubernetes cluster that needs significant updates, including changes to environment variables, resource limits, or the image version. You have prepared a new deployment specification with all these changes in a YAML file.

By using kubectl replace -f [file.yaml], you can quickly update the deployment to the new configuration. This is particularly useful for cases where changes are too extensive for a simple kubectl edit or when you want to ensure that the resource configuration matches exactly what is defined in the YAML file. It ensures that the deployment in the cluster reflects precisely the state defined in the file, without any remnants of the previous configuration.
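A common replace workflow is to export the live object, edit it, and replace it wholesale (the deployment name is illustrative):

```shell
# Export the current deployment definition to a file
kubectl get deployment web-app -o yaml > web-app.yaml

# ... edit web-app.yaml: image version, env vars, resource limits ...

# Overwrite the live object with the edited definition
kubectl replace -f web-app.yaml
```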

kubectl get node --selector=[label-query]

kubectl describe node "node_name"

The kubectl get node --selector=[label-query] command is used to list nodes in a Kubernetes cluster that match specific label criteria. The --selector (or -l for short) flag allows you to filter nodes based on their labels. Labels are key-value pairs associated with Kubernetes objects and are used for organizing and selecting subsets of objects. This command is particularly useful for identifying nodes with certain characteristics, like nodes in a specific zone, nodes with certain hardware features, or nodes designated for particular workloads.

Example Use in a Real System:

Imagine you are managing a Kubernetes cluster with nodes spread across multiple geographic regions or zones. You've labeled each node with its respective region, for example, region=us-west, region=us-east, region=eu-central, etc. Now, suppose you need to quickly identify all nodes in the us-west region for a specific maintenance update or to evaluate the capacity and health of nodes in that region.

By executing kubectl get node --selector=region=us-west, you can instantly list all nodes that are labeled as part of the us-west region. This targeted selection helps in efficiently managing and interacting with subsets of nodes based on your operational needs, like performing region-specific updates or analyzing resource utilization in that particular region.
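Selector queries compose naturally; a few variations on the region example (label keys and values are illustrative):

```shell
# All nodes in the us-west region
kubectl get node --selector=region=us-west

# Shorthand form, combining multiple label requirements
kubectl get node -l region=us-west,disktype=ssd

# Set-based selection: nodes in either of two regions
kubectl get node -l 'region in (us-west,us-east)'
```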

kubectl autoscale

kubectl autoscale rs "replicaset_name" --min="min_pods" --max="max_pods"

The kubectl autoscale command is used to automatically scale the number of pods in a replication controller, deployment, replica set, or stateful set. This command creates a HorizontalPodAutoscaler resource that automatically scales the number of pod replicas based on observed CPU utilization or other select metrics. The --min and --max flags specify the minimum and maximum number of pods that the autoscaler can set, and the optional --cpu-percent flag sets the target CPU utilization percentage that triggers the scaling.

Example Use in a Real System:

Consider you are managing a Kubernetes cluster that runs a web application. The traffic to this application varies significantly, with peak usage during certain hours or days. To ensure that your application can handle this variable load while also being cost-effective, you need to automatically scale the application based on demand.

By using kubectl autoscale, you can set up auto-scaling for your web application's deployment. For example, you can configure it to maintain between 3 to 10 replicas of the pods, scaling up when CPU usage exceeds 70%. This setup ensures that during high-traffic periods, your application scales up to handle the load, and during quieter periods, it scales down to reduce resource usage and costs.
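The 3-to-10 replica setup described above translates directly into a single command (web-app is a hypothetical deployment name):

```shell
# Create a HorizontalPodAutoscaler targeting 70% CPU utilization
kubectl autoscale deployment web-app --min=3 --max=10 --cpu-percent=70

# Inspect the autoscaler's current state and targets
kubectl get hpa web-app
```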

kubectl rollout status daemonset

kubectl rollout status daemonset "daemonset_name"

The kubectl rollout status daemonset [daemonset-name] command is used to check the current rollout status of a daemonset in a Kubernetes cluster. A daemonset ensures that all (or some) nodes run a copy of a pod. When you update a daemonset, Kubernetes rolls out the changes to the pods, and this command allows you to monitor the progress of that rollout. It provides information about how the update is being applied across the nodes, such as how many pods have been updated to the new version and if the rollout is successful or not.

Example Use in a Real System:

Imagine you are managing a Kubernetes cluster that uses a daemonset to deploy a logging agent on every node in the cluster. You need to update this logging agent to a new version, which requires updating the daemonset.

After updating the daemonset, you can use kubectl rollout status daemonset [daemonset-name] to monitor the progress of the update. This command helps you track how the new version of the logging agent is being deployed across the nodes in the cluster. It's crucial for ensuring that the update is proceeding as expected and for identifying any issues that might arise during the rollout, such as pods not updating correctly or failing to start.
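The logging-agent update could be driven and monitored like this (the daemonset, container, and image names are illustrative):

```shell
# Roll out a new image for the agent's container
kubectl set image daemonset/log-agent log-agent=example.com/log-agent:v2

# Block until the rollout completes (or report why it is stuck)
kubectl rollout status daemonset log-agent

# If the new version misbehaves, roll back to the previous revision
kubectl rollout undo daemonset/log-agent
```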

kubectl get services -o wide

kubectl get services -o wide

The kubectl get services -o wide command is used to list all services in the current namespace of a Kubernetes cluster, providing additional details compared to the standard kubectl get services. Services in Kubernetes are an abstract way to expose an application running on a set of Pods as a network service. The -o wide option extends the output to include more information, such as the type of service (e.g., ClusterIP, NodePort, LoadBalancer), the external IP (if applicable), and the ports exposed by the service.

Example Use in a Real System:

Suppose you're operating a Kubernetes cluster that hosts a variety of applications, including web services, APIs, and internal back-end services. To ensure efficient network traffic management and to troubleshoot connectivity issues, you need a clear view of how these services are exposed and accessed within the cluster.

By executing kubectl get services -o wide, you can quickly see a detailed overview of all services, including what ports they are listening on and how they are exposed (internally within the cluster or externally). This information is crucial when configuring ingress resources, setting up network policies, or diagnosing service accessibility issues. For instance, if an external application is unable to connect to one of your services, checking the service type and external IP can be a first step in troubleshooting the issue.

kubectl proxy

kubectl proxy

The kubectl proxy command sets up a proxy server between your local machine and the Kubernetes API server. This proxy provides a way to communicate with the API server without exposing it directly to the outside world or dealing with authentication in a manual way. The command starts a local HTTP server that forwards all requests to the Kubernetes API server. This is particularly useful for accessing the Kubernetes API, exploring cluster state and resources, or for debugging purposes.

Example Use in a Real System:

Suppose you're developing or debugging applications that interact with Kubernetes, and you need to test API calls or access the Kubernetes dashboard without exposing the API server or configuring complex authentication. You can use kubectl proxy to create a secure and straightforward connection to the Kubernetes API.

By running kubectl proxy, it starts a server on your local machine, and you can access the Kubernetes API via localhost on the port where the proxy is running. For instance, you can access the Kubernetes dashboard through this proxy, make API calls from your local scripts, or use it to interact with cluster resources during development and testing without setting up additional authentication mechanisms.
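Once the proxy is running, the API is reachable over plain HTTP on localhost (8001 is kubectl's default proxy port):

```shell
# Terminal 1: start the proxy (defaults to 127.0.0.1:8001)
kubectl proxy

# Terminal 2: query the API through the proxy, no auth headers needed
curl http://127.0.0.1:8001/api/v1/nodes
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
```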

kubectl exec

kubectl exec "pod_name" -- "command"

The kubectl exec [pod-name] -- [command] command is used to execute a command inside a container in a specific pod of a Kubernetes cluster. This is an incredibly useful tool for debugging and interacting with the containers in your cluster. It allows you to run commands in a container just as if you were logged into it, which is essential for tasks like checking configurations, viewing logs, exploring issues, or running interactive shells.

Example Use in a Real System:

Imagine you have a Kubernetes cluster running a web application, and one of the pods is reporting errors. You need to quickly diagnose the issue by checking the application logs or the current configuration within the container.

By using kubectl exec, you can directly run commands inside the problematic container. For example, if you want to view the contents of a log file inside a container, you can execute a command like kubectl exec [pod-name] -- cat /var/log/app.log. This allows you to instantly view the log file's output, helping you to diagnose and address the issue more quickly.
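Beyond one-off commands, exec can open an interactive shell inside a container (the pod and container names are illustrative):

```shell
# Run a single command in the pod's default container
kubectl exec web-pod -- cat /var/log/app.log

# Open an interactive shell; -i keeps stdin open, -t allocates a TTY
kubectl exec -it web-pod -- /bin/sh

# Target a specific container in a multi-container pod
kubectl exec -it web-pod -c sidecar -- /bin/sh
```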

Conclusion on Kubernetes Node Commands on kubectl

As we conclude our journey through the intricate maze of Kubernetes commands, it's evident that the power and flexibility of Kubernetes as a container orchestration tool lie in its comprehensive command-line interface. From the simplicity of kubectl get pods to the complexity of managing deployments and resources with kubectl apply or kubectl exec, each command offers a unique lever to precisely control and optimize your Kubernetes environment.

In mastering these commands, you have not just acquainted yourself with the basic building blocks of Kubernetes but have also embraced the deeper nuances that make it such a robust and scalable platform. The real-world applications and examples discussed serve as a testament to the versatility and efficacy of Kubernetes in addressing diverse operational challenges in container management.

Remember, the journey in Kubernetes is one of continuous learning and adaptation. The landscape of technology, especially in the realm of container orchestration, is ever-evolving. Staying updated with the latest developments, experimenting with new features, and continuously refining your approach will ensure that you remain at the forefront of this dynamic field.

We hope this guide has illuminated the path for your Kubernetes endeavors, empowering you to deploy, manage, and scale applications with greater confidence and proficiency. Whether you're a developer, a system administrator, or an IT professional, the knowledge of these key Kubernetes commands is a valuable asset in your toolkit, paving the way for seamless, efficient, and effective container orchestration.