Kubernetes is an interesting and growing world that receives great community contributions to make its management easier. Although you still have the option of self-managing your entire cluster, there are solutions that speed up your cluster's bootstrap. For example, each major cloud provider offers a managed Kubernetes service that reduces the burden of maintaining the whole cluster by yourself, which is definitely a full-time job.
I want to focus on the AWS offering, Amazon EKS, the managed service that removes the administration of the cluster's control plane and also provides tooling to easily operate core Kubernetes components (the DNS provider you chose, such as CoreDNS, plus kube-proxy, a CNI plugin, and so on). It also offers integrations for monitoring, logging, upgrades, and more. It is very convenient for getting started with your cluster in minutes.
That is the pretty side of the coin, but Amazon EKS is still Kubernetes, and although it takes care of some complexity, there are other challenges that you'll have to deal with. One of them is the manual management of the EKS resource itself, the worker nodes, upgrades, and the overall state of your cluster. This is also part of the AWS Shared Responsibility Model. For example, you still need to figure out how to make your clusters repeatable to create, whether for spinning up similar environments or for disaster recovery scenarios. The good thing is that, as I said before, the community (and even AWS itself) has worked on different open-source solutions that relieve these concerns.
Now, let me show you something nice!
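# a single command with sensible defaults is enough to get a cluster going
eksctl create cluster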
20 minutes later…
Your cluster is up and running, ready to receive your commands!
Introducing EKSCTL, the "Official Amazon EKS CLI" for managing your cluster from the command line and even from YAML files in a declarative way to enable Infrastructure as Code.
Getting started with EKSCTL
The installation of EKSCTL depends on the operating system you are working on. The official page describes the installation steps: https://eksctl.io/installation/. Once installed, you get the same features regardless of which operating system you are using.
Once installed, the other requirement is to have the AWS CLI configured with credentials that have access to EKS, EC2, CloudFormation, EC2 Auto Scaling, IAM, and Systems Manager. Depending on which features you use, you will probably need more permissions. For more information about how to configure your AWS CLI, please follow this:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html.
EKSCTL offers the following features:
- Create, get, list, and delete clusters
- Create, drain, and delete nodegroups
- Scale a nodegroup
- Update a cluster
- Use custom AMIs
- Configure VPC Networking
- Configure access to API endpoints
- Support for GPU nodegroups
- Spot instances and mixed instances
- IAM Management and Add-on Policies
- List cluster CloudFormation stacks
- Install CoreDNS
- Build a KUBECONFIG file for a cluster
It is also good to know that EKSCTL can manage clusters whether or not they were created with it. This is very convenient if you have clusters created manually, with the AWS SDK, with CloudFormation, or any other method. For these clusters, alongside the configured AWS CLI, you'll need the KUBECONFIG file to access that cluster.
Command line or YAML file
With EKSCTL, you can interact with EKS clusters entirely from the command line, but you can also specify YAML files (just like Kubernetes declaration files) with all the configuration of your cluster. This is very convenient because you can add a lot of customization to the YAML file without having to pass a long list of parameters on the command line. For example, to create a cluster with worker nodes using the CLI, you have to run something like the following command:
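# names, region, and sizes below are placeholders
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodegroup-name workers \
  --node-type t3.medium \
  --nodes 3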
And, if you want to add more customization, you'll end up with an even larger command to run and maintain.
Instead, you can opt for the YAML version, which will look something like this:
# cluster.yaml
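# all names and values below are illustrative
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: us-east-1
  version: "1.27"

nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 20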
Then, you can use the following command with all of these configurations:
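eksctl create cluster -f cluster.yaml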
With this, I am not saying the best way to do things is always through YAML files. It is definitely convenient, but there are quick tasks that will benefit from the CLI commands.
Updates to an existing cluster
Continuing from our YAML example from before, after you create the cluster you can make changes to the cluster.yaml file and then apply it. EKSCTL will identify the differences and apply them to your cluster. For example, let's say we want to change the desiredCapacity to 15; we can submit the same file with only that property changed and it'll perform the update for us:
# cluster.yaml
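# same file as before; only desiredCapacity has changed
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: us-east-1
  version: "1.27"

nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 15
    minSize: 1
    maxSize: 20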
After we save the content of our new file, we should run the command:
#Note the word "update" instead of "create"
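eksctl update cluster -f cluster.yaml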
EKSCTL uses CloudFormation behind the scenes to manage everything it creates, from the EKS cluster itself, to the worker nodes, IAM roles, policies, etc.
Main Features
EKSCTL is capable of way more than simply creating, updating, and deleting clusters; it also offers a lot more inside Kubernetes itself, configuration at the AWS level, networking, and more. Let's dive into some of those to get the idea.
Manage Node Groups
First, let's start by saying that there can be different types of node groups in EKS: managed node groups, which are visible in the EKS console and for which AWS manages the provisioning and registration of the EC2 instances into the cluster, and unmanaged node groups (also known as self-managed), where you control how the worker nodes are connected to the EKS cluster.
Both types are supported by EKSCTL under the property nodeGroups or managedNodeGroups. Both accept pretty much the same fields. For example, let's take a look at the following YAML snippet with some of the main properties we can set for our nodes:
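# cluster.yaml (nodegroup section; all values are illustrative)
nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 20
    volumeSize: 50
    labels:
      role: general-purpose
    tags:
      team: platform
    ssh:
      allow: true
      publicKeyName: my-keypair   # hypothetical EC2 key pair name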
As we can see, the meaning of some fields is quite clear.
- minSize and maxSize set how far up or down the Auto Scaling group that manages the node group can scale.
- volumeSize is the amount of storage, in GB, that each node's disk will have.
- labels are Kubernetes labels, which are useful in the Kubernetes world together with taints, tolerations, affinity, and so on.
- tags are the AWS tags attached to the Auto Scaling group, which propagate to the actual nodes.
- ssh (and ssh.allow) defines whether access to the nodes via SSH is permitted.
The list goes on, to the point that this object can represent everything you can possibly configure for the node groups in your EKS cluster.
Upgrade Kubernetes Version
Kubernetes versions move super fast, and so does EKS. AWS launched EKS back in 2018 and I personally started using it back then. There have been quite a lot of updates since, and having EKSCTL help me with that is an amazing advantage. Let me explain.
There are three main items while updating a Kubernetes cluster in general:
- The cluster itself
- The worker nodes
- The internal Kubernetes tooling (like kube-proxy, the CNI plugin, etc.), also known as add-ons in EKS.
Upgrade Control Plane
EKSCTL performs each of these updates with a single line, offering both commands and YAML configurations for them. For example, let's say we have our EKS cluster on version 1.27 and we want to upgrade to the next one.
For the Cluster, we have the command:
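# cluster name is a placeholder
eksctl upgrade cluster --name demo-cluster --version 1.28 --approve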
Or we can use the YAML file as well for this:
# cluster.yaml
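# values are illustrative; the important change is the version field
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: us-east-1
  version: "1.28"

# then run:
eksctl upgrade cluster -f cluster.yaml --approve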
EKSCTL will then perform the upgrade based on the configuration defined in the file. This process takes around 15-20 minutes to complete, and it does so without any type of downtime.
Upgrade Node Groups
Next in our upgrade list, we have the worker nodes. As expected, this also supports both CLI and YAML configurations. For example, each nodegroup has its own name, so you could check which nodegroups a cluster has and upgrade one of them with the following commands:
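# cluster and nodegroup names are placeholders
eksctl get nodegroup --cluster demo-cluster
eksctl upgrade nodegroup --cluster demo-cluster --name workers --kubernetes-version 1.28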
You can also do this using the YAML file, but it is a little different. New nodegroups will inherit the version of the cluster specified in the metadata field of the file, like this:
...
# cluster.yaml
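# names and version below are illustrative
metadata:
  name: demo-cluster
  region: us-east-1
  version: "1.28"

nodeGroups:
  - name: workers-1-28   # a new nodegroup, created with the cluster version
    instanceType: t3.medium
    desiredCapacity: 3

# then create the new nodegroup(s) from the same file:
eksctl create nodegroup -f cluster.yaml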
In this case, the update will create a new nodeGroup with the version of the cluster, which is 1.28.
Note: this could cause downtime to applications while old nodes are removed. Kubernetes tries to keep live, breathing pods of your workloads running, but there are more factors tied to this that you need to keep an eye on. This applies to both the CLI command and the YAML update.
Upgrade Add-ons
And finally, we have the add-ons. By default, EKS comes with three add-ons: kube-proxy, the AWS VPC CNI plugin, and CoreDNS. These three are easy to manage through the command line:
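# cluster name is a placeholder
eksctl utils update-kube-proxy --cluster demo-cluster --approve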
#This is the CNI Plugin
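eksctl utils update-aws-node --cluster demo-cluster --approve
eksctl utils update-coredns --cluster demo-cluster --approve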
These commands know which version of each add-on is compatible with the new version of the cluster, and they do a rollout of the pods so the functionality of the cluster is not affected.
Advanced features
This is just the beginning for EKSCTL, so I want to discuss some other areas that are worth mentioning as benefits or features of this tool.
IAM Roles for Service Accounts: This is a native way EKS offers to allow Pods running in the Kubernetes cluster to securely assume IAM roles using the AWS SDK and perform actions against the AWS API. With this feature, we can give a workload permissions to consume an S3 bucket or an SQS queue, for example, without installing third-party tooling or having to maintain anything extra. This can be set up using EKSCTL by enabling the cluster to use an IAM OIDC provider.
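A rough sketch of that setup could look like this (the cluster name, namespace, service account name, and policy ARN are placeholders):

eksctl utils associate-iam-oidc-provider --cluster demo-cluster --approve
eksctl create iamserviceaccount \
  --cluster demo-cluster \
  --namespace default \
  --name s3-reader \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve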
GitOps: With the help of Flux, EKSCTL brings GitOps to your cluster by letting us set up the configuration in a gitops.flux object inside the YAML definition. This allows Kubernetes YAML manifests stored in a Git repository to be applied automatically to your cluster, which is convenient for applications that you want to roll out automatically to multiple clusters.
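A minimal sketch of that object, assuming a GitHub repository (the owner, repository, branch, and path are placeholders), could look like this, applied with a command like eksctl enable flux -f cluster.yaml:

gitops:
  flux:
    gitProvider: github
    flags:
      owner: my-org
      repository: my-cluster-config
      branch: main
      path: clusters/demo-cluster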
Networking: EKSCTL offers you the ability to create and maintain your cluster and worker nodes within your own VPC and subnets. For example, you can choose which subnets specific worker nodes should use, or which the whole cluster should use. You can also determine which security groups it'll use, so new ones are not created if you already have some configured for another use case. You could also make your cluster fully private, for example by making access possible only from a specific CIDR block, and protect your workloads as much as possible. The list is endless, as VPC and networking setups are a whole world.
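For instance, a hedged sketch of reusing an existing VPC and restricting public access to the API endpoint might look like this (all IDs and CIDRs are placeholders):

vpc:
  id: vpc-0123456789abcdef0
  subnets:
    private:
      us-east-1a: { id: subnet-0aaa1111bbb222333 }
      us-east-1b: { id: subnet-0ccc4444ddd555666 }
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
  publicAccessCIDRs: ["203.0.113.0/24"]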
EKS Anywhere: This is a flavor of EKS clusters where the machines are not necessarily in AWS. It enables you to have worker nodes on-premises or on other clouds, for example. To manage these from the EKS console and take advantage of some of the EKS benefits, you can definitely use EKSCTL, as it supports management of those. The root command for this will be:
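eksctl anywhere
# for example (the config file name is a placeholder):
eksctl anywhere create cluster -f eks-anywhere-cluster.yaml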
Delete Cluster
And of course, you can delete the cluster, which, as expected, is not an easy task if you try to do it manually. The main reason is that EKS works with EC2 instances, Auto Scaling groups, IAM roles, security groups, etc. Thankfully, EKSCTL wraps all of these tasks under a simple (but powerful) command:
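# cluster name is a placeholder; a config file via -f also works
eksctl delete cluster --name demo-cluster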
This will remove everything in order using the same CloudFormation stacks it used for creating the whole thing.
Conclusion
Today, EKSCTL is the de facto tool for managing EKS clusters, and it is supported in the open-source world by both the community and AWS employees. We just scratched the surface in this 10,000-foot review of EKSCTL. Practicing with the tool and playing around with it is what will drive you to mastery.