Upgrade EKS from kubernetes 1.19 to 1.20

Abdennour Toumi
1 min read · Jul 15, 2021


I. Upgrade Control plane

I am using this Terraform module following GitOps practices, so for me the upgrade comes down to replacing two attributes:

  • version: 14.0.0 -> 17.1.0 (version of the Terraform module)
  • cluster_version: 1.19 -> 1.20

Then the pipeline does the rest. I mean: terraform apply -auto-approve
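If you run the pipeline steps by hand, the sequence looks roughly like this (a sketch; the plan step is optional but worth keeping before an auto-approve):

```shell
# Pull the newer module source (14.0.0 -> 17.1.0)
terraform init -upgrade

# Review what the version bump will change before applying
terraform plan

# What the pipeline runs for you
terraform apply -auto-approve
```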

If you are not using Terraform, you still have other methods to upgrade the control plane: eksctl, the AWS Management Console, or the AWS CLI.

II. Upgrade Coredns

Check the CoreDNS compatibility matrix in the AWS EKS documentation.

Then upgrade. For me, it was:

kubectl set image --namespace kube-system deployment.apps/coredns coredns=602401143452.dkr.ecr.ap-southeast-1.amazonaws.com/eks/coredns:v1.8.3-eksbuild.1
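Afterwards, it is worth confirming the rollout went through and the new image is actually running:

```shell
# Wait for the CoreDNS deployment to finish rolling out
kubectl rollout status deployment/coredns -n kube-system

# Confirm the running image tag (should show v1.8.3-eksbuild.1)
kubectl get deployment coredns -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```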

III. Upgrade kube-proxy

Check the kube-proxy compatibility matrix in the AWS EKS documentation.

Then upgrade. For me, it was:

kubectl set image daemonset.apps/kube-proxy -n kube-system kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.20.4-eksbuild.2
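Same sanity check here: the daemonset must cycle through every node before the upgrade counts as done.

```shell
# Wait for kube-proxy pods to be replaced on every node
kubectl rollout status daemonset/kube-proxy -n kube-system

# Spot-check the running image version
kubectl get daemonset kube-proxy -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```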

IV. Cluster Autoscaler

You might need to upgrade it as well: the Cluster Autoscaler minor version should match the cluster's Kubernetes minor version, so a 1.20.x release for Kubernetes 1.20.
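Assuming it runs as a deployment in kube-system (the usual Helm/manifest setup; names and tag below are examples), the bump looks like this. Pick the exact 1.20.x tag from the cluster-autoscaler releases page:

```shell
# Check the currently deployed Cluster Autoscaler version
kubectl get deployment cluster-autoscaler -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Bump to a 1.20.x release (example tag; use the latest 1.20 patch)
kubectl set image deployment/cluster-autoscaler -n kube-system \
  cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v1.20.0
```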

V. Rolling update nodes

Install eks-workers-rolling-update:

# download it
curl -O https://gist.githubusercontent.com/abdennour/cb7cf2927740a5bd9ecdb51f5a96af0f/raw/ab6a2407a9f21b5cdd75838cf887a7bf13864c80/eks-workers-rolling-update.sh
# make sure that aws CLI and kubectl are installed

Configure your environment:

export AWS_PROFILE=... AWS_REGION=....
export KUBECONFIG=.........

Use it like a BOSS:

bash eks-workers-rolling-update.sh
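Under the hood, a workers rolling update boils down to cordon, drain, and replace for each node. A minimal manual sketch of that loop (the node-group label and the instance-termination step are placeholders, not the script's actual code):

```shell
# Rough manual equivalent: drain old nodes one by one
for node in $(kubectl get nodes -l eks.amazonaws.com/nodegroup=old-ng -o name); do
  kubectl cordon "$node"            # stop new pods landing on this node
  kubectl drain "$node" \
    --ignore-daemonsets \
    --delete-emptydir-data          # evict workloads gracefully
  # ...then terminate the EC2 instance so the ASG replaces it
  # with a node running the 1.20 AMI
done
```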

Go to sleep 😴 now! Let the script above finish the job.


Abdennour Toumi

Software engineer, Cloud Architect, 5/5 AWS|GCP|PSM Certified, Owner of kubernetes.tn