diff --git a/README.md b/README.md
index d7953ef..6837917 100644
--- a/README.md
+++ b/README.md
@@ -101,6 +101,41 @@ $ kubectl get no -w
 **Note:-** You should see nodes joining the cluster within a few minutes.
 ---
+#### EKS cluster upgrade using a new ASG file in Terraform
+Create a new worker node file (eks-worker-node-upgrade-v2.tf) and make the changes below to upgrade the EKS cluster.
+* Change the userdata name to the new version (in eks-worker-node-upgrade-v2.tf); it must not conflict with the old one.
+* Change the launch configuration and autoscaling group names to the new version; they must not conflict with the old ones.
+* Change the AMI to the one AWS provides for the EKS version you are upgrading to -- ##eks-worker-ami -- change it to the new version.
+* In the new worker node file (eks-worker-node-upgrade-v2.tf), extra kubelet arguments have been added to taint the dedicated nodes.
+* Once you apply the new .tf file, the new nodes will spin up; after that, move the workloads to the new nodes and delete the old ones.
+* Refer to the eks-worker-node-upgrade-v2.tf file and follow the steps below to upgrade the worker nodes.
+
+### Create the new file, change the EKS master version in the .tf file, and apply the changes:
+
+```
+$ terraform apply
+```
+## Once the Terraform changes are applied, both the old and the new nodes will be listed, with different versions:
+
+```
+$ kubectl get no
+NAME                          STATUS   ROLES   AGE   VERSION
+ip-10-0-87-98.ec2.internal    Ready            21d   v1.12.7
+ip-10-0-15-24.ec2.internal    Ready            21d   v1.12.7
+ip-10-0-23-100.ec2.internal   Ready            21d   v1.13.7-eks-c57ff8
+ip-10-0-14-23.ec2.internal    Ready            21d   v1.13.7-eks-c57ff8
+```
+### Next, update the kube-system components based on version compatibility and cordon the old nodes (so that no new workloads are scheduled on them):
+```
+$ kubectl cordon <old-node-name>
+```
+### Once you start draining the old nodes, the workloads will move to the new nodes:
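+The file changes described above can be sketched roughly as follows. This is a minimal, hypothetical example, not the actual contents of eks-worker-node-upgrade-v2.tf: the resource names (`eks-node-v2`), variables (`var.eks-worker-ami-v2`, `var.instance-type`, `var.private-subnets`, `var.cluster-name`), and the IAM/security group references are all assumptions.
+
+```hcl
+# New launch configuration: renamed so it does not clash with the old one,
+# pointing at the new-version EKS worker AMI and the v2 userdata local.
+resource "aws_launch_configuration" "eks-node-v2" {
+  name_prefix          = "eks-node-v2-"
+  image_id             = "${var.eks-worker-ami-v2}"   # AMI for the target EKS version
+  instance_type        = "${var.instance-type}"
+  iam_instance_profile = "${aws_iam_instance_profile.eks-node.name}"
+  security_groups      = ["${aws_security_group.eks-node.id}"]
+  user_data_base64     = "${base64encode(local.eks-node-private-userdata-v2)}"
+
+  lifecycle {
+    create_before_destroy = true
+  }
+}
+
+# New autoscaling group: also renamed, so the old and new node groups
+# can run side by side while workloads are moved over.
+resource "aws_autoscaling_group" "eks-node-v2" {
+  name                 = "eks-node-v2"
+  launch_configuration = "${aws_launch_configuration.eks-node-v2.id}"
+  desired_capacity     = 2
+  min_size             = 1
+  max_size             = 4
+  vpc_zone_identifier  = ["${var.private-subnets}"]
+
+  # This tag is required so the cluster recognizes the nodes as its own.
+  tag {
+    key                 = "kubernetes.io/cluster/${var.cluster-name}"
+    value               = "owned"
+    propagate_at_launch = true
+  }
+}
+```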
+
+```
+$ kubectl drain <old-node-name>
+```
+### Once the draining is completed for all the old nodes, delete them.
 ## Contribution
 We are happy to accept the changes that you think can help the utilities grow.
diff --git a/eks-worker-node-upgrade-v2.tf b/eks-worker-node-upgrade-v2.tf
new file mode 100755
index 0000000..cf27bd2
--- /dev/null
+++ b/eks-worker-node-upgrade-v2.tf
@@ -0,0 +1,80 @@
+# EKS currently documents this required userdata for EKS worker nodes to
+# properly configure Kubernetes applications on the EC2 instance.
+# We utilize a Terraform local here to simplify Base64 encoding this
+# information into the AutoScaling Launch Configuration.
+# More information: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml
+
+## Updated AMI support for /etc/eks/bootstrap.sh
+#### User data for worker launch
+
+locals {
+  eks-node-private-userdata-v2 = <
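+A userdata local of this kind typically wraps a heredoc that calls /etc/eks/bootstrap.sh, which ships with the EKS-optimized AMI. The following is only a sketch of what such a v2 userdata might look like: the cluster resource name (`aws_eks_cluster.demo`), the taint key (`dedicated=true:NoSchedule`), and `var.cluster-name` are assumptions, not the actual values from this file.
+
+```hcl
+locals {
+  eks-node-private-userdata-v2 = <<USERDATA
+#!/bin/bash
+set -o xtrace
+# bootstrap.sh joins the node to the cluster; --kubelet-extra-args passes
+# the taint for the dedicated nodes mentioned in the upgrade steps above.
+/etc/eks/bootstrap.sh \
+  --apiserver-endpoint '${aws_eks_cluster.demo.endpoint}' \
+  --b64-cluster-ca '${aws_eks_cluster.demo.certificate_authority.0.data}' \
+  --kubelet-extra-args '--register-with-taints=dedicated=true:NoSchedule' \
+  '${var.cluster-name}'
+USERDATA
+}
+```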