Merge pull request miqdigital#10 from nandeeshb09/eks-upgrade-steps
Eks upgrade steps
ntantri authored Jan 22, 2021
2 parents 1fd5c23 + bc33f10 commit 8f52183
Showing 4 changed files with 118 additions and 0 deletions.
35 changes: 35 additions & 0 deletions README.md
@@ -101,6 +101,41 @@
$ kubectl get no -w
**Note:** You should see nodes joining the cluster within a few minutes.

---
#### EKS cluster upgrade using a new ASG file in Terraform
Copy the existing eks-worker-node-v1.tf file to a new file with a different name, then make the changes below for the EKS cluster upgrade.
* Change the userdata local's name to the new version (as in eks-worker-node-upgrade-v2.tf) so it does not conflict with the old one.
* Change the launch configuration and autoscaling group names to the new version so they do not conflict with the old ones.
* Change the AMI (the eks-worker-ami variable) to the new version provided by AWS for the EKS version you are upgrading to.
* In the new worker node file (eks-worker-node-upgrade-v2.tf), we have updated the kubelet extra arguments for a dedicated node (taint).
* Once you apply the new .tf file, the new nodes will spin up; after that, move the workloads to the new nodes and delete the old ones.
* Please refer to the eks-worker-node-upgrade-v2.tf file in this repository as a reference for upgrading the EKS cluster, and follow the steps below to upgrade the worker nodes; a minimal sketch of creating the new file follows this list.
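
A minimal sketch of this step, assuming the v1 file defines resources named eks-node-private-userdata, eks-private-lc, and eks-private-asg (the actual names in your v1 file may differ):

```
# Start the v2 worker file from the v1 file, then rename every identifier
# so it does not collide with the v1 definitions.
$ cp eks-worker-node-v1.tf eks-worker-node-upgrade-v2.tf
# Rename inside the new file, for example:
#   eks-node-private-userdata -> eks-node-private-userdata-v2
#   eks-private-lc            -> eks-private-lc-v2
#   eks-private-asg           -> eks-private-asg-v2
$ grep -n 'v2' eks-worker-node-upgrade-v2.tf   # verify every rename landed
```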

### Once you have created the new file, change the EKS master version in the .tf file and apply the changes:

```
$ terraform apply
```
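
It is worth running `terraform plan` first to confirm that only the new launch configuration and ASG will be added and nothing running will be destroyed (the exact counts depend on your setup):

```
$ terraform plan
# Expect a summary along the lines of: Plan: 2 to add, 0 to change, 0 to destroy.
```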
## Once the Terraform changes have been applied, `kubectl` will show the new nodes alongside the old nodes, each with its own version.

```
$ kubectl get no
NAME                          STATUS   ROLES    AGE   VERSION
ip-10-0-87-98.ec2.internal    Ready    <none>   21d   v1.12.7
ip-10-0-15-24.ec2.internal    Ready    <none>   21d   v1.12.7
ip-10-0-23-100.ec2.internal   Ready    <none>   21d   v1.13.7-eks-c57ff8
ip-10-0-14-23.ec2.internal    Ready    <none>   21d   v1.13.7-eks-c57ff8
```
### The next step is to update the kube-system components (e.g. kube-proxy, CoreDNS, the VPC CNI plugin) to versions compatible with the new cluster version, and to cordon the old nodes so nothing new is scheduled on them once you move the workloads.
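As one example of the kube-system step — the registry, region, and image tag below are hypothetical and depend on your target version, so check the AWS upgrade documentation for the right values — kube-proxy can be bumped with:

```
$ kubectl set image daemonset.apps/kube-proxy -n kube-system \
    kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.13.7
```

With kube-system updated, cordon each of the old nodes: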
```
$ kubectl cordon <old-node-name>
```
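
A quick way to cordon every node still on the old kubelet version (v1.12.x in the output above) in one shot — a sketch, assuming the VERSION column is the fifth field of `kubectl get nodes`:

```
$ kubectl get nodes --no-headers | awk '$5 ~ /^v1\.12/ {print $1}' | xargs -r kubectl cordon
```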
### Once you start draining the old nodes, the workloads will move to the new nodes.

```
$ kubectl drain <old-node-name>
```
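In practice the drain usually needs extra flags, since DaemonSet-managed pods cannot be evicted and pods using emptyDir volumes block eviction by default. With kubectl releases from the v1.13 era the invocation looks like this (newer releases renamed --delete-local-data to --delete-emptydir-data):

```
$ kubectl drain ip-10-0-87-98.ec2.internal --ignore-daemonsets --delete-local-data
```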
### Once draining is complete for all the old nodes, delete them.
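
A sketch of the cleanup, using the node names from the output above:

```
$ kubectl delete node ip-10-0-87-98.ec2.internal ip-10-0-15-24.ec2.internal
```

Finally, remove the old eks-worker-node-v1.tf file (or its resources) and run `terraform apply` again so the v1 launch configuration and ASG are destroyed and their EC2 instances terminated.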

## Contribution
We are happy to accept the changes that you think can help the utilities grow.
80 changes: 80 additions & 0 deletions eks-worker-node-upgrade-v2.tf
@@ -0,0 +1,80 @@
# EKS currently documents this required userdata for EKS worker nodes to
# properly configure Kubernetes applications on the EC2 instance.
# We utilize a Terraform local here to simplify Base64 encoding this
# information into the AutoScaling Launch Configuration.
# More information: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml

## Updated EKS-optimized AMIs support /etc/eks/bootstrap.sh
#### User data for worker launch

locals {
eks-node-private-userdata-v2 = <<USERDATA
#!/bin/bash -xe
sudo /etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.eks-cluster.endpoint}' --b64-cluster-ca '${aws_eks_cluster.eks-cluster.certificate_authority.0.data}' '${var.cluster-name}' \
--kubelet-extra-args "--node-labels=app=name --register-with-taints=app=name:NoExecute --kube-reserved cpu=500m,memory=1Gi,ephemeral-storage=1Gi --system-reserved cpu=500m,memory=1Gi,ephemeral-storage=1Gi --eviction-hard memory.available<500Mi,nodefs.available<10%"
USERDATA
}
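
# Note added for clarity: the --register-with-taints flag above means only
# pods that declare a matching toleration for app=name:NoExecute will be
# scheduled on (or survive on) these nodes; "app=name" looks like a
# placeholder key/value, so substitute your own dedicated-workload label.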

resource "aws_launch_configuration" "eks-private-lc-v2" {
iam_instance_profile = "${aws_iam_instance_profile.eks-node.name}"
image_id = "${var.eks-worker-ami}" ## update to the new version of the AMI -- visit https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html
instance_type = "${var.worker-node-instance_type}" # use instance variable
key_name = "${var.ssh_key_pair}"
name_prefix = "eks-private"
security_groups = ["${aws_security_group.eks-node.id}"]
user_data_base64 = "${base64encode(local.eks-node-private-userdata-v2)}"

root_block_device {
delete_on_termination = true
volume_size = "${var.volume_size}"
volume_type = "gp2"
}

lifecycle {
create_before_destroy = true
}
}
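
# Comment added for clarity: name_prefix (rather than a fixed name) together
# with create_before_destroy lets Terraform create a replacement launch
# configuration before deleting the old one, so the ASG is never left
# pointing at a deleted launch configuration during updates.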

resource "aws_autoscaling_group" "eks-private-asg-v2" {
desired_capacity = 1
launch_configuration = "${aws_launch_configuration.eks-private-lc-v2.id}"
max_size = 2
min_size = 1
name = "eks-private"
vpc_zone_identifier = ["${aws_subnet.eks-private.*.id}"]

tag {
key = "Name"
value = "eks-worker-private-node-v2"
propagate_at_launch = true
}

tag {
key = "kubernetes.io/cluster/${var.cluster-name}"
value = "owned"
propagate_at_launch = true
}

## Enable this when you use cluster autoscaler within cluster.
## https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md

# tag {
# key = "k8s.io/cluster-autoscaler/enabled"
# value = ""
# propagate_at_launch = true
# }
#
# tag {
# key = "k8s.io/cluster-autoscaler/${var.cluster-name}"
# value = ""
# propagate_at_launch = true
# }

}


# Adding EKS worker scaling policies for scale up/down
# Creating CloudWatch alarms for both scale up and scale down
## If required you can use a scale up/down policy; otherwise the cluster autoscaler will take care of scaling the nodes.
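#
# A minimal sketch (kept commented out; names and thresholds are hypothetical)
# of what such a policy and its alarm could look like:
#
# resource "aws_autoscaling_policy" "eks-scale-up" {
#   name                   = "eks-worker-scale-up"
#   scaling_adjustment     = 1
#   adjustment_type        = "ChangeInCapacity"
#   cooldown               = 300
#   autoscaling_group_name = "${aws_autoscaling_group.eks-private-asg-v2.name}"
# }
#
# resource "aws_cloudwatch_metric_alarm" "eks-cpu-high" {
#   alarm_name          = "eks-worker-cpu-high"
#   comparison_operator = "GreaterThanThreshold"
#   evaluation_periods  = 2
#   metric_name         = "CPUUtilization"
#   namespace           = "AWS/EC2"
#   period              = 300
#   statistic           = "Average"
#   threshold           = 80
#   dimensions {
#     AutoScalingGroupName = "${aws_autoscaling_group.eks-private-asg-v2.name}"
#   }
#   alarm_actions = ["${aws_autoscaling_policy.eks-scale-up.arn}"]
# }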
File renamed without changes.
3 changes: 3 additions & 0 deletions variables.tf
@@ -6,6 +6,9 @@
variable "cluster-name" {
variable "eks-worker-ami" {
description = "Please visit here - https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html and select your pre-baked AMI depending on the cluster version and the region you are planning to launch cluster into"
}
variable "volume_size" {
description = "Enter size of the volume"
}
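# Example value (hypothetical) -- in terraform.tfvars:
#   volume_size = 50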

# In EKS, the worker node instance type directly affects the number of pods that can run on a node. Choose wisely.
# https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
