This repository showcases a Terraform template that helps you create an EKS cluster on AWS.
Note - The architecture above doesn't reflect every component created by this template, but it does give an idea of the core infrastructure that will be created.
- Creates a new VPC with CIDR block 10.15.0.0/19 (8192 addresses) in a region of your choice. Feel free to change it; the values are in variables.tf. (A sketch of these resources appears after this list.)
- Creates 3 public and 3 private subnets, each with 1024 IP addresses, one per availability zone.
- Creates the security groups required for the cluster and worker nodes.
- Creates the recommended IAM service and EC2 roles required for the EKS cluster.
- Creates the Internet and NAT gateways required for public and private communication.
- Creates route tables and routes for the public and private subnets.
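For illustration, here is a minimal sketch of what the core networking resources could look like. The resource names, tags, and data source below are assumptions, not the exact definitions used in this repository; treat the actual .tf files as authoritative.

```hcl
# Illustrative sketch only -- resource names and tags are assumptions,
# not the exact definitions used in this repository's .tf files.
data "aws_availability_zones" "available" {}

resource "aws_vpc" "eks" {
  cidr_block = "10.15.0.0/19" # 8192 addresses

  tags = {
    Name = "eks-vpc"
  }
}

# One /22 (1024 addresses) public subnet per availability zone;
# the private subnets follow the same pattern with a different offset.
resource "aws_subnet" "public" {
  count                   = 3
  vpc_id                  = aws_vpc.eks.id
  cidr_block              = cidrsubnet(aws_vpc.eks.cidr_block, 3, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.eks.id
}
```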
Before you execute this template, make sure the following dependencies are met.
- Install Terraform
- Configure the AWS CLI - make sure you configure the AWS CLI with admin privileges
- AWS IAM Authenticator - Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM Authenticator for Kubernetes.
$ git clone https://github.com/thelman/terraform-aws-eks-cluster.git
$ cd terraform-aws-eks-cluster
$ terraform init
The terraform plan command creates an execution plan. It is always good practice to run it before terraform apply so you can see which resources will be created.
This will prompt you for the cluster name, region, SSH key pair, and worker node instance type.
IMPORTANT: I modified several files from the original and defined default values for all variables. If you need to change them, please see variables.tf.
Note: If you use the KodeKloud playground, you don't have permission for Auto Scaling; with the changed defaults, the configuration creates three workers as plain EC2 instances.
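As a rough illustration of those defaults, variables.tf contains entries along these lines; the exact descriptions and default values in the repository may differ.

```hcl
# Illustrative variables.tf entries; the actual defaults may differ.
variable "cluster-name" {
  description = "Enter eks cluster name - example like eks-demo, eks-dev etc"
  type        = string
  default     = "eks-demo"
}

variable "region" {
  description = "Enter region you want to create EKS cluster in"
  type        = string
  default     = "us-east-1"
}

variable "ssh_key_pair" {
  description = "Enter SSH keypair name that already exist in the account"
  type        = string
  default     = "eks-keypair"
}

variable "worker-node-instance_type" {
  description = "Enter worker node instance type"
  type        = string
  default     = "t2.medium"
}
```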
$ terraform plan
var.cluster-name
Enter eks cluster name - example like eks-demo, eks-dev etc
Enter a value: eks-demo
var.region
Enter region you want to create EKS cluster in
Enter a value: us-east-1
var.ssh_key_pair
Enter SSH keypair name that already exist in the account
Enter a value: eks-keypair
var.worker-node-instance_type
Enter worker node instance type
Enter a value: t2.medium
$ terraform apply
$ aws eks --region <AWS-REGION> update-kubeconfig --name <CLUSTER-NAME>
Note:- If the AWS CLI and AWS IAM Authenticator are set up correctly, the above command will write a kubeconfig file to ~/.kube/config on your system.
$ kubectl get svc
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 1m
Once the cluster is verified successfully, it's time to create a ConfigMap that adds the worker nodes to the cluster. This template is configured with an output that produces the ConfigMap content, which you paste into aws-auth.yaml.
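The output is along these lines (a sketch following the common EKS getting-started pattern; the IAM role resource name here is an assumption about this template):

```hcl
# Sketch of an output that renders the aws-auth ConfigMap content.
# The role reference (aws_iam_role.worker-node) is an assumed name --
# match it to the worker node role defined in this template.
locals {
  config_map_aws_auth = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.worker-node.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH
}

output "config_map_aws_auth" {
  value = local.config_map_aws_auth
}
```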
$ kubectl apply -f aws-auth.yaml
$ kubectl get no -w
Note:- You should see the nodes joining the cluster within a few minutes.
IMPORTANT: I changed the extension of eks-worker-node-upgrade-v2.tf to .txt, because it is not usable with the KodeKloud permissions.
To upgrade the EKS cluster, create a copy of eks-worker-node-v1.tf with a different name and make the changes below.
- Change the userdata name to the new version (as in eks-worker-node-upgrade-v2.tf); it must not conflict with the old one.
- Change the launch configuration and Auto Scaling group names to the new version; they must not conflict with the old ones.
- Change the AMI to the one AWS provides for the EKS version you are upgrading to (##eks-worker-ami -- change to the new version).
- In the new worker node file (eks-worker-node-upgrade-v2.tf), we have updated the extra arguments for dedicated nodes (taints).
- Once you apply the new .tf file, the new nodes will spin up; after that, move your workloads to the new nodes and delete the old ones.
- Please refer to the eks-worker-node-upgrade-v2.tf file (and the sketch after this list) for reference, and follow the steps below to upgrade the worker nodes.
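Here is a hypothetical sketch of what the v2 resources could look like. Every resource name, the AMI ID, and the userdata local are placeholders, not the actual contents of eks-worker-node-upgrade-v2.tf; align them with that file.

```hcl
# Hypothetical v2 worker resources -- all names, the AMI ID, and the
# userdata local are placeholders; align them with eks-worker-node-upgrade-v2.tf.
resource "aws_launch_configuration" "worker-v2" {
  name_prefix          = "${var.cluster-name}-worker-v2-" # must not clash with the v1 name
  image_id             = "ami-xxxxxxxxxxxxxxxxx"          # ##eks-worker-ami for the new EKS version
  instance_type        = var.worker-node-instance_type
  iam_instance_profile = aws_iam_instance_profile.worker-node.name
  security_groups      = [aws_security_group.worker-node.id]
  # Userdata bootstraps the node; extra kubelet args add the dedicated taint, e.g.
  # --kubelet-extra-args '--register-with-taints=dedicated=true:NoSchedule'
  user_data_base64     = base64encode(local.worker-node-userdata-v2)

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "worker-v2" {
  name                 = "${var.cluster-name}-worker-v2" # new, non-conflicting name
  launch_configuration = aws_launch_configuration.worker-v2.id
  min_size             = 3
  max_size             = 3
  desired_capacity     = 3
  vpc_zone_identifier  = aws_subnet.private[*].id

  tag {
    key                 = "kubernetes.io/cluster/${var.cluster-name}"
    value               = "owned"
    propagate_at_launch = true
  }
}
```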
$ terraform apply
Once Terraform has applied the changes, you will see the new nodes alongside the old nodes, with different versions.
$ kubectl get no
NAME STATUS ROLES AGE VERSION
ip-10-0-87-98.ec2.internal Ready <none> 21d v1.12.7
ip-10-0-15-24.ec2.internal Ready <none> 21d v1.12.7
ip-10-0-23-100.ec2.internal Ready <none> 21d v1.13.7-eks-c57ff8
ip-10-0-14-23.ec2.internal Ready <none> 21d v1.13.7-eks-c57ff8
The next step is to update the kube-system components based on version compatibility, then cordon the old nodes (so that nothing new is scheduled on them once you move the workloads):
$ kubectl cordon <old-node-name>
$ kubectl drain <old-node-name> --ignore-daemonsets
We are happy to accept any changes that you think can help this utility grow.
Here are some things to note:
- Raise a ticket for any requirement
- Discuss the implementation requirement or bug fix with the team members
- Fork the repository and solve the issue in one single commit
- Raise a PR regarding the same issue and attach the required documentation or provide a more detailed overview of the changes