Deploy a full AWS EKS cluster with Terraform

The following resources are created:
- VPC
- Internet Gateway (IGW)
- Public and Private Subnets
- Security Groups, Route Tables and Route Table Associations
- IAM roles, instance profiles and policies
- An EKS Cluster
- EKS Managed Node group
- Autoscaling group and Launch Configuration
- Worker Nodes in a private Subnet
- Bastion host for SSH access to the VPC
- The ConfigMap required to register Nodes with EKS
- A KUBECONFIG file to authenticate kubectl using the `aws eks get-token` command (requires awscli version 1.16.156 or later)
You can configure your deployment with the following input variables:
Name | Description | Default |
---|---|---|
cluster-name | The name of your EKS Cluster | eks-cluster |
aws-region | The AWS Region to deploy EKS | us-east-1 |
availability-zones | AWS Availability Zones | ["us-east-1a", "us-east-1b", "us-east-1c"] |
k8s-version | The desired K8s version to launch | 1.13 |
node-instance-type | Worker Node EC2 instance type | m4.large |
root-block-size | Size of the root EBS block device | 20 |
desired-capacity | Autoscaling Desired node capacity | 2 |
max-size | Autoscaling Maximum node capacity | 5 |
min-size | Autoscaling Minimum node capacity | 1 |
vpc-subnet-cidr | Subnet CIDR | 10.0.0.0/16 |
private-subnet-cidr | Private Subnet CIDR | ["10.0.0.0/19", "10.0.32.0/19", "10.0.64.0/19"] |
public-subnet-cidr | Public Subnet CIDR | ["10.0.128.0/20", "10.0.144.0/20", "10.0.160.0/20"] |
db-subnet-cidr | DB/Spare Subnet CIDR | ["10.0.192.0/21", "10.0.200.0/21", "10.0.208.0/21"] |
eks-cw-logging | EKS Logging Components | ["api", "audit", "authenticator", "controllerManager", "scheduler"] |
ec2-key-public-key | EC2 Key Pair for bastion and nodes | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 [email protected] |
You can create a file called terraform.tfvars, or copy variables.tf into the project root, if you would like to override the defaults.
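For example, a terraform.tfvars that overrides a few of the defaults might look like this (the values are taken from the module example below and are purely illustrative):

```hcl
cluster-name       = "my-cluster"
aws-region         = "us-east-1"
k8s-version        = "1.17"
node-instance-type = "t3.medium"
desired-capacity   = "3"
```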
NOTE on versions: the versions of this module are compatible with the following Terraform releases. Please use the correct version for your use case:

Module version | Terraform version |
---|---|
>= 3.0.0 | >= 0.13.x |
2.0.0 | <= 0.12.x |
1.0.4 | <= 0.11.x |
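For example, on Terraform 0.13.x or later you might pin both the Terraform and module versions (a sketch; adjust the constraints to your use case):

```hcl
terraform {
  required_version = ">= 0.13"
}

module "eks" {
  source  = "WesleyCharlesBlake/eks/aws"
  version = ">= 3.0.0"

  # ... input variables as shown in the examples below
}
```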
Have a look at the examples for complete references. You can use this module from the Terraform Registry as a remote source:
module "eks" {
source = "WesleyCharlesBlake/eks/aws"
aws-region = "us-east-1"
availability-zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
cluster-name = "my-cluster"
k8s-version = "1.17"
node-instance-type = "t3.medium"
root-block-size = "40"
desired-capacity = "3"
max-size = "5"
min-size = "1"
vpc-subnet-cidr = "10.0.0.0/16"
private-subnet-cidr = ["10.0.0.0/19", "10.0.32.0/19", "10.0.64.0/19"]
public-subnet-cidr = ["10.0.128.0/20", "10.0.144.0/20", "10.0.160.0/20"]
db-subnet-cidr = ["10.0.192.0/21", "10.0.200.0/21", "10.0.208.0/21"]
eks-cw-logging = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
ec2-key-public-key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 [email protected]"
}
output "kubeconfig" {
value = module.eks.kubeconfig
}
output "config-map" {
value = module.eks.config-map-aws-auth
}
Or by using variables.tf or a tfvars file:
module "eks" {
source = "WesleyCharlesBlake/eks/aws"
aws-region = var.aws-region
availability-zones = var.availability-zones
cluster-name = var.cluster-name
k8s-version = var.k8s-version
node-instance-type = var.node-instance-type
root-block-size = var.root-block-size
desired-capacity = var.desired-capacity
max-size = var.max-size
min-size = var.min-size
vpc-subnet-cidr = var.vpc-subnet-cidr
private-subnet-cidr = var.private-subnet-cidr
public-subnet-cidr = var.public-subnet-cidr
db-subnet-cidr = var.db-subnet-cidr
eks-cw-logging = var.eks-cw-logging
ec2-key-public-key = var.ec2-key
}
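If you take this approach, you will also need matching variable declarations in your variables.tf. A partial sketch, with types inferred from the defaults in the table above:

```hcl
variable "aws-region" {
  type    = string
  default = "us-east-1"
}

variable "availability-zones" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

variable "cluster-name" {
  type    = string
  default = "eks-cluster"
}

# ... declare the remaining inputs (k8s-version, node sizing, CIDRs, logging, key) the same way
```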
The AWS credentials must be associated with a user that has at least the following AWS managed IAM policies:
- IAMFullAccess
- AutoScalingFullAccess
- AmazonEKSClusterPolicy
- AmazonEKSWorkerNodePolicy
- AmazonVPCFullAccess
- AmazonEKSServicePolicy
- AmazonEKS_CNI_Policy
- AmazonEC2FullAccess
In addition, you will need to create the following managed policy:
EKS
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:*"
      ],
      "Resource": "*"
    }
  ]
}
```
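One way to create and attach this policy is with the AWS CLI; a sketch, assuming the JSON above is saved as eks-policy.json and the policy is named EKS (the user name and account ID are placeholders):

```bash
# create the customer managed policy from the JSON document above
aws iam create-policy \
  --policy-name EKS \
  --policy-document file://eks-policy.json

# attach it to the IAM user that will run Terraform
aws iam attach-user-policy \
  --user-name <your-iam-user> \
  --policy-arn arn:aws:iam::<account-id>:policy/EKS
```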
You need to run the following commands to create the resources with Terraform:
```bash
terraform init
terraform plan
terraform apply
```
TIP: you should save the plan to a file:

```bash
terraform plan -out eks-state
```
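The saved plan can then be applied exactly as it was reviewed:

```bash
terraform apply eks-state
```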
Or, better yet, set up remote storage for the Terraform state. You can store state in an S3 backend, with locking via DynamoDB.
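A minimal backend configuration might look like the sketch below; the bucket and DynamoDB table names are placeholders and must be created beforehand (the lock table needs a LockID partition key):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder bucket name
    key            = "eks/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # placeholder lock table name
    encrypt        = true
  }
}
```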
Set up your KUBECONFIG:
```bash
terraform output kubeconfig > ~/.kube/eks-cluster
export KUBECONFIG=~/.kube/eks-cluster
```
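You can then confirm that kubectl can reach the cluster, for example:

```bash
kubectl get nodes
kubectl get pods --all-namespaces
```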
Initially, only the identity that deployed the cluster will be able to access it. To authorize other users, the aws-auth ConfigMap needs to be modified using the steps below:
- Open the aws-auth ConfigMap for editing on the machine that was used to deploy the EKS cluster:
```bash
sudo kubectl edit -n kube-system configmap/aws-auth
```
- Add the following configuration to that file, replacing the placeholders:
```yaml
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/<username>
    username: <username>
    groups:
      - system:masters
```
So, the final configuration would look like this:
```yaml
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/<username>
      username: <username>
      groups:
        - system:masters
```
- Once the user mapping has been added to the configuration, create a cluster role binding for that user:
```bash
kubectl create clusterrolebinding ops-user-cluster-admin-binding-<username> --clusterrole=cluster-admin --user=<username>
```
Replace the `<username>` placeholders with the proper value.
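For example, for a hypothetical IAM user named jane:

```bash
kubectl create clusterrolebinding ops-user-cluster-admin-binding-jane \
  --clusterrole=cluster-admin \
  --user=jane
```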
You can destroy this cluster entirely by running:
```bash
terraform plan -destroy
terraform destroy --force
```