This repository has been archived by the owner on Dec 17, 2024. It is now read-only.

Merge pull request #10 from aws-samples/2.0.0
2.0.0
couchgott authored Oct 4, 2022
2 parents b22a548 + a121840 commit 7a9beda
Showing 76 changed files with 300 additions and 28,763 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,6 +1,7 @@
secrets/*
vars/eksexample_*
vars/*/eksexample_*
vars/static/custom_definitions.yaml
ansible.log
.vscode
.DS_Store
50 changes: 16 additions & 34 deletions README.MD
@@ -94,7 +94,7 @@ The Deployment consists of one main playbook triggering multiple tasks, cloudfor…
- eks-cluster-autoscaler.task.yaml: setup of the [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
- eks-container-instights.task.yaml: enable [container insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html) for the Amazon EKS cluster
- eks-external-dns.task.yaml: setup of the Route53 automation via [external-dns](https://github.com/kubernetes-sigs/external-dns)
- eks-ingress-controller.task.yaml: setup of the [alb-ingress-controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) to automate service exposure
- eks-ingress-controller.task.yaml: setup of the [aws-load-balancer-controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller) to automate service exposure
- eks-metrics-server.task.yaml: setup of the metrics server used by the [Horizontal Pod Autoscaler](https://kubernetes.io/de/docs/tasks/run-application/horizontal-pod-autoscale/)
- eks-storage-provider-ebscsi.task.yaml: setup of the [Amazon Elastic Block Store (EBS) CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver). The driver will ensure automatic provisioning of persistent block storage volumes for workloads
- eks-storage-provider-efscsi.task.yaml: setup of the [Amazon EFS CSI driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver). The driver will ensure automatic provisioning of persistent shared storage volumes for workloads
@@ -105,7 +105,7 @@ The Deployment consists of one main playbook triggering multiple tasks, cloudfor…
- eks-cluster-autoscaler-iam.template.yaml: provisioning of the IAM Policy granting access for the cluster autoscaler to Amazon EC2 and EC2 Autoscaling groups.
- eks-container-insights-iam.template.yaml: provisioning of the IAM Policy allowing Amazon Cloudwatch Access via the Worker Nodes
- eks-external-dns-iam.template.yaml: provisioning of the IAM Policy granting access for the external-dns pods to Route53
- eks-ingress-controller-iam.template.yaml: provisioning of the IAM Policy granting access for the alb-ingress-controller towards Elastic Load Balancing
- eks-ingress-controller-iam.template.yaml: provisioning of the IAM Policy granting access for the aws-load-balancer-controller towards Elastic Load Balancing
- eks-storage-provider-ebscsi-iam.template.yaml: IAM Policies to Allow EBS Access via the CSI Driver Deployment
- eks-storage-provider-efscsi-storage.template.yaml: provisioning of the EFS FileSystem, Mountpoints and related Securitygroups
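Before the playbook deploys any of these templates, a quick local sanity check is possible with CloudFormation's validate-template call. This is a minimal sketch, not part of the repository; the path assumes the `cloudformation/` directory layout shown in the file headers of this diff:
```
# Validate one of the IAM templates locally before it is deployed
# (file path is an assumption based on the repository layout in this diff)
aws cloudformation validate-template \
  --template-body file://cloudformation/eks-ingress-controller-iam.template.yaml
```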

@@ -146,7 +146,7 @@ kubectl get pod -o=wide -n kube-system
which should show something like:
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
alb-ingress-controller-7568799df8-pnch4 1/1 Running 0 7m30s 192.168.45.158 ip-192-168-43-140.eu-central-1.compute.internal <none> <none>
aws-load-balancer-controller-7568799df8-pnch4 1/1 Running 0 7m30s 192.168.45.158 ip-192-168-43-140.eu-central-1.compute.internal <none> <none>
aws-node-7vg6h 1/1 Running 0 17m 192.168.93.136 ip-192-168-93-136.eu-central-1.compute.internal <none> <none>
aws-node-scl29 1/1 Running 0 17m 192.168.43.140 ip-192-168-43-140.eu-central-1.compute.internal <none> <none>
cluster-autoscaler-7884f5ff6d-k6vpw 1/1 Running 0 8m44s 192.168.68.113 ip-192-168-93-136.eu-central-1.compute.internal <none> <none>
@@ -172,23 +172,23 @@ if everything is cool it should look like:
AWS ALB Ingress controller
Release: v1.1.8
Build: git-ec387ad1
Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
Repository: https://github.com/kubernetes-sigs/aws-load-balancer-controller.git
-------------------------------------------------------------------------------
W0813 14:32:48.050307 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0813 14:32:48.096117 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null}}}
I0813 14:32:48.096518 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"loadBalancer":{}}}}
I0813 14:32:48.096622 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="alb-ingress-controller" "source"=
I0813 14:32:48.096910 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"loadBalancer":{}}}}
I0813 14:32:48.096963 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="alb-ingress-controller" "source"=
I0813 14:32:48.097188 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null}}}
I0813 14:32:48.098011 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"machineID":"","systemUUID":"","bootID":"","kernelVersion":"","osImage":"","containerRuntimeVersion":"","kubeletVersion":"","kubeProxyVersion":"","operatingSystem":"","architecture":""}}}}
I0813 14:32:48.103658 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"containers":null},"status":{}}}
I0813 14:32:48.096117 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="aws-load-balancer-controller" "source"={"Type":{"metadata":{"creationTimestamp":null}}}
I0813 14:32:48.096518 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="aws-load-balancer-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"loadBalancer":{}}}}
I0813 14:32:48.096622 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="aws-load-balancer-controller" "source"=
I0813 14:32:48.096910 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="aws-load-balancer-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"loadBalancer":{}}}}
I0813 14:32:48.096963 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="aws-load-balancer-controller" "source"=
I0813 14:32:48.097188 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="aws-load-balancer-controller" "source"={"Type":{"metadata":{"creationTimestamp":null}}}
I0813 14:32:48.098011 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="aws-load-balancer-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"machineID":"","systemUUID":"","bootID":"","kernelVersion":"","osImage":"","containerRuntimeVersion":"","kubeletVersion":"","kubeProxyVersion":"","operatingSystem":"","architecture":""}}}}
I0813 14:32:48.103658 1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="aws-load-balancer-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"containers":null},"status":{}}}
I0813 14:32:48.105447 1 leaderelection.go:205] attempting to acquire leader lease kube-system/ingress-controller-leader-alb...
I0813 14:32:48.119414 1 leaderelection.go:214] successfully acquired lease kube-system/ingress-controller-leader-alb
I0813 14:32:48.119775 1 recorder.go:53] kubebuilder/manager/events "level"=1 "msg"="Normal" "message"="alb-ingress-controller-7568799df8-pnch4_dc09d9d6-dd71-11ea-a82f-6e94ec7ac6f2 became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"ingress-controller-leader-alb","uid":"5e3275ce-3936-411a-9de5-4503c2223c8b","apiVersion":"v1","resourceVersion":"3156"} "reason"="LeaderElection"
I0813 14:32:48.222253 1 controller.go:134] kubebuilder/controller "level"=0 "msg"="Starting Controller" "controller"="alb-ingress-controller"
I0813 14:32:48.322547 1 controller.go:154] kubebuilder/controller "level"=0 "msg"="Starting workers" "controller"="alb-ingress-controller" "worker count"=1
I0813 14:32:48.119775 1 recorder.go:53] kubebuilder/manager/events "level"=1 "msg"="Normal" "message"="aws-load-balancer-controller-7568799df8-pnch4_dc09d9d6-dd71-11ea-a82f-6e94ec7ac6f2 became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"ingress-controller-leader-alb","uid":"5e3275ce-3936-411a-9de5-4503c2223c8b","apiVersion":"v1","resourceVersion":"3156"} "reason"="LeaderElection"
I0813 14:32:48.222253 1 controller.go:134] kubebuilder/controller "level"=0 "msg"="Starting Controller" "controller"="aws-load-balancer-controller"
I0813 14:32:48.322547 1 controller.go:154] kubebuilder/controller "level"=0 "msg"="Starting workers" "controller"="aws-load-balancer-controller" "worker count"=1
```
If you replace *alb-ingress* with *external-dns* or *cluster-autoscaler*, you can use the same command to get the logs of these extensions as well.
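As a rough illustration of that pattern (the exact command the README refers to sits outside this hunk; the deployment names and the `kube-system` namespace are assumptions based on the pod listing above):
```
# Hypothetical sketch — fetch logs of the individual add-ons
kubectl logs -n kube-system deployment/aws-load-balancer-controller
kubectl logs -n kube-system deployment/external-dns
kubectl logs -n kube-system deployment/cluster-autoscaler
```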

@@ -217,24 +217,6 @@ you can also use, for instance, curl as a load generator
```
watch -n 0.1 curl -v https://eksdemo.example.com
```

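While the load generator runs, one way to watch the Horizontal Pod Autoscaler react is a sketch like the following; the `eksdemo` namespace is an assumption taken from the example output further below:
```
# Observe HPA decisions and pod counts while curl generates load
kubectl get hpa -n eksdemo -w
kubectl get pods -n eksdemo -w
```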
The example microservice utilizes Fargate for the frontend. If you want to check whether AWS Fargate is actually used, run the following command:
```
kubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n eksdemo
```

which should give you something like:
```
NODE NAME
ip-192-168-93-136.eu-central-1.compute.internal eksdemo-crystal-64779997f9-fp4s6
ip-192-168-43-140.eu-central-1.compute.internal eksdemo-crystal-64779997f9-rkxqk
fargate-ip-192-168-158-210.eu-central-1.compute.internal eksdemo-frontend-6fbb54ff4c-4kvh5
fargate-ip-192-168-118-143.eu-central-1.compute.internal eksdemo-frontend-6fbb54ff4c-8fck5
fargate-ip-192-168-134-116.eu-central-1.compute.internal eksdemo-frontend-6fbb54ff4c-dzjqq
fargate-ip-192-168-101-17.eu-central-1.compute.internal eksdemo-frontend-6fbb54ff4c-wnwtv
ip-192-168-43-140.eu-central-1.compute.internal eksdemo-nodejs-5b4b4889c8-kl5cp
ip-192-168-93-136.eu-central-1.compute.internal eksdemo-nodejs-5b4b4889c8-t6pc6
```
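To cross-check which Fargate profiles the cluster exposes, eksctl offers an alternative view. The cluster name is a placeholder; the region is taken from the node names in the output above:
```
# List Fargate profiles attached to the cluster (cluster name is an assumption)
eksctl get fargateprofile --cluster <your-cluster-name> --region eu-central-1
```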

There are many other things to try out. Feel free to share your ideas :)

---
@@ -257,7 +239,7 @@ There are many other things to try out. Feel free to share your ideas :)
- [eksctl](https://github.com/weaveworks/eksctl)
- [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
- [external-dns](https://github.com/kubernetes-sigs/external-dns)
- [alb-ingress-controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller)
- [aws-load-balancer-controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller)
- [Horizontal Pod Autoscaler](https://kubernetes.io/de/docs/tasks/run-application/horizontal-pod-autoscale/)
- [Amazon EFS CSI driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver)
- [Amazon Elastic Block Store (EBS) CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver)
1 change: 1 addition & 0 deletions ansible.cfg
@@ -19,6 +19,7 @@ stdout_callback = skippy
log_path = ./ansible.log
become = false
interpreter_python = auto
executable = /bin/bash

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ServerAliveInterval=10 -o IdentitiesOnly=yes
16 changes: 10 additions & 6 deletions cloudformation/eks-bastion.template.yaml
@@ -90,24 +90,28 @@ Resources:

yum update -y
yum -y remove aws-cli
yum -y install sqlite telnet jq strace tree gcc glibc-static python python-pip gettext bash-completion

pip install -U awscli ansible botocore boto boto3 openshift
yum -y install sqlite telnet jq strace tree gcc glibc-static gettext bash-completion

pip3 install -U awscli ansible botocore boto boto3 openshift pyyaml
update-alternatives --install /usr/bin/python python /usr/bin/python3 1

# install helm on bastion
curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

# install ansible kubernetes community collection
ansible-galaxy collection install community.kubernetes
/usr/local/bin/ansible-galaxy collection install kubernetes.core

cd /tmp

# eksctl install
curl -sSL "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
mv /tmp/eksctl /usr/local/bin
chmod +x ./eksctl
mv ./eksctl /usr/bin/eksctl

# kubectl install
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
mv ./kubectl /usr/bin/kubectl

Outputs:
EKSBastionInstanceDNSName:
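The revised bastion bootstrap above switches from pip to pip3, drops the Python 2 packages, installs Helm and the kubernetes.core collection, and moves eksctl and kubectl to /usr/bin. A minimal post-boot check of that tooling could look like this sketch (paths follow the script above; this is not part of the template itself):
```
# Verify the tools installed by the bastion UserData after first boot
eksctl version
kubectl version --client
helm version
/usr/local/bin/ansible --version
/usr/local/bin/ansible-galaxy collection list | grep kubernetes.core
```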
@@ -12,20 +12,30 @@ Resources:
Properties:
ManagedPolicyName: EKSloadbalancerControllerPolicy
PolicyDocument:
Version: 2012-10-17
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- iam:CreateServiceLinkedRole
Resource: "*"
Condition:
StringEquals:
iam:AWSServiceName: elasticloadbalancing.amazonaws.com
- Effect: Allow
Action:
- ec2:DescribeAccountAttributes
- ec2:DescribeAddresses
- ec2:DescribeAvailabilityZones
- ec2:DescribeInternetGateways
- ec2:DescribeVpcs
- ec2:DescribeVpcPeeringConnections
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
- ec2:DescribeInstances
- ec2:DescribeNetworkInterfaces
- ec2:DescribeTags
- ec2:GetCoipPoolUsage
- ec2:DescribeCoipPools
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeListeners
@@ -120,6 +130,15 @@ Resources:
'Null':
aws:RequestTag/elbv2.k8s.aws/cluster: 'true'
aws:ResourceTag/elbv2.k8s.aws/cluster: 'false'
- Effect: Allow
Action:
- elasticloadbalancing:AddTags
- elasticloadbalancing:RemoveTags
Resource:
- arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*
- arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*
- arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*
- arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*
- Effect: Allow
Action:
- elasticloadbalancing:ModifyLoadBalancerAttributes
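The additions above (iam:CreateServiceLinkedRole with its service condition, the extra ec2:Describe* calls, and the AddTags/RemoveTags statements scoped to listener and listener-rule ARNs) appear to track the upstream aws-load-balancer-controller IAM policy. A hedged way to inspect the resulting managed policy after stack creation:
```
# Look up the managed policy created by this template
aws iam list-policies --scope Local \
  --query "Policies[?PolicyName=='EKSloadbalancerControllerPolicy']"
```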
18 changes: 0 additions & 18 deletions cloudformation/eks-storage-provider-ebscsi-iam.template.yaml
@@ -28,22 +28,4 @@ Resources:
- ec2:DescribeTags
- ec2:DescribeVolumes
- ec2:DetachVolume
Resource: '*'

EKSStorageProviderEBSSnapshotPolicy:
Type: 'AWS::IAM::ManagedPolicy'
Properties:
ManagedPolicyName: EKSStorageProviderEBSSnapshotPolicy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- ec2:CreateSnapshot
- ec2:CreateTags
- ec2:DeleteSnapshot
- ec2:DeleteTags
- ec2:DescribeInstances
- ec2:DescribeSnapshots
- ec2:DescribeVolumes
Resource: '*'
36 changes: 36 additions & 0 deletions cloudformation/eks-storage-provider-efscsi-iam.template.yaml
@@ -0,0 +1,36 @@
#############################################################
## NOT FOR PRODUCTION USE. ##
## THE CONTENT OF THIS FILE IS FOR LEARNING PURPOSES ONLY ##
## created by David Surey, Amazon Web Services, 2020 ##
#############################################################

AWSTemplateFormatVersion: "2010-09-09"

Resources:
EKSStorageProviderEFSCsiPolicy:
Type: 'AWS::IAM::ManagedPolicy'
Properties:
ManagedPolicyName: EKSStorageProviderEFSCsiPolicy
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- elasticfilesystem:DescribeAccessPoints
- elasticfilesystem:DescribeFileSystems
- elasticfilesystem:DescribeMountTargets
- ec2:DescribeAvailabilityZones
Resource: "*"
- Effect: Allow
Action:
- elasticfilesystem:CreateAccessPoint
Resource: "*"
Condition:
StringLike:
aws:RequestTag/efs.csi.aws.com/cluster: 'true'
- Effect: Allow
Action: elasticfilesystem:DeleteAccessPoint
Resource: "*"
Condition:
StringEquals:
aws:ResourceTag/efs.csi.aws.com/cluster: 'true'
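The new template above provides the IAM policy the EFS CSI driver needs. If you wanted to deploy it on its own, outside the playbook, a sketch of the CLI call might look like this; the stack name is a placeholder, and CAPABILITY_NAMED_IAM is required because the template sets ManagedPolicyName:
```
# Deploy the EFS CSI IAM policy template by itself (stack name is an assumption)
aws cloudformation deploy \
  --template-file cloudformation/eks-storage-provider-efscsi-iam.template.yaml \
  --stack-name eks-efscsi-iam \
  --capabilities CAPABILITY_NAMED_IAM
```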
6 changes: 3 additions & 3 deletions docs/examples/deploy-examples.playbook.yaml
@@ -9,15 +9,15 @@
gather_facts: no

vars:
ansible_ssh_private_key_file: "../../secrets/id_rsa_eks"
ansible_ssh_private_key_file: "./secrets/id_rsa_eks"
ansible_user: ec2-user

tasks:
- name: check ansible version
when: (ansible_version.major == 2 and ansible_version.minor < 8 ) or (ansible_version.major < 2)
when: (ansible_version.major == 2 and ansible_version.minor < 10 ) or (ansible_version.major < 2)
run_once: yes
fail:
msg: Please use Ansible 2.8 or newer
msg: Please use Ansible 2.10 or newer

- name: import static var data
include_vars:
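The example playbook now requires Ansible 2.10 or newer and reads the SSH key from ./secrets/id_rsa_eks relative to the working directory. A hedged invocation, assuming an inventory file that lists the bastion host:
```
# Run the examples playbook from the repository root
# (inventory file name is an assumption)
ansible-playbook -i inventory docs/examples/deploy-examples.playbook.yaml
```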