The developers at Mystique Unicorn are interested in building their application using an event-driven architectural pattern to process streaming data. For those who are unfamiliar, an event-driven architecture uses events to trigger and communicate between decoupled services and is common in modern applications built with microservices. An event is a change in state, or an update, like an item being placed in a shopping cart on an e-commerce website.
In this application, Kubernetes has been chosen as the platform to host their application producing and consuming events. The producers and consumers are maintained by different teams. They would like to isolate the traffic and have the ability to allow only the necessary traffic. Can you help them achieve this?
By default, network traffic in a Kubernetes cluster can flow freely between pods and also leave the cluster network altogether. In an EKS cluster, because pods share their node's EC2 security groups, the pods can make any network connection that the nodes can. Creating restrictions to allow only necessary service-to-service and cluster egress connections decreases the number of potential targets for malicious or misconfigured pods and limits their ability to exploit the cluster resources.
Kubernetes Network policies1 can be used to specify how groups of pods are allowed to communicate with each other and with external network endpoints. They can be thought of as the Kubernetes equivalent of a firewall. Each network policy specifies a list of allowed (ingress and egress) connections. When the network policy is created, all the pods that it applies to are allowed to make or accept the connections listed in it. In other words, a network policy is essentially a list of allowed connections: a connection to or from a pod is allowed if it is permitted by at least one of the network policies that apply to the pod. However, remember that if NO network policies apply to a pod, then ALL network connections to and from it are permitted.
Another thing to note is that network policies are namespaced resources and only affect the pods that belong to that namespace. You will need to use a network plugin that actually enforces network policies. Although Kubernetes always supports operations on the NetworkPolicy resource, simply creating the resource without a plugin will have no effect. Project Calico2, 3 is a network policy engine for Kubernetes. With Calico network policy enforcement, you can implement network segmentation and tenant isolation.
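To make that default-allow behaviour concrete: a common baseline is a "default deny" policy that selects every pod in a namespace, so that only connections explicitly allowed by other policies get through. A minimal sketch (the namespace name here is illustrative, not part of this demo):

```yaml
# Baseline "default deny" for a namespace: the empty podSelector
# matches every pod, and listing no ingress/egress rules means
# no connections are allowed unless another policy permits them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace   # illustrative namespace
spec:
  podSelector: {}           # selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```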
In this blog, I will show how to deploy simple network policies to allow and deny access to pods on Amazon EKS using Calico.
This demo, instructions, scripts and cloudformation template is designed to be run in `us-east-1`. With few modifications you can try it out in other regions as well (not covered here).

- AWS CLI Installed & Configured - Get help here
- AWS CDK Installed & Configured - Get help here
- Python Packages, change the below commands to suit your OS; the following is written for amzn linux 2
  - Python3 - `yum install -y python3`
  - Python Pip - `yum install -y python-pip`
  - Virtualenv - `pip3 install virtualenv`
Get the application code

```bash
git clone https://github.com/miztiik/eks-security-with-network-policies
cd eks-security-with-network-policies
```
We will use `cdk` to make our deployments easier. Let's go ahead and install the necessary components.

```bash
# You should have npm pre-installed
# If you DONT have cdk installed
npm install -g aws-cdk

# Make sure you are in the root directory
python3 -m venv .venv
source .venv/bin/activate
pip3 install -r requirements.txt
```
The very first time you deploy an AWS CDK app into an environment (account/region), you'll need to install a bootstrap stack. Otherwise just go ahead and deploy using `cdk deploy`.

```bash
cdk bootstrap
cdk ls
# Follow on-screen prompts
```
You should see an output of the available stacks,

```bash
eks-cluster-vpc-stack
eks-cluster-stack
ssm-agent-installer-daemonset-stack
```
Let us walk through each of the stacks,
Stack: eks-cluster-vpc-stack

To host our EKS cluster we need a custom VPC. This stack will build a multi-az VPC with the following attributes,

- VPC:
  - 2-AZ subnets with public, private and isolated subnets
  - 1 NAT GW for internet access from private subnets
Initiate the deployment with the following command,

```bash
cdk deploy eks-cluster-vpc-stack
```

After successfully deploying the stack, check the `Outputs` section of the stack.
Stack: eks-cluster-stack

As we are starting out with a new cluster, we will use mostly defaults. No logging is configured, nor any add-ons. The cluster will have the following attributes,

- The control plane is launched with public access, i.e. the cluster can be accessed without a bastion host
- `c_admin` IAM role added to the `aws-auth` configMap to administer the cluster from the CLI
- One OnDemand managed EC2 node group created from a launch template
  - It creates two `t3.medium` instances running `Amazon Linux 2`
  - Auto-scaling group with `2` desired instances
  - The nodes will have a node role attached to them with `AmazonSSMManagedInstanceCore` permissions
  - Kubernetes label `app:miztiik_on_demand_ng`
The EKS cluster will be created in the custom VPC created earlier. Initiate the deployment with the following command,

```bash
cdk deploy eks-cluster-stack
```

After successfully deploying the stack, check the `Outputs` section of the stack. You will find the **ConfigCommand** that allows you to interact with your cluster using `kubectl`.
Stack: ssm-agent-installer-daemonset-stack

The EKS AMI used in this stack does not include the AWS SSM Agent out of the box. If we ever want to patch or run something remotely on our EKS nodes, this agent is really helpful to automate those tasks. We will deploy a daemonset that will run exactly once on each node using a cron entry injection that deletes itself after successful execution. If you are interested, take a look at the daemonset manifest here `stacks/back_end/eks_cluster_stacks/eks_ssm_daemonset_stack/eks_ssm_daemonset_stack.py`. This is inspired by this AWS guidance.

Initiate the deployment with the following command,

```bash
cdk deploy ssm-agent-installer-daemonset-stack
```

After successfully deploying the stack, you can connect to the worker node instances using SSM Session Manager.
To start with network policies, we need a plugin to enforce those policies. In this blog we will use Calico3. All the manifests to deploy the namespace, pods and network policies are available under this directory in the repo: `stacks/k8s_utils/sample_manifests/`
Install Calico on AWS EKS

We are using Linux nodes in our cluster; initiate the installation with these commands,

```bash
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-crs.yaml
```
Confirm that the `calico-system` daemonset is running,

```bash
kubectl get daemonset calico-node --namespace calico-system
```

Expected output,

```bash
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         2       2            2           kubernetes.io/os=linux   29s
```
Create Namespace

As network policies are namespaced resources, let us begin by creating a new namespace,

```bash
kubectl apply -f miztiik-automation-ns.yml
```
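The namespace manifest itself is only a few lines; the actual file ships in the repo. A minimal sketch of what `miztiik-automation-ns.yml` might contain:

```yaml
# Minimal namespace manifest (sketch); the network policies and pods
# below will all live in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: miztiik-automation-ns
```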
Deploy Pods

We will create two pods,

- pod name: `k-shop-red` with label `role:red`
- pod name: `k-shop-blue` with label `role:blue`

```bash
kubectl create -f pod_red.yml
kubectl create -f pod_blue.yml
```
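The pod manifests ship in the repo; a hedged sketch of what `pod_red.yml` might look like (the image is an assumption — any container serving on port 80, such as nginx, fits the tests that follow):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: k-shop-red
  namespace: miztiik-automation-ns
  labels:
    role: red              # the label the ALLOW policy will match on
spec:
  containers:
    - name: web
      image: nginx         # assumption: any image listening on port 80
      ports:
        - containerPort: 80
```

`pod_blue.yml` would be identical apart from the name `k-shop-blue` and the label `role: blue`.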
Confirm the pods are running and make a note of their IPs; we will use them later for testing.

```bash
kubectl get pods -o wide -n miztiik-automation-ns
```

Expected output,

```bash
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
k-shop-blue   1/1     Running   0          8s    10.10.0.210   ip-10-10-0-215.us-east-2.compute.internal   <none>           <none>
k-shop-red    1/1     Running   0          91s   10.10.0.194   ip-10-10-0-215.us-east-2.compute.internal   <none>           <none>
```
Apply Network Policies

We will create two policies, one to show allow and another to show deny,

- ALLOW Policy - `allow-red-ingress-policy`. This policy allows pods with label `role:red` to receive ingress traffic from any pod within the same namespace. You can extend this to allow ingress only from blue pods, or only from certain IP addresses, etc.
- DENY Policy - `deny-blue-ingress-policy`. This policy denies all ingress traffic to pods with label `role:blue`

```bash
kubectl apply -f allow-red-ingress.yml
kubectl apply -f deny-blue-ingress.yml
```
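The real manifests live in the repo; hedged sketches of the two policies, with the rule details inferred from the behaviour described above:

```yaml
# allow-red-ingress.yml (sketch): ingress to role:red pods is allowed
# from any pod in the same namespace (an empty podSelector under
# "from" matches all pods in the policy's own namespace).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-red-ingress-policy
  namespace: miztiik-automation-ns
spec:
  podSelector:
    matchLabels:
      role: red
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
---
# deny-blue-ingress.yml (sketch): selecting role:blue pods for Ingress
# while listing no ingress rules denies all inbound traffic to them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-blue-ingress-policy
  namespace: miztiik-automation-ns
spec:
  podSelector:
    matchLabels:
      role: blue
  policyTypes:
    - Ingress
```

To allow red ingress only from blue pods instead of the whole namespace, replace the empty `podSelector: {}` under `from:` with `podSelector: {matchLabels: {role: blue}}`.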
Connect to Blue Pod To Test Red Ingress

Connect to the blue pod and try to access the red pod using curl or wget,

```bash
kubectl -n miztiik-automation-ns exec --stdin --tty k-shop-blue -- /bin/bash
```

You should land in the container shell. From there, curl the red pod's IP address; the request should succeed, as the allow policy permits ingress to red pods from any pod in the same namespace.

Expected output,

```bash
root@k-shop-blue:/# curl 10.10.0.194
<!DOCTYPE html>
<html>
<head>
<title>Welc
```

As you can see, you can reach the red pod. If you create a container in another namespace and try to reach the red pod, it will time out.
Connect to Red Pod To Test Blue Ingress

Connect to the red pod and try to access the blue pod using curl or wget,

```bash
kubectl -n miztiik-automation-ns exec --stdin --tty k-shop-red -- /bin/bash
```

You should land in the container shell. From there, curl the blue pod's IP address; the request should time out eventually, as the deny policy blocks all ingress to blue pods.

Expected output,

```bash
root@k-shop-red:/# curl 10.10.0.210
curl: (7) Failed to connect to 10.10.0.210 port 80: Connection timed out
root@k-shop-red:/#
```

As you can see, you cannot reach the blue pod. If you create a container in another namespace and try to reach the blue pod, it will also time out.
Here we have demonstrated how to use Kubernetes network policies. These recommendations provide a good starting point, but network policies are much more complicated. If you're interested in exploring them in more detail, check out these network policy recipes4.
If you want to destroy all the resources created by the stack, execute the command below to delete the stack, or you can delete the stack from the console as well,

- Resources created during Deploying The Application
- Delete CloudWatch Lambda LogGroups
- Any other custom resources you have created for this demo

```bash
# Delete from cdk
cdk destroy
# Follow any on-screen prompts

# Delete the CF Stack, if you used CloudFormation to deploy the stack
aws cloudformation delete-stack \
  --stack-name "MiztiikAutomationStack" \
  --region "${AWS_REGION}"
```
This is not an exhaustive list; please carry out any other necessary steps as may be applicable to your needs.
This repository aims to show new developers, Solution Architects & Ops Engineers in AWS how to use Kubernetes network policies to secure AWS EKS. Building on that knowledge, these Udemy courses (course #1, course #2) help you build complete architectures in AWS.
Thank you for your interest in contributing to our project. Whether it is a bug report, new feature, correction, or additional documentation or solutions, we greatly value feedback and contributions from our community. Start here
Buy me a coffee β.
- Kubernetes Docs: Network Policies
- AWS Docs: Installing Calico on Amazon EKS
- Calico Docs: Network Security
- Kubernetes Network Policy Recipes
Level: 200