This Python script automates the creation of an AWS EKS cluster using Terraform, deploys microservices across multiple namespaces, installs Prometheus and Kubescape, and allows for cluster destruction once testing is complete. It also checks for any pods in the CrashLoopBackOff state and handles parallel application of services.
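The CrashLoopBackOff check can be sketched as a small helper that inspects the JSON produced by `kubectl get pods -A -o json`. This is a minimal sketch: the function name `find_crashloop_pods` is hypothetical and not necessarily how performance.py implements the check.

```python
import json


def find_crashloop_pods(pods_json: str) -> list:
    """Return names of pods with a container in CrashLoopBackOff.

    Expects the JSON output of `kubectl get pods -A -o json`.
    """
    pods = json.loads(pods_json)
    crashing = []
    for pod in pods.get("items", []):
        for status in pod.get("status", {}).get("containerStatuses", []):
            waiting = status.get("state", {}).get("waiting") or {}
            if waiting.get("reason") == "CrashLoopBackOff":
                crashing.append(pod["metadata"]["name"])
                break  # one crashing container is enough to flag the pod
    return crashing
```

Parsing the JSON keeps the check independent of `kubectl` output formatting, which can change between versions for the plain table view.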
- Create an EKS cluster using Terraform with m5.xlarge EC2 instances.
- Scale the number of nodes in the cluster dynamically.
- Create twice as many namespaces as the number of nodes in the cluster.
- Deploy microservices across multiple namespaces in parallel.
- Install Prometheus stack for monitoring.
- Optionally skip the cluster creation and only connect to an existing cluster.
- Apply Kubescape.
- Destroy the cluster and associated infrastructure.
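The parallel deployment across namespaces could be structured along these lines; the helper name, the use of `ThreadPoolExecutor`, and the injectable `run` callable are assumptions for illustration, not taken from performance.py.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor


def apply_to_namespaces(manifest, namespaces, run=subprocess.run, max_workers=8):
    """Apply one manifest to many namespaces in parallel.

    `run` defaults to subprocess.run but is injectable so the fan-out
    logic can be exercised without a live cluster.
    """
    def apply_one(ns):
        return run(["kubectl", "apply", "-f", manifest, "-n", ns])

    # pool.map preserves input order, so results line up with `namespaces`
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(apply_one, namespaces))
```

A thread pool is a reasonable fit here because the work is I/O-bound (waiting on `kubectl`), so the GIL is not a bottleneck.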
Before running the script, ensure the following tools are installed:
- AWS CLI (for connecting to EKS)
- kubectl (for managing Kubernetes clusters)
- Python 3.x (for running the script)
Clone the repository:
git clone https://github.com/armosec/perfornamce.git
cd perfornamce
Arguments:
-nodes: Specifies the number of nodes for the EKS cluster. The script adds 1 more node to this number (the default is 2).
-kdr: Enables Kubescape runtime detection capabilities during the Kubescape installation.
-destroy: Destroys the Terraform-managed infrastructure, including the EKS cluster.
-skip-cluster: Skips cluster creation and only connects to an existing EKS cluster.
-account: The account ID for deploying Kubescape. Required when deploying Kubescape.
-accessKey: The access key for deploying Kubescape. Required when deploying Kubescape.
-version: Specifies the version of Kubescape to deploy. If not provided, the latest version is used.
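The flags above could be wired up with `argparse` roughly as follows. This is a sketch: the exact defaults, help text, and destination names are assumptions, not lifted from performance.py.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical argument parser mirroring the documented CLI flags."""
    parser = argparse.ArgumentParser(description="EKS performance testing")
    parser.add_argument("-nodes", type=int, default=2,
                        help="Number of nodes (one extra is provisioned)")
    parser.add_argument("-kdr", action="store_true",
                        help="Enable Kubescape runtime detection")
    parser.add_argument("-destroy", action="store_true",
                        help="Destroy the Terraform-managed infrastructure")
    parser.add_argument("-skip-cluster", dest="skip_cluster", action="store_true",
                        help="Connect to an existing cluster only")
    parser.add_argument("-account", help="Account ID for deploying Kubescape")
    parser.add_argument("-accessKey", help="Access key for deploying Kubescape")
    parser.add_argument("-version", help="Kubescape version (latest if omitted)")
    return parser
```

Note that `argparse` accepts single-dash long options such as `-nodes`, matching the flag style used throughout this document.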
-
To create an EKS cluster with a specific number of nodes, use the following command:
python performance.py -nodes 10
-
To create an EKS cluster with a specific number of nodes and enable KDR:
python performance.py -nodes 10 -kdr
-
Skip Cluster Creation and Connect to an Existing Cluster:
python performance.py -skip-cluster
-
Create a cluster and deploy Kubescape (you need an account ID and access key):
python performance.py -nodes 10 -account <your-account-id> -accessKey <your-access-key>
-
Apply a specific version of Kubescape:
python performance.py -nodes 10 -account <your-account-id> -accessKey <your-access-key> -version <version>
-
To destroy the cluster:
python performance.py -destroy
** When you run terraform destroy, you must keep your terminal open while the operation runs; otherwise, it will be canceled.
note:
-
Provisioning an additional node for Prometheus and Kubescape: The script will provision one extra node beyond the specified number of nodes in the cluster. This additional node is reserved for deploying Prometheus and Kubescape and is not used in the namespace calculation. The namespaces are created based on twice the number of originally specified nodes, without including the extra node for Prometheus and Kubescape.
For example, if you specify 5 nodes, the script will provision 6 nodes but calculate namespaces as 2 * 5, resulting in 10 namespaces.
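The sizing rule above boils down to a one-line calculation; the helper name `plan` is hypothetical and used only to make the rule concrete.

```python
def plan(nodes: int):
    """Return (provisioned_nodes, namespaces) for a requested node count.

    One extra node is provisioned for Prometheus and Kubescape; the
    namespace count is based only on the originally requested nodes.
    """
    return nodes + 1, 2 * nodes
```

For the example in the note, `plan(5)` yields 6 provisioned nodes and 10 namespaces.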
After deploying the Prometheus stack with Grafana, you can expose Grafana and retrieve the admin password using the following steps:
-
Expose Grafana using port-forwarding: To make Grafana accessible from your local machine, use the following kubectl port-forward command:
kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80
This will forward port 3000 on your local machine to port 80 of the Grafana service. You can now access Grafana by visiting http://localhost:3000 in your browser.
-
Retrieve the Grafana admin password: To get the Grafana admin password, run the following command:
kubectl get secret -n monitoring kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
This will output the Grafana admin password, which you can use to log in.