A simple PoC that demonstrates scalable cloud patterns and the advantages of adhering to the 12 Factor App methodology.
Although a Docker Compose file is available, this PoC is designed to run in a Kubernetes cluster and has been configured to be Istio-friendly.
The kube-up and kube-down shell scripts are provided to start and shut down the PoC.
The gen-load script can be used to generate test values from the command line instead of the browser.
% kube-up
Deploying Secrets
secret/vault created
Deploying RabbitMQ
service/rabbitmq-cluster-ip created
deployment.apps/rabbitmq-deployment created
Sleeping 30 sec to allow for RabbitMQ startup
▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇| 100%
Deploying Redis
service/redis-cluster-ip created
deployment.apps/redis-deployment created
Creating Persistent Volume Claim
persistentvolumeclaim/database-pvc created
Deploying Postgres
service/postgres-cluster-ip created
deployment.apps/postgres-deployment created
Deploying WORKER service
deployment.apps/worker-deployment created
Deploy WORKER horizontal pod autoscaler? (y/n)
y
Deploying WORKER horizontal pod autoscaler
horizontalpodautoscaler.autoscaling/worker-hpa created
Deploying API service
service/api-cluster-ip created
deployment.apps/api-deployment created
Deploying CLIENT service
service/client-cluster-ip created
deployment.apps/client-deployment created
Deploying ingress service
ingress.networking.k8s.io/ingress-service created
Finished deploying cloud native system
% kube-down
Deleting ingress service
ingress.networking.k8s.io "ingress-service" deleted
Deleting CLIENT service
service "client-cluster-ip" deleted
deployment.apps "client-deployment" deleted
Deleting API service
service "api-cluster-ip" deleted
deployment.apps "api-deployment" deleted
Deleting WORKER horizontal pod autoscaler
horizontalpodautoscaler.autoscaling "worker-hpa" deleted
Deleting WORKER service
deployment.apps "worker-deployment" deleted
Deleting Postgres
service "postgres-cluster-ip" deleted
deployment.apps "postgres-deployment" deleted
Delete Persistent Volume Claim? (y/n)
y
Deleting Persistent Volume Claim
persistentvolumeclaim "database-pvc" deleted
Deleting Redis
service "redis-cluster-ip" deleted
deployment.apps "redis-deployment" deleted
Deleting RabbitMQ
service "rabbitmq-cluster-ip" deleted
deployment.apps "rabbitmq-deployment" deleted
Deleting Secrets
secret "vault" deleted
Finished destroying cloud native system
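For reference, a kube-up-style script can be little more than a sequence of kubectl apply calls issued in dependency order, with a pause so RabbitMQ is up before the services that depend on it. The sketch below is an approximation; the manifest file names are assumptions, not the PoC's actual layout.

```sh
#!/bin/sh
# Sketch of a kube-up-style deploy script (manifest names are assumptions).
kubectl apply -f k8s/secrets.yaml
kubectl apply -f k8s/rabbitmq.yaml
sleep 30                          # give RabbitMQ time to start accepting connections
kubectl apply -f k8s/redis.yaml
kubectl apply -f k8s/postgres.yaml
kubectl apply -f k8s/worker.yaml
kubectl apply -f k8s/api.yaml
kubectl apply -f k8s/client.yaml
kubectl apply -f k8s/ingress.yaml
```

A kube-down-style script reverses the same steps with kubectl delete -f.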
% gen-load seq 10 3
Request 1
{"working":true} (1)
Request 2
{"working":true} (2)
Request 3
{"working":true} (0)
Request 4
{"working":true} (1)
Request 5
{"working":true} (2)
Request 6
{"working":true} (0)
Request 7
{"working":true} (1)
Request 8
{"working":true} (2)
Request 9
{"working":true} (0)
Request 10
{"working":true} (1)
% gen-load rep 5 ERR
Request 1
Invalid entry. Allowed range is 0 to 55 (ERR)
Request 2
Invalid entry. Allowed range is 0 to 55 (ERR)
Request 3
Invalid entry. Allowed range is 0 to 55 (ERR)
Request 4
Invalid entry. Allowed range is 0 to 55 (ERR)
Request 5
Invalid entry. Allowed range is 0 to 55 (ERR)
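A gen-load-style driver can be approximated with a small curl loop, as in the sketch below. The endpoint URL and payload field name are assumptions for illustration; the real script may differ.

```sh
#!/bin/sh
# gen-load MODE COUNT ARG
#   seq COUNT MOD   -> COUNT requests with values cycling through 0..MOD-1
#   rep COUNT VALUE -> COUNT requests repeating the same VALUE
mode=$1; count=$2; arg=$3
for i in $(seq 1 "$count"); do
  case "$mode" in
    seq) value=$(( i % arg )) ;;
    rep) value=$arg ;;
  esac
  echo "Request $i"
  # Endpoint and field name are assumptions for this sketch.
  curl -s -X POST "http://localhost/api/values" -d "value=$value"
  echo " ($value)"
done
```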
Upon entering an integer in the web front-end, located at http://localhost/ when deployed locally (http://localhost:8080/ if running via Docker Compose), the system calculates the corresponding Fibonacci number using a deliberately inefficient recursive algorithm that runs in O(c^n) time. For a good analysis of other (better) implementations, see Ali Dasdan's paper Twelve Simple Algorithms to Compute Fibonacci Numbers.
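The naive doubly recursive algorithm looks roughly like the shell sketch below (the worker's actual implementation language isn't shown in this README); every call spawns two more calls, which is where the exponential cost comes from.

```sh
# Naive recursive Fibonacci; purely illustrative of the O(c^n) blow-up.
fib() {
  if [ "$1" -lt 2 ]; then
    echo "$1"                                             # base cases: fib(0)=0, fib(1)=1
  else
    echo $(( $(fib $(($1 - 1))) + $(fib $(($1 - 2))) ))  # two recursive calls per invocation
  fi
}
fib 10   # prints 55
```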
Values of 40 or less should complete within seconds with negligible resource consumption, while values approaching 50 will take exponentially longer and heavily tax the CPU. Values above 50 are not recommended, and values above 55 are rejected.
Although overly simplistic in implementation, each service is designed to be event-driven, concurrent, stateless, disposable, and capable of significant horizontal scaling. All of the system's state is managed by backing services (PostgreSQL, RabbitMQ, and Redis).
When deployed in a Kubernetes cluster, a Horizontal Pod Autoscaler (HPA) monitors the Worker's CPU utilization and automatically creates additional instances (up to 4 in this PoC) whenever utilization stays above 50% for 15 seconds or more. When the combined Workers' CPU utilization falls back below that mark, the HPA terminates the unneeded instances.
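An autoscaler with those thresholds can also be created imperatively with kubectl. This is a sketch using the deployment name from the transcript above, not necessarily how the PoC's own worker-hpa manifest is written (the minimum of 1 replica is an assumption):

```sh
# Keep worker-deployment between 1 and 4 replicas, targeting 50% average CPU.
kubectl autoscale deployment worker-deployment --cpu-percent=50 --min=1 --max=4
```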
The HPA's monitoring interval can be changed by setting the --horizontal-pod-autoscaler-sync-period flag on the cluster's default controller manager.
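For example, on a kubeadm-provisioned control plane (an assumption; file locations vary by distribution), the flag is set in the kube-controller-manager static pod manifest:

```sh
# /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm layout)
# Add the flag to the container's command list; the default is 15s.
#   - --horizontal-pod-autoscaler-sync-period=30s
```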