A smart proxy service that handles requests from the Uffizzi API to the Kubernetes API
This application connects to a Kubernetes cluster to provision users' ephemeral environments (deployment workloads) on their behalf.
While it provides a documented REST API for anyone to use, it's designed to be used with the open-source Uffizzi API (`uffizzi`).
To install the open-source version of Uffizzi, which includes this controller, see the official documentation.
For a detailed overview of the Uffizzi architecture, see the official documentation.
Uffizzi consists of the following required components:
- Uffizzi API - The primary REST API for creating and managing Uffizzi environments
- Uffizzi Controller (this repository) - A smart proxy service that handles requests from the Uffizzi API to the Kubernetes API
- Uffizzi Cluster Operator - A Kubernetes operator for managing virtual clusters
- Uffizzi CLI - A command-line interface for Uffizzi API
This `uffizzi_controller` acts as a smart and secure proxy for `uffizzi` and is designed to restrict required access to the k8s cluster. It accepts authenticated instructions from other Uffizzi components, then specifies Resources within the cluster's control API. It is implemented in Golang to leverage the best officially supported Kubernetes API client.
The controller is a required `uffizzi` supporting service and serves these purposes:
- Communicate deployment instructions via native Golang API client to the designated Kubernetes cluster(s) from the Uffizzi interface
- Provide Kubernetes cluster information back to the Uffizzi interface
- Support restricted and secure connection between the Uffizzi interface and the Kubernetes cluster
- The `main()` loop is within `cmd/controller/controller.go`, which calls `setup()` and handles exits. This initializes `global` settings and the `sentry` logging, connects to the database, initializes the Kubernetes clients, and starts the HTTP server listening.
- An HTTP request for a new Deployment arrives and is handled within `internal/http/handlers.go`. The request contains the new Deployment's integer ID.
- The HTTP handler uses the ID as an argument to call the `ApplyDeployment` function within `internal/domain/deployment.go`. This takes a series of steps:
  - It calls several methods from `internal/kuber/client.go`, which creates Kubernetes specifications for each k8s resource (Namespace, Deployment, NetworkPolicy, Service, etc.) and publishes them to the Cluster one at a time.
  - This function should return an IP address or hostname, which is added to the `data` for this Deployment's `state`.
- Any errors are then handled and returned to the HTTP client.
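The request flow above can be sketched as a minimal Go program. The handler shape and the `applyDeployment` stub below are illustrative assumptions, not the controller's actual code; the real logic lives in `internal/http/handlers.go` and `internal/domain/deployment.go`.

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"
)

// applyDeployment stands in for ApplyDeployment in
// internal/domain/deployment.go: the real function builds Kubernetes
// specifications (Namespace, Deployment, NetworkPolicy, Service, etc.),
// publishes them to the Cluster via internal/kuber/client.go, and
// returns an IP address or hostname for the Deployment's state.
func applyDeployment(id int) (string, error) {
	// Hypothetical placeholder result; the real function talks to the
	// Kubernetes API.
	return fmt.Sprintf("deployment-%d.example.com", id), nil
}

// deploymentHandler mirrors the flow described above: it extracts the
// Deployment's integer ID from the request, calls applyDeployment, and
// returns any error to the HTTP client.
func deploymentHandler(w http.ResponseWriter, r *http.Request) {
	id, err := strconv.Atoi(strings.TrimPrefix(r.URL.Path, "/deployments/"))
	if err != nil {
		http.Error(w, "invalid deployment id", http.StatusBadRequest)
		return
	}
	hostname, err := applyDeployment(id)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintln(w, hostname)
}

func main() {
	host, _ := applyDeployment(1)
	fmt.Println(host)
	// The real controller instead registers handlers and listens, e.g.:
	//   http.HandleFunc("/deployments/", deploymentHandler)
	//   log.Fatal(http.ListenAndServe(":8080", nil))
}
```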
This controller specifies custom Resources managed by popular open-source controllers:
You'll want these installed within the Cluster managed by this controller.
You can specify these within `credentials/variables.env` for use with `docker-compose` and our `Makefile`.
Some of these may have defaults within `configs/settings.yml`.
- `ENV` - The deployment environment we're currently running within. Default: `development`
- `CONTROLLER_LOGIN` - The username for HTTP Basic Authentication
- `CONTROLLER_PASSWORD` - The password for HTTP Basic Authentication
- `CONTROLLER_NAMESPACE_NAME_PREFIX` - Prefix for provisioned Namespaces. Default: `deployment`
- `CERT_MANAGER_CLUSTER_ISSUER` - The issuer for signing certificates. Possible values: `letsencrypt` (used by default), `zerossl`
- `POOL_MACHINE_TOTAL_CPU_MILLICORES` - Node resource to divide among Pods. Default: `2000`
- `POOL_MACHINE_TOTAL_MEMORY_BYTES` - Node resource to divide among Pods. Default: `17179869184`
- `DEFAULT_AUTOSCALING_CPU_THRESHOLD` - Default: `75`
- `DEFAULT_AUTOSCALING_CPU_THRESHOLD_EPSILON` - Default: `8`
- `AUTOSCALING_MAX_PERFORMANCE_REPLICAS` - Horizontal Pod Autoscaler configuration. Default: `10`
- `AUTOSCALING_MIN_PERFORMANCE_REPLICAS` - Horizontal Pod Autoscaler configuration. Default: `1`
- `AUTOSCALING_MAX_ENTERPRISE_REPLICAS` - Horizontal Pod Autoscaler configuration. Default: `30`
- `AUTOSCALING_MIN_ENTERPRISE_REPLICAS` - Horizontal Pod Autoscaler configuration. Default: `3`
- `STARTUP_PROBE_DELAY_SECONDS` - Startup Probe configuration. Default: `10`
- `STARTUP_PROBE_FAILURE_THRESHOLD` - Startup Probe configuration. Default: `80`
- `STARTUP_PROBE_PERIOD_SECONDS` - Startup Probe configuration. Default: `15`
- `EPHEMERAL_STORAGE_COEFFICIENT` - `LimitRange` configuration. Default: `1.9`
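For illustration, a minimal `credentials/variables.env` using some of the variables above might look like this (all values are placeholders, not recommendations):

```
ENV=development
CONTROLLER_LOGIN=admin
CONTROLLER_PASSWORD=change-me
CONTROLLER_NAMESPACE_NAME_PREFIX=deployment
CERT_MANAGER_CLUSTER_ISSUER=letsencrypt
```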
This process expects to be provided a Kubernetes Service Account within a Kubernetes cluster. You can emulate this with these pieces of configuration:
- `KUBERNETES_SERVICE_HOST` - Hostname (or IP) of the k8s API service
- `KUBERNETES_SERVICE_PORT` - TCP port number of the k8s API service (usually `443`)
- `KUBERNETES_NAMESPACE` - Namespace where both this controller and `ingress-nginx` reside
- `/var/run/secrets/kubernetes.io/serviceaccount/token` - Authentication token
- `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` - k8s API server's x509 host certificate
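To show how these pieces fit together, here is a hedged, stdlib-only Go sketch that reads the token and environment variables and builds an authenticated request to the k8s API. The `newAPIRequest` helper is hypothetical; the real controller uses the official Kubernetes Go client instead of hand-built HTTP requests.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"strings"
)

// newAPIRequest is a hypothetical helper: the API endpoint comes from
// KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT, and the bearer
// token comes from the service account mount.
func newAPIRequest(host, port, token, path string) (*http.Request, error) {
	url := fmt.Sprintf("https://%s:%s%s", host, port, path)
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(token))
	return req, nil
}

func main() {
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		fmt.Println("no service account token mounted:", err)
		return
	}
	req, err := newAPIRequest(
		os.Getenv("KUBERNETES_SERVICE_HOST"),
		os.Getenv("KUBERNETES_SERVICE_PORT"),
		string(token),
		"/api/v1/nodes",
	)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(req.URL)
	// A production client would also load ca.crt into its TLS config
	// before sending the request.
}
```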
Once you're configured to connect to your cluster (using `kubectl` et al.), you can get the values for these two environment variables from the output of `kubectl cluster-info`.
Add those two environment variables to `credentials/variables.env`.
The authentication token must come from the cluster's cloud provider, e.g.
gcloud config config-helper --format="value(credential.access_token)"
The server certificate must also come from the cluster's cloud provider, e.g.
gcloud container clusters describe uffizzi-pro-production-gke --zone us-central1-c --project uffizzi-pro-production-gke --format="value(masterAuth.clusterCaCertificate)" | base64 --decode
You should write these two values to `credentials/token` and `credentials/ca.crt`; the `make` commands and `docker-compose` will copy them for you.
While developing, we most often run the controller within a shell on our workstations.
`docker-compose` will set up this shell and mount the current working directory within the container so you can use other editors from outside.
To log into the Docker container, run:
make shell
All commands in this "Shell" section should be run inside this shell.
After making any desired changes, compile the controller:
go install ./cmd/controller/...
/go/bin/controller
Once you've configured access to your k8s Cluster (see above), you can test `kubectl` within the shell:
kubectl --token=`cat /var/run/secrets/kubernetes.io/serviceaccount/token` --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt get nodes
In the Docker shell:
make test
make lint
make fix_lint
Once the controller is running on your workstation, you can make HTTP requests to it from outside of the shell.
curl localhost:8080 \
--user "${CONTROLLER_LOGIN}:${CONTROLLER_PASSWORD}"
This will remove the specified Preview's Namespace and all of its Resources.
curl -X POST localhost:8080/clean \
--user "${CONTROLLER_LOGIN}:${CONTROLLER_PASSWORD}" \
-H "Content-Type: application/json" \
-d '{ "environment_id": 1 }'
Available at http://localhost:8080/docs/
Functional usage within a Kubernetes Cluster is beyond the scope of this document. For more, join us on Slack or contact us at [email protected].
That said, we've included a Kubernetes manifest to help you get started at `infrastructure/controller.yaml`.
Review it and change relevant variables before applying this manifest.
You'll also need to install and configure the dependencies identified near the top of this document.