diff --git a/README.md b/README.md
index df6a930..433bce8 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,11 @@ The SaladCloud Virtual Kubelet Provider creates a _virtual node_ in your K8s clu
 To the K8s API, it looks like a real node. However, when you schedule a pod on the virtual node, a container group deployment is created using the SaladCloud API instead of running the pod on a node in the K8s cluster. The container group deployment runs the pod on a remote, GPU-enabled node on the SaladCloud network.
 
+## Demo
+
+This provider was used in a presentation at KubeCon 2023; the script and pod spec file for the QR code workload are in
+the [demo](demo) directory.
+
 ## Development
 
 Follow the steps below to get started with local development.
diff --git a/demo/README.md b/demo/README.md
new file mode 100644
index 0000000..05f6ea9
--- /dev/null
+++ b/demo/README.md
@@ -0,0 +1,32 @@
+# Salad Cloud Virtual Kubelet Demo
+
+These files were originally used in dtroyer's KubeCon 2023 talk.
+
+## Run the Demo
+
+This outlines running the demo on Docker Desktop's Kubernetes implementation:
+
+* Set up the environment with a .env file (or equivalent) similar to this one:
+  ```bash
+  export LOG_LEVEL=INFO
+  export SCE_API_KEY=
+  export SCE_ORGANIZATION_NAME=salad
+  export SCE_PROJECT_NAME=demo
+  export NAMESPACE=saladcloud-demo
+  export NODE_NAME=${SCE_PROJECT_NAME}
+  ```
+* Start with a fresh Kubernetes environment. Only the docker-desktop control plane should be displayed
+  with `kubectl get node` and `kubectl get pod`. `demo.sh status` will run both of those commands
+  at once.
+* Run the virtual kubelet via its Helm chart using `demo.sh start`.
+* Run `demo.sh status` again to see that the virtual kubelet is registered as an agent.
+* Start the QR code workload with `kubectl apply -f qr.yaml`.
+* Run `demo.sh status` again to see two additional pods listed that are the requested container groups.
+  The pod names should match the Container Groups in the Salad Portal (NOTE: with a prefix!)
+* Note that Kubernetes pods are mapped to SCE Container Groups 1:1; specifying 2 replicas in the YAML spec
+  will result in two Container Groups being created. At this time there is no mechanism to specify the
+  Container Group replica count, and it will always be 1.
+* Once a Container Group is shown running, grab the assigned URL from the portal and paste it into a browser.
+  Commence generating crazy QR codes that look like city skylines or a plaid flannel shirt.
+* Run `kubectl delete -f qr.yaml` to stop the Container Groups.
+* Run `demo.sh stop` to stop the virtual kubelet pod.
diff --git a/demo/demo.sh b/demo/demo.sh
new file mode 100755
index 0000000..c13c3c0
--- /dev/null
+++ b/demo/demo.sh
@@ -0,0 +1,61 @@
+#!/bin/bash
+# demo.sh - Control demo saladcloud-virtual-kubelet instance in K8s cluster
+
+# Typically a .env file similar to the following is used to configure the virtual kubelet
+# export LOG_LEVEL=INFO
+# export SCE_API_KEY=
+# export SCE_ORGANIZATION_NAME=salad
+# export SCE_PROJECT_NAME=demo
+# export NAMESPACE=saladcloud-demo
+# export NODE_NAME=${SCE_PROJECT_NAME}
+
+TOP_DIR=$(cd "$(dirname "$0")" && pwd)
+
+# Script lives in a subdirectory of the top repo dir
+pushd "$TOP_DIR/.." >/dev/null
+
+IMAGE_TAG=${IMAGE_TAG:-latest}
+NAMESPACE=${NAMESPACE:-saladcloud-demo}
+NODE_NAME=${NODE_NAME:-demo}
+
+if [[ "$1" == "start" ]]; then
+    shift
+    CMD=" \
+        helm install \
+        --create-namespace \
+        --namespace ${NAMESPACE} \
+        --set salad.organizationName=${SCE_ORGANIZATION_NAME} \
+        --set salad.projectName=${SCE_PROJECT_NAME} \
+        --set provider.image.tag=${IMAGE_TAG} \
+        --set provider.nodename=${NODE_NAME}-vk \
+        ${NODE_NAME} \
+        ./charts/virtual-kubelet"
+    echo $CMD
+    $CMD \
+        --set salad.apiKey=${SCE_API_KEY} \
+        --set provider.logLevel=${LOG_LEVEL}
+
+elif [[ "$1" == "stop" ]]; then
+    shift
+    helm uninstall \
+        --namespace ${NAMESPACE} \
+        ${NODE_NAME}
+elif [[ "$1" == "status" ]]; then
+    echo ""
+    echo "$ kubectl get node"
+    kubectl get node
+    echo ""
+    echo "$ kubectl --namespace ${NAMESPACE} get pod"
+    kubectl --namespace ${NAMESPACE} get pod
+elif [[ "$1" == "logs" ]]; then
+    podname=$(kubectl --namespace ${NAMESPACE} get pod | awk "/${NODE_NAME}/ { print \$1 }")
+    kubectl --namespace ${NAMESPACE} logs "$podname"
+elif [[ "$1" == "apply" ]]; then
+    # Launch the qr-code workload
+    kubectl apply -f demo/qr.yaml
+elif [[ "$1" == "delete" ]]; then
+    # Delete the qr-code workload
+    kubectl delete -f demo/qr.yaml
+fi
+
+popd >/dev/null
diff --git a/demo/qr.yaml b/demo/qr.yaml
new file mode 100644
index 0000000..f0a75d8
--- /dev/null
+++ b/demo/qr.yaml
@@ -0,0 +1,59 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: qr-code-demo
+  name: qr-code
+  namespace: saladcloud-demo
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: qr-code-demo
+  template:
+    metadata:
+      annotations:
+        salad.com/country-codes: us
+        salad.com/networking-protocol: "http"
+        salad.com/networking-port: "1234"
+        salad.com/networking-auth: "false"
+        salad.com/gpu-classes: "dec851b7-eba7-4457-a319-a01b611a810e"
+        # salad.com/gpu-classes: "cb6c1931-89b6-4f76-976f-54047320ccc6"
+      labels:
+        app: qr-code-demo
+    spec:
+      containers:
+        - image: saladtechnologies/stable-fast-qr-code:latest-baked
+          name: qr-code
+          resources:
+            limits:
+              cpu: 2
+              memory: 8192
+          env:
+            - name: HOST
+              value: "*"
+            - name: PORT
+              value: "1234"
+          startupProbe:
+            exec:
+              command: [ "curl", "--fail", "http://localhost:1234/hc" ]
+            initialDelaySeconds: 60
+            failureThreshold: 60
+            periodSeconds: 10
+          livenessProbe:
+            exec:
+              command: [ "curl", "--fail", "http://localhost:1234/hc" ]
+            initialDelaySeconds: 60
+            failureThreshold: 60
+            periodSeconds: 10
+      nodeSelector:
+        kubernetes.io/role: agent
+        type: virtual-kubelet
+      os:
+        name: linux
+      restartPolicy: Always
+      tolerations:
+        - key: virtual-kubelet.io/provider
+          operator: Equal
+          value: saladcloud
+          effect: NoSchedule
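The `--set` flags that `demo.sh start` passes to `helm install` could equivalently be collected in a values file. This is a sketch only: the key names are taken from the `--set` flags above, the values match the sample `.env`, and `values.yaml` is a hypothetical file name.

```yaml
# Hypothetical values.yaml mirroring demo.sh's --set flags
salad:
  organizationName: salad   # SCE_ORGANIZATION_NAME
  projectName: demo         # SCE_PROJECT_NAME
  apiKey: ""                # SCE_API_KEY (do not commit a real key)
provider:
  logLevel: INFO            # LOG_LEVEL
  nodename: demo-vk         # ${NODE_NAME}-vk
  image:
    tag: latest             # IMAGE_TAG
```

It would then be applied with `helm install --create-namespace --namespace saladcloud-demo -f values.yaml demo ./charts/virtual-kubelet`.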
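The subcommand dispatch in demo.sh above uses an if/elif chain; the more common shell idiom for this is a `case` statement. Below is a minimal sketch of that restructuring only, not the real script: the helm/kubectl invocations are stubbed out with `echo` so the control flow can be exercised without a cluster, and only three of the subcommands are shown.

```shell
#!/bin/bash
# Sketch: demo.sh's if/elif subcommand dispatch rewritten as a case
# statement. The real helm/kubectl calls are stubbed with echo here.
NAMESPACE=${NAMESPACE:-saladcloud-demo}
NODE_NAME=${NODE_NAME:-demo}

dispatch() {
    case "$1" in
        start)
            echo "helm install --namespace ${NAMESPACE} ${NODE_NAME} ./charts/virtual-kubelet"
            ;;
        stop)
            echo "helm uninstall --namespace ${NAMESPACE} ${NODE_NAME}"
            ;;
        status)
            echo "kubectl get node"
            echo "kubectl --namespace ${NAMESPACE} get pod"
            ;;
        *)
            echo "usage: demo.sh {start|stop|status|logs|apply|delete}" >&2
            return 1
            ;;
    esac
}

# Default to "status" so running the sketch with no argument prints something.
dispatch "${1:-status}"
```

A `case` statement also makes the unknown-subcommand path explicit (the `*)` arm), which the original if/elif chain silently ignores.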