Lithops with Knative as serverless compute backend.

Lithops also supports vanilla Knative for running applications. The easiest way to get it working is to create an IBM Kubernetes Service (IKS) cluster through the IBM dashboard. Alternatively, you can use your own Kubernetes cluster or a kind/minikube installation.

Note that Lithops automatically builds the default runtime the first time you run a script. For this task it uses the `docker` command, which must be installed locally on your machine.
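
If you want Lithops to use a custom runtime instead, the Lithops CLI can build and push one from a Dockerfile. A minimal sketch, assuming a Docker Hub account and a hypothetical `MyDockerfile` that extends the default runtime image:

```bash
# Build a custom runtime for the Knative backend and push it to the registry
# (the Dockerfile and image name are placeholders)
lithops runtime build -b knative -f MyDockerfile <docker_username>/lithops-knative-runtime:v1
```

The resulting image can then be referenced through the `runtime` key of the `knative` config section.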

- Install Knative backend dependencies:

  ```bash
  python3 -m pip install lithops[knative]
  ```

- Log in to your Docker account:

  ```bash
  docker login
  ```

- Choose one of these 3 installation options:

Option 1 (minikube):

- Start minikube with the `ingress` addon:

  ```bash
  minikube start --addons=ingress
  ```

- Install a networking layer. Currently Lithops supports Kourier; follow the Kourier installation instructions (a sketch is provided after these options).

- Edit your Lithops config and add:

  ```yaml
  knative:
      ingress_endpoint: http://127.0.0.1:80
  ```

- In a separate terminal, keep the following command running:

  ```bash
  minikube tunnel
  ```

Option 2 (IBM IKS):

- Access the IBM dashboard and create a new Kubernetes cluster.

- Once the cluster is running, follow the instructions under the "Actions" --> "Connect via CLI" option of the dashboard to configure the kubectl client on your local machine.

- Install a networking layer. Currently Lithops supports Kourier; follow the Kourier installation instructions (a sketch is provided after these options).

Option 3 (own Kubernetes cluster):

- Install Kubernetes >= v1.16 and make sure the kubectl client is running.

- Install a networking layer. Currently Lithops supports Kourier; follow the Kourier installation instructions (a sketch is provided after these options).

- Make sure you have the ~/.kube/config file. Alternatively, you can set the KUBECONFIG environment variable:

  ```bash
  export KUBECONFIG=<path-to-kube-config-file>
  ```
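
For reference, the following is a minimal Kourier installation sketch. It assumes Knative Serving is already installed in the `knative-serving` namespace; the release tag in the URL is an assumption and should match your Knative Serving version.

```bash
# Install the Kourier networking layer (adjust the release tag to your Knative version)
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.12.0/kourier.yaml

# Configure Knative Serving to use Kourier as its ingress class
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'

# The external IP/hostname of the kourier service can be used as the
# ingress_endpoint in the Lithops config (with minikube, use http://127.0.0.1:80
# together with `minikube tunnel` instead)
kubectl get svc kourier -n kourier-system
```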

- Edit your Lithops config and add the following keys:

  ```yaml
  lithops:
      backend: knative
  ```

To configure Lithops to access a private repository in your Docker Hub account, you need to extend the knative config section with the following keys:

```yaml
knative:
    ....
    docker_server: docker.io
    docker_user: <Docker Hub username>
    docker_password: <Docker Hub access token>
```

To configure Lithops to access a private repository in your IBM Container Registry, you need to extend the knative config section with the following keys:

```yaml
knative:
    ....
    docker_server: us.icr.io
    docker_user: iamapikey
    docker_password: <IBM IAM API key>
    docker_namespace: <namespace>  # namespace name from https://cloud.ibm.com/registry/namespaces
```
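
As a convenience, both the IAM API key and the registry namespace can also be created from the IBM Cloud CLI. A brief sketch, assuming the CLI and its `container-registry` plugin are installed and you are logged in; the key name is a placeholder:

```bash
# Create an IAM API key to use as docker_password
ibmcloud iam api-key-create lithops-knative-key

# Create a Container Registry namespace to use as docker_namespace
ibmcloud cr namespace-add <namespace>

# List existing namespaces
ibmcloud cr namespace-list
```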

Summary of configuration keys for Knative:

Group | Key | Default | Mandatory | Additional info |
---|---|---|---|---|
knative | kubecfg_path | | no | Path to kubecfg file. Mandatory if config file not in ~/.kube/config or KUBECONFIG env var not present |
knative | networking_layer | kourier | no | One of: kourier or istio |
knative | ingress_endpoint | | no | Ingress endpoint. Make sure to use the http:// prefix |
knative | docker_server | docker.io | no | Container registry URL |
knative | docker_user | | no | Container registry user name |
knative | docker_password | | no | Container registry password/token. In the case of Docker Hub, log in to your Docker Hub account and generate a new access token |
knative | git_url | | no | Git repository to build the image |
knative | git_rev | | no | Git revision to build the image |
knative | max_workers | 100 | no | Max number of workers per FunctionExecutor() |
knative | worker_processes | 1 | no | Number of Lithops processes within a given worker. This can be used to parallelize function activations within a worker. It is recommended to set this value to the number of CPUs of the container |
knative | runtime | | no | Docker image name |
knative | runtime_cpu | 1 | no | CPU limit. Default 1 vCPU |
knative | runtime_memory | 512 | no | Memory limit in MB. Default 512 MB |
knative | runtime_timeout | 600 | no | Runtime timeout in seconds. Default 600 seconds |
knative | invoke_pool_threads | 100 | no | Number of concurrent threads used for invocation |
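
Putting these keys together, a complete configuration for this backend might look like the following sketch; all values are illustrative and should be adapted to your cluster and registry:

```yaml
lithops:
    backend: knative
    storage: ibm_cos

knative:
    # Illustrative values; adapt to your cluster and registry
    networking_layer: kourier
    ingress_endpoint: http://127.0.0.1:80
    docker_server: docker.io
    docker_user: <Docker Hub username>
    docker_password: <Docker Hub access token>
    runtime_memory: 512
    runtime_cpu: 1
    runtime_timeout: 600
    worker_processes: 1
```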

- Verify that all the pods in the `knative-serving` namespace are in `Running` status:

  ```bash
  kubectl get pods -n knative-serving
  ```

- Monitor how pods and other resources are created:

  ```bash
  watch kubectl get pod,service,revision,deployment -o wide
  ```

Once you have your compute and storage backends configured, you can run a hello world function with:

```bash
lithops hello -b knative -s ibm_cos
```
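
The same check can be done from Python by selecting the backend explicitly in a `FunctionExecutor`. A minimal sketch, assuming the configuration above and IBM COS as the storage backend:

```python
import lithops

def hello(name):
    return f'Hello {name}!'

# backend/storage override whatever defaults are set in the Lithops config file
fexec = lithops.FunctionExecutor(backend='knative', storage='ibm_cos')
fexec.call_async(hello, 'World')
print(fexec.get_result())
```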

You can view the function execution logs in your local machine using the Lithops client:

```bash
lithops logs poll
```