Refer to the Serverless Functions atop Cloud-Edge Deployment tutorial to understand its application.
OpenYurt extends Kubernetes from a single cluster to a federation of clusters and offers a seamless way to manage distributed nodes and applications efficiently. This guide outlines how to establish a Kubernetes cluster via kubeadm and then deploy OpenYurt and Knative on it, in a setup compatible with vHive stock-only mode.
1.2 About OpenYurt
OpenYurt is designed to keep applications running at the edge of the network operational, even in poor connectivity conditions. It does so by implementing a per-node proxy (YurtHub) along with local storage that caches the state of the cloud apiserver. Consequently, when a node loses its connection, the cached state remains accessible to the Kubelet, KubeProxy, and any user Pods.
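Once the cluster is up (see the deployment steps below), you can observe this proxy directly; a minimal check, assuming YurtHub runs as a static Pod named `yurt-hub-<node-name>` in `kube-system` (exact naming may vary across OpenYurt releases):

```bash
# YurtHub runs on each edge node and proxies/caches apiserver traffic.
kubectl get pods -n kube-system -o wide | grep yurt-hub
```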
In an edge computing Kubernetes cluster, nodes are often distributed across various geographical regions. Edge nodes within the same physical region are grouped into NodePools in OpenYurt. When relying on a native Container Network Interface (CNI) solution, Pods in different NodePools may be unable to communicate via Pod IP, Service IP, or Node IP, particularly if each NodePool resides in its own isolated LAN. OpenYurt therefore introduces a gateway called Raven, which provides a networking solution for cross-NodePool communication within an OpenYurt cluster.
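For illustration, a NodePool is an ordinary custom resource; below is a minimal sketch of creating an edge pool and adding a node to it, assuming the OpenYurt CRDs are installed. The apiVersion and label key vary across OpenYurt releases, and the names `beijing` and `edge-0` are placeholders:

```bash
# Create an edge NodePool.
kubectl apply -f - <<EOF
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: beijing
spec:
  type: Edge
EOF

# Add a node to the pool by labeling it with the pool name.
kubectl label node edge-0 apps.openyurt.io/nodepool=beijing
```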
To manage applications and traffic across multiple node pools conveniently, OpenYurt introduces YurtAppSet and YurtAppDaemon for managing workloads across node pools, along with a service topology capability (based on geographical location, network latency, etc.) for routing traffic within a node pool.
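As a sketch of how YurtAppSet distributes a workload (not something this deployer requires you to write), the resource stamps out one Deployment per listed pool, each with its own replica count. The schema below follows the `v1alpha1` API and may differ in newer OpenYurt releases; all names and the image are placeholders:

```bash
kubectl apply -f - <<EOF
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppSet
metadata:
  name: helloworld
spec:
  selector:
    matchLabels:
      app: helloworld
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: helloworld
      spec:
        template:
          metadata:
            labels:
              app: helloworld
          spec:
            containers:
            - name: helloworld
              image: nginx:1.19.3   # placeholder image
  topology:
    pools:
    - name: cloud                   # one Deployment per pool,
      replicas: 1                   # each with its own replica count
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values: ["cloud"]
    - name: edge
      replicas: 2
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values: ["edge"]
EOF
```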
Prerequisites for the nodes:
- The scripts have been tested on CloudLab; the suggested profile is openyurt-demo, with one master node, one cloud worker node, and one edge worker node.
- Ensure that SSH authentication is possible from your local device to all nodes.
Components:

| Files | Purpose |
|---|---|
| main.go | Script entry point |
| conf.json | JSON file that stores the cluster's configuration |
| node.go | Node structure and related functions for Section 3.2 |
| configs | YAML templates |
Description
- Prepare the system environment on all nodes: install kubeadm, kubectl, dependencies, etc.
- On the master node, initialize the cluster using `kubeadm init`; on each worker node, join the initialized cluster.
- On top of the created cluster, initialize the OpenYurt cluster on both the master and worker nodes, then expand it from the master node to all worker nodes.
- (Optional) Deploy Knative (compatible with vHive stock-only mode).
To clean the OpenYurt cluster and restore it to its initial state:

```bash
./openyurt_deployer clean
```
To set up the cluster step by step:
- Prepare a cluster with at least two nodes.
- Change the contents of conf.json to the following format:
```json
{
    "master": "user@master",
    "workers": {
        "cloud": [
            "user@cloud-0"
        ],
        "edge": [
            "user@edge-0"
        ]
    }
}
```
Then build the deployer and deploy the Kubernetes cluster:

```bash
go build .
./openyurt_deployer k8s
```
Note: if it fails with an error like:

```
FATA[0001] Failed to connect to: username@host
```

please execute:

```bash
eval `ssh-agent -s` && ssh-add ~/.ssh/<your private key>
```

For example:

```bash
eval `ssh-agent -s` && ssh-add ~/.ssh/id_rsa
```

and try again.
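Once `./openyurt_deployer k8s` completes, a quick sanity check from the master node (assuming the deployer has configured kubectl there):

```bash
# All nodes should eventually report Ready once the CNI is up.
kubectl get nodes -o wide
```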
Deploy Knative:

```bash
./openyurt_deployer knative
```
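To confirm that Knative came up, you can check its control-plane Pods (the `knative-serving` namespace is the Knative Serving default):

```bash
kubectl get pods -n knative-serving
```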
Deploy OpenYurt:

```bash
./openyurt_deployer openyurt
```

*Referenced from the OpenYurt documentation.* This will set up node pool management.
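You can then list the node pools that were created; a minimal check (in recent OpenYurt releases, `np` is the short name for the NodePool resource):

```bash
kubectl get nodepools   # or: kubectl get np
```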
The `demo-*` subcommands of `./openyurt_deployer` deploy a helloworld function to the cloud node pool or the edge node pool.

Deploy the demo:
```bash
./openyurt_deployer demo-deploy-on-cloud
```

or:

```bash
./openyurt_deployer demo-deploy-on-edge
```

where `demo-deploy-on-cloud` deploys the service to the cloud node pool and `demo-deploy-on-edge` deploys the service to the edge node pool.
The demo code also shows information about the node pools after deployment. The service name for `demo-deploy-on-cloud` is `helloworld-cloud`, while the name for `demo-deploy-on-edge` is `helloworld-edge`. It also prints the services' URLs so you can invoke them from the master node.
You can check the node pool information at any time with:

```bash
./openyurt_deployer demo-print
```
To invoke the services created by `demo-deploy-on-cloud` or `demo-deploy-on-edge`:
First, SSH to the master node; the following commands should all be executed there:

```bash
ssh <master-user>@<master-ip>
git clone https://github.com/vhive-serverless/vSwarm.git
cd vSwarm/tools/test-client && go build .
./test-client --addr $URL:80 --name "Hello there"
```
Here `$URL` is the URL returned in the previous part when the cloud and edge services were deployed. You can also get it from `kubectl get ksvc`, discarding the leading `http://`.
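For convenience, you can also pull the host out of the Knative Service object directly; a minimal sketch, assuming the `helloworld-cloud` service from the cloud demo:

```bash
# Read the service URL from the Knative Service and strip the http:// prefix.
URL=$(kubectl get ksvc helloworld-cloud -o jsonpath='{.status.url}' | sed 's|^http://||')
./test-client --addr $URL:80 --name "Hello there"
```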
After invoking, you can use `kubectl get pods -o wide` to check whether the Pods have been auto-scaled.