In this lab we will run virtual machines with multiple network interfaces (NICs) by leveraging the Multus and OVS CNI plugins.
Having these two plugins makes it possible to define multiple networks in our Kubernetes cluster. Once the Network Attachment Definitions (NADs) are present, it's only a matter of defining the VM's NICs and linking them to each network as needed.
WARNING: this lab contains deliberate errors! Simple copy & paste will not work in this lab. This was done to help you understand what you are doing :), enjoy!
DISCLAIMER: This header is only here for our own automation purposes, so you don't need to execute it:
mdsh-lang-bash() { shell; }
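Before moving on, here is a quick optional sanity check (it assumes Multus and the OVS CNI plugin are already deployed on the lab cluster, which is the case for these instances): the NetworkAttachmentDefinition CRD should be registered, otherwise the NAD we create later would be rejected:
# Confirm the CRD installed by Multus is present
kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
# List any NADs that already exist in the cluster
kubectl get network-attachment-definitions --all-namespaces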
Since we are using the OVS CNI plugin, we need to configure dedicated Open vSwitch bridges.
The lab instances already have an OVS bridge named br1.
As a reference, if we were to provision the bridge manually, we would just need to execute the following command.
You will need to execute ovs-vsctl commands as root, so either prefix them with sudo or impersonate the root user:
sudo su -
## LAB008
ovs-vsctl add-br br1
To see the already provisioned bridge, execute this command:
ovs-vsctl show
- Output:
48244a9a-507f-457e-ae78-574aa318d98e
    Bridge "br1"
        Port "eth1"
            Interface "eth1"
                type: internal
        Port "br1"
            Interface "br1"
                type: internal
    ovs_version: "2.0.0"
In a production environment, we would do the same on each of the cluster nodes and attach a dedicated physical interface to the bridge.
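For reference, attaching the node's dedicated physical interface to the bridge is a single ovs-vsctl command. A minimal sketch, assuming the interface is named eth1 as in the output above; it would be run as root on every node:
# Attach the node's dedicated NIC to the OVS bridge (run as root on each node)
ovs-vsctl add-port br1 eth1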
A NetworkAttachmentDefinition (a Custom Resource) configures the CNI plugin through its config section. Among other settings, it specifies the bridge that OVS uses to attach the Pod's virtual interface.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-net-1
  namespace: 'multus'
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ovs",
      "bridge": "br1"
    }'
Let's now create a new NAD which will use the bridge br1:
kubectl config set-context --current --namespace=default
kubectl apply -f $HOME/student-materials/multus_nad_br1.yml
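As an optional check, you can confirm the NAD object exists and see which namespace it landed in (net-attach-def is the short name registered by the standard Multus CRD):
# List NADs in all namespaces and inspect the one we just created
kubectl get network-attachment-definitions --all-namespaces
kubectl describe net-attach-def ovs-net-1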
So far we've seen VirtualMachine resource definitions that connect to a single network: the cluster's default Pod network.
To use the newly created network, the VirtualMachine object needs to reference it and include a new interface that will use it, similar to what we would do to attach volumes to a regular Pod:
...
...
          interfaces:
          - bridge: {}
            name: default
          - bridge: {}
            macAddress: 20:37:cf:e0:ad:f1
            name: ovs-net-1
...
...
      networks:
      - name: default
        pod: {}
      - multus:
          networkName: ovs-net-1
        name: ovs-net-1
Create two VMs named fedora-multus-1 and fedora-multus-2, both with a secondary NIC pointing to the previously created bridge/network attachment definition:
kubectl apply -f $HOME/student-materials/vm_multus1.yml
kubectl apply -f $HOME/student-materials/vm_multus2.yml
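Once both VMs have started, you can optionally confirm that the VirtualMachineInstances are running and that KubeVirt reports two interfaces in their status (field names as exposed by current KubeVirt releases):
# Wait for both VMIs to reach the Running phase
kubectl get vmis
# The VMI status lists the interfaces attached to the guest
kubectl get vmi fedora-multus-1 -o jsonpath='{.status.interfaces[*].name}'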
Locate the IPs of the two VMs, open two connections to your GCP instance and connect to both VM serial consoles:
virtctl console fedora-multus-1
virtctl console fedora-multus-2
Confirm the VMs got two network interfaces, eth0 and eth1:
[root@fedora-multus-1 ~]# ip a
- Output:
...OUTPUT...
2: eth0: ...
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default qlen 1000
    link/ether 20:37:cf:e0:ad:f1 brd ff:ff:ff:ff:ff:ff
Both VMs should already have an IP address on their eth1 interface; cloud-init was used to configure it. The fedora-multus-1 VM should have the IP address 11.0.0.5 and the fedora-multus-2 VM should have 11.0.0.6. Let's use ping or SSH to verify the connectivity:
ping 11.0.0.(5|6)
Or
ssh fedora@11.0.0.(5|6)
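If either VM is missing its address on eth1 (remember the warning about deliberate errors), you can assign it manually from the serial console. This is a sketch for fedora-multus-1; the /24 prefix is an assumption, adjust it to the subnet your cloud-init configuration uses:
# Bring eth1 up and assign the expected address by hand (example for fedora-multus-1)
sudo ip link set eth1 up
sudo ip addr add 11.0.0.5/24 dev eth1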
To stop the VMs, you can execute these commands:
virtctl stop fedora-multus-1
virtctl stop fedora-multus-2
And to delete the VMs, just execute these:
kubectl delete vm fedora-multus-1 -n default
kubectl delete vm fedora-multus-2 -n default
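Optionally, confirm that nothing is left behind:
# Both resource lists should come back empty once the VMs are deleted
kubectl get vms,vmis -n default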
<< Previous: Using the KubeVirt UI to interact with VMs | README | Next: Exploring KubeVirt metrics >>