
SRIOV charts must be updated to dev-v2.8 #4654

Closed

manuelbuil opened this issue Aug 17, 2023 · 1 comment

manuelbuil commented Aug 17, 2023

Is your feature request related to a problem? Please describe.

Branching https://github.com/rancher/charts for v2.8 requires us to upgrade the sriov charts to the 103.x.x+up.y.z versioning scheme.

How to do this is described at https://confluence.suse.com/display/EN/v2.8+Branching+Strategy (steps 4 & 5).

In the process, it would be good to also bump the sriov image versions, since there have been new upstream releases in the meantime.
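
For reference, a minimal sketch of how the bumped chart could be checked against the branch once published (the charts/&lt;name&gt;/&lt;version&gt; layout is how rancher/charts ships built charts, but the exact paths here are an assumption, not taken from this issue):

# Hypothetical check against a local checkout of the dev-v2.8 branch
git clone --depth 1 --branch dev-v2.8 https://github.com/rancher/charts
grep "^version:" "charts/charts/sriov/103.0.0+up0.1.0/Chart.yaml"
# Expected: version: 103.0.0+up0.1.0
# (103.x.x = Rancher v2.8 chart line, 0.1.0 = upstream sriov chart version)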

Describe the solution you'd like

Describe alternatives you've considered

Additional context

@ShylajaDevadiga

Validated that the sriov charts are upgraded to 103.0.0+up0.1.0 on dev-v2.8.

Deployed the charts via Helm on a two-node (1 server, 1 agent) RKE2 v1.28.2+rke2r1 cluster.
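
For reference, a plausible install sequence (the repo URL pattern and repo alias are assumptions; chart names, versions, and namespaces match the helm list output below):

helm repo add rancher-dev https://raw.githubusercontent.com/rancher/charts/dev-v2.8
helm repo update
# CRDs first, then the operator chart
helm install sriov-crd rancher-dev/sriov-crd --version 103.0.0+up0.1.0 -n default
helm install sriov rancher-dev/sriov --version 103.0.0+up0.1.0 -n kube-system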

Config:

# cat /etc/rancher/rke2/config.yaml 
write-kubeconfig-mode: "0644"
token: <TOKEN>
cni: multus,canal
# sudo /var/lib/rancher/rke2/bin/crictl -r /var/run/k3s/containerd/containerd.sock images|grep hardened-node-feature-discovery 
docker.io/rancher/hardened-node-feature-discovery                    v0.14.1-build20230926                      b1905ea8c20a8       127MB
thebe:~ # 

Upgraded charts:

# helm list -A |grep sriov
sriov                           	kube-system	1       	2023-10-12 02:32:21.142932812 +0000 UTC	deployed	sriov-103.0.0+up0.1.0                       	1.2.0      
sriov-crd                       	default    	1       	2023-10-12 02:32:03.110855153 +0000 UTC	deployed	sriov-crd-103.0.0+up0.1.0                   	           
thebe:~ # 

Pod-to-pod communication between nodes:

thebe:~ # kubectl get pods -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
multitool-deployment-5d7b465bb8-hkddf   1/1     Running   0          9s    10.42.0.27   thebe      <none>           <none>
multitool-deployment-5d7b465bb8-nmct2   1/1     Running   0          9s    10.42.1.17   themisto   <none>           <none>
thebe:~ # kubectl exec -it multitool-deployment-5d7b465bb8-hkddf bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-5.1# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if233: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:e7:ff:21:d5:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.42.0.27/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::98e7:ffff:fe21:d5f2/64 scope link 
       valid_lft forever preferred_lft forever
224: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 06:14:7b:13:3e:7e brd ff:ff:ff:ff:ff:ff
    altname enp3s17f5
    inet 192.168.0.10/24 brd 192.168.0.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::414:7bff:fe13:3e7e/64 scope link 
       valid_lft forever preferred_lft forever
bash-5.1# ping 192.168.0.10
PING 192.168.0.10 (192.168.0.10) 56(84) bytes of data.
64 bytes from 192.168.0.10: icmp_seq=1 ttl=64 time=0.031 ms
^C
--- 192.168.0.10 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
bash-5.1# ping 10.42.0.27
PING 10.42.0.27 (10.42.0.27) 56(84) bytes of data.
64 bytes from 10.42.0.27: icmp_seq=1 ttl=64 time=0.026 ms
^C
--- 10.42.0.27 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
bash-5.1# ping 10.42.1.17
PING 10.42.1.17 (10.42.1.17) 56(84) bytes of data.
64 bytes from 10.42.1.17: icmp_seq=1 ttl=62 time=0.579 ms
64 bytes from 10.42.1.17: icmp_seq=2 ttl=62 time=0.442 ms
^C
--- 10.42.1.17 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.442/0.510/0.579/0.068 ms
bash-5.1# 
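
For context, the net1 interface on 192.168.0.0/24 shown above is attached by Multus from an SR-IOV network definition managed by the sriov chart's operator; a minimal sketch of the kind of SriovNetwork object involved (the name, resourceName, and IPAM range are hypothetical, chosen only to match the addresses shown):

cat <<'EOF' | kubectl apply -f -
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-net            # hypothetical name
  namespace: kube-system     # operator namespace, per the pod listing below
spec:
  resourceName: sriovnics    # assumed to match an SriovNetworkNodePolicy
  networkNamespace: default  # where the NetworkAttachmentDefinition lands
  ipam: |
    {
      "type": "host-local",
      "subnet": "192.168.0.0/24",
      "rangeStart": "192.168.0.10",
      "rangeEnd": "192.168.0.60"
    }
EOF

Pods then request the secondary interface via the k8s.v1.cni.cncf.io/networks: sriov-net annotation, which is what produces net1 alongside the canal-managed eth0.
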
thebe:~ # kubectl get nodes 
NAME       STATUS   ROLES                       AGE   VERSION
thebe      Ready    control-plane,etcd,master   22m   v1.28.2+rke2r1
themisto   Ready    <none>                      20m   v1.28.2+rke2r1
thebe:~ # kubectl get pods -A
NAMESPACE     NAME                                                   READY   STATUS    RESTARTS   AGE
default       multitool-deployment-5d7b465bb8-hkddf                  1/1     Running   0          11m
default       multitool-deployment-5d7b465bb8-nmct2                  1/1     Running   0          11m
kube-system   cloud-controller-manager-thebe                         1/1     Running   0          22m
kube-system   etcd-thebe                                             1/1     Running   0          21m
kube-system   kube-apiserver-thebe                                   1/1     Running   0          22m
kube-system   kube-controller-manager-thebe                          1/1     Running   0          22m
kube-system   kube-proxy-thebe                                       1/1     Running   0          22m
kube-system   kube-proxy-themisto                                    1/1     Running   0          20m
kube-system   kube-scheduler-thebe                                   1/1     Running   0          22m
kube-system   rke2-canal-k29zv                                       2/2     Running   0          21m
kube-system   rke2-canal-t4vzs                                       2/2     Running   0          20m
kube-system   rke2-coredns-rke2-coredns-67f86d96c-jw2nz              1/1     Running   0          14m
kube-system   rke2-coredns-rke2-coredns-67f86d96c-qfwn6              1/1     Running   0          14m
kube-system   rke2-coredns-rke2-coredns-autoscaler-d97d9cd9f-2fk5m   1/1     Running   0          14m
kube-system   rke2-ingress-nginx-controller-hjwzd                    1/1     Running   0          20m
kube-system   rke2-ingress-nginx-controller-q6kgl                    1/1     Running   0          21m
kube-system   rke2-metrics-server-c6fb46b64-99d8m                    1/1     Running   0          14m
kube-system   rke2-multus-ds-26qxc                                   1/1     Running   0          20m
kube-system   rke2-multus-ds-7gdhj                                   1/1     Running   0          21m
kube-system   rke2-snapshot-controller-59cc9cd8f4-hjsc7              1/1     Running   0          14m
kube-system   rke2-snapshot-validation-webhook-54c5989b65-m89tk      1/1     Running   0          14m
kube-system   sriov-74576778d4-6c5r4                                 1/1     Running   0          14m
kube-system   sriov-device-plugin-h6fpd                              1/1     Running   0          14m
kube-system   sriov-device-plugin-lx69j                              1/1     Running   0          14m
kube-system   sriov-network-config-daemon-sg4jj                      3/3     Running   0          16m
kube-system   sriov-network-config-daemon-v6dcz                      3/3     Running   0          16m
kube-system   sriov-rancher-nfd-gc-c9bcfb57-tgvn5                    1/1     Running   0          14m
kube-system   sriov-rancher-nfd-master-6c489fc49b-k82tw              1/1     Running   0          14m
kube-system   sriov-rancher-nfd-worker-8dd6w                         1/1     Running   0          19m
kube-system   sriov-rancher-nfd-worker-92dqw                         1/1     Running   0          19m
thebe:~ # 
