
node-ip rke2 flag isn't always passed to kubelet #4759

Closed
faelis opened this issue Sep 13, 2023 · 16 comments

@faelis
Contributor

faelis commented Sep 13, 2023

Environmental Info:
RKE2 Version:
rke2 version v1.25.9+rke2r1 (842d05e)

Node(s) CPU architecture, OS, and Version:
Linux XXX 5.4.0-156-generic #173-Ubuntu SMP Tue Jul 11 07:25:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:
For the demo, I just built a 1-master + 1-worker cluster.

Describe the bug:
When the 'node-ip' flag is passed to rke2, kubelet doesn't always get its own 'node-ip' flag.

Steps To Reproduce:
Build a 2-node cluster based on the following template:

  • 2 NICs per node (for example ens160 and ens192), each plugged into a different VLAN
  • the default route is through ens192
  • the rke2 'node-ip' flag is set to the ens192 IP

Expected behavior:
The rke2 'node-ip' flag should be passed through to the kubelet's 'node-ip' flag.

Actual behavior:
In this specific case, kubelet starts without the 'node-ip' flag and registers itself in the cluster with the ens160 IP.

Additional context / logs:
I think it's because:

  • kubelet takes the first NIC as default when no 'node-ip' is passed to it
  • rke2 takes the NIC used by the default route as default
  • rke2 passes the 'node-ip' flag to kubelet only if this IP is different from its default IP

I bypass this bug by setting 'kubelet-arg["node-ip"]' instead of using the 'node-ip' flag. Is this a good idea?
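For reference, a minimal sketch of that workaround in /etc/rancher/rke2/config.yaml (the IP is a placeholder for the ens192 address, adjust to your network):

# /etc/rancher/rke2/config.yaml
kubelet-arg:
  - "node-ip=10.x.x.9"   # forwarded verbatim to the kubelet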

@brandond
Member

kubelet takes the first NIC as default when no 'node-ip' is passed to it
rke2 takes the NIC used by the default route as default

RKE2 and the kubelet should use the same logic to select the node IP if it is not specified. We should be using the same Kubernetes utility function to determine the IP. Can you provide more information on what specifically you're seeing?

@faelis
Contributor Author

faelis commented Sep 14, 2023

I'll try to be more specific:

In our organization, the VMs have 2 NICs (it's mandatory). The first (ens160) has an IP which starts with '172.*'. The second (ens192) has an IP which starts with '10.*'.
The Kubernetes nodes must be registered with the IP which starts with '10.*'.
The default route is through the '10.*' interface.

If rke2 runs without its 'node-ip' flag, kubelet registers itself with the '172.*' IP, the wrong one.
If rke2 runs with its 'node-ip' flag set to the '10.*' IP, kubelet doesn't inherit the flag and therefore uses the wrong IP.
If rke2 runs with 'kubelet-arg["node-ip"]' set in its config file to the '10.*' IP, kubelet gets its 'node-ip' set to the correct IP.

It seems that rke2 (in fact k3s) uses this function to determine the default IP: https://github.com/kubernetes/apimachinery/blob/v0.28.2/pkg/util/net/interface.go#L366.
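For illustration, a minimal Go sketch calling the exported helper from that package (ChooseHostInterface walks the routing table and returns the IP of the interface behind the default route):

package main

import (
	"fmt"

	netutil "k8s.io/apimachinery/pkg/util/net"
)

func main() {
	// On the nodes described here this returns the ens192 '10.*' address,
	// because the default route goes through ens192.
	ip, err := netutil.ChooseHostInterface()
	if err != nil {
		panic(err)
	}
	fmt.Println("default node IP:", ip)
}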

I tried to find the function used by kubelet, but in vain. I can only see that kubelet always takes the IP from the first valid NIC.

@brandond
Member

Hmm. @manuelbuil @rbrtbnfgl do y'all have any ideas on what might be going on here? I don't think that we should be unconditionally adding a node-ip flag to the kubelet, but the behavior observed here does seem odd.

Can you show the output of ip route on one of the nodes in question?

@rbrtbnfgl
Contributor

That's a bit odd. If node-ip is configured, the kubelet should inherit it.
https://github.com/k3s-io/k3s/blob/master/pkg/daemons/agent/agent_linux.go#L127
RKE2 should execute that code from K3s when it starts Kubelet.
Could you check the logs when only node-ip is configured? You should see an entry saying Running kubelet ... with all the flags.
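For example, on a systemd-managed server node:

sudo journalctl -u rke2-server | grep 'Running kubelet'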

@faelis
Contributor Author

faelis commented Sep 15, 2023

The line https://github.com/k3s-io/k3s/blob/550dd0578f79882e1a78d8468fdbefa95faa145c/pkg/daemons/agent/agent_linux.go#L126 tests whether the rke2 node-ip flag is the same as the default IP. The kubelet inherits the node-ip flag only if the rke2 node-ip flag is different from the default IP.
In my case it's not different, because the default IP (as rke2 sees it) is the IP used by the node's default route (https://github.com/kubernetes/apimachinery/blob/v0.28.2/pkg/util/net/interface.go#L366).
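Paraphrased, the behaviour described above looks roughly like this (a sketch, not the literal K3s source; argsMap and nodeIP stand in for the real variables):

// Only forward --node-ip when it differs from the auto-detected default.
defaultIP, err := netutil.ChooseHostInterface() // IP behind the default route
if err == nil && nodeIP != defaultIP.String() {
	argsMap["node-ip"] = nodeIP
}
// When nodeIP equals the detected default, no flag is passed and the kubelet
// falls back to its own (different) selection logic, which is the bug here.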

In the logs, kubelet doesn't have the node-ip flag, and in the ps output, kubelet is started without it.

@brandond I'll post the ip route output as soon as I'm sure there is no sensitive/critical/secret information in it.

@rbrtbnfgl
Contributor

So in your case, the kubelet isn't selecting the default IP of the node.

@faelis
Contributor Author

faelis commented Sep 15, 2023

In all the cases I tested, kubelet always takes the IP from the first valid NIC. It seems kubelet doesn't check the default route (I say that just from testing; I didn't manage to find the logic in the kubelet code).

@faelis
Contributor Author

faelis commented Sep 15, 2023

@brandond

root@XXXXXX:~# ip r
default via 10.xxx.xxx.1 dev ens192 proto static
10.xxx.xxx.0/26 dev ens192 proto kernel scope link src 10.xxx.xxx.9
172.xxx.xxx.0/24 via 172.xxx.xxx.1 dev ens160
"
"
"
172.xxx.xxx.0/22 dev ens160 proto kernel scope link src 172.xxx.xxx.123
172.xxx.xxx.32/28 via 172.xxx.xxx.1 dev ens160
"
"
"
192.xxx.xxx.128/25 via 172.xxx.xxx.1 dev ens160
"
"
"
root@XXXXXX:~# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback xx:xx:xx:xx:xx:xx:xx brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
root@XXXXXX:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback xx:xx:xx:xx:xx:xx:xx brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 172.xxx.xxx.123/22 brd 172.xxx.xxx.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xxxx:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 10.xxx.xxx.9/26 brd 10.xxx.xxx.63 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xxxx:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever

According to my organisation's ISSP, I had to replace a lot of information with 'xxx'. I hope this is enough for you.

@faelis
Contributor Author

faelis commented Sep 18, 2023

New test on our infrastructure:
We changed the order of the NICs: ens160 now has an IP that starts with 10, and ens192 has an IP that starts with 172.
But kubelet still chooses the IP that starts with 172 (which is wrong for us).

We still can't confirm the logic behind this behaviour from the kubelet code. Maybe you can?

@brandond, @rbrtbnfgl: Can you confirm that overriding 'kubelet-arg["node-ip"]' is a good idea?

@rbrtbnfgl
Contributor

Yes. Passing node-ip to the kubelet is a good option when they don't match.

@faelis
Contributor Author

faelis commented Sep 18, 2023

@rbrtbnfgl So there's no impact on the k3s/rke2 side if we bypass its logic?

@rbrtbnfgl
Contributor

no

@faelis
Contributor Author

faelis commented Sep 18, 2023

Nice :)

Can we keep the issue open until rke2/k3s uses the same logic as kubelet when choosing the IP for registration?

@faelis
Contributor Author

faelis commented Sep 18, 2023

I think I found the logic behind kubelet's behaviour: https://github.com/kubernetes/kubernetes/blob/82bca6304b5365f7df2627ad2a6fb3d4539bf36f/pkg/kubelet/nodestatus/setters.go#L189

And my tests seem to confirm this behaviour.

In our infrastructure, the DNS lookup of the node name always replies with the 172 IP.
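Paraphrased, that setter does roughly the following when no node-ip is given and no cloud provider is set (a sketch, not the literal kubelet source; hostname and nodeIP stand in for the real variables):

// 1. Resolve the node's own hostname and take the first usable address;
//    the default route plays no role in this path.
// 2. Only if resolution yields nothing, fall back to a route-based lookup.
if addrs, err := net.LookupIP(hostname); err == nil && len(addrs) > 0 {
	for _, addr := range addrs {
		if !addr.IsLoopback() && addr.To4() != nil {
			nodeIP = addr // in our infra, DNS answers with the 172.* IP
			break
		}
	}
} else {
	nodeIP, _ = netutil.ChooseHostInterface() // route-based fallback
}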

@rbrtbnfgl Is it possible to mimic this logic on the rke2/k3s side?

@rbrtbnfgl
Contributor

Yes. We should probably always pass the node-ip to the kubelet if it's configured on K3s.
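Something along these lines (a hypothetical sketch of such a change, with cfg.NodeIPs and argsMap standing in for the real agent configuration and kubelet argument map; not an actual patch):

// Always forward the flag when the user configured a node IP, instead of
// forwarding it only when it differs from the auto-detected default.
if len(cfg.NodeIPs) > 0 {
	argsMap["node-ip"] = strings.Join(cfg.NodeIPs, ",")
}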

@fmoral2
Contributor

fmoral2 commented Oct 24, 2023

Validated on Version:

-$ rke2 version v1.28.3-rc2+rke2r1 (0d0d0e4879fdf95254461e3a49224f75d7b2dc3d)

Environment Details

Infrastructure
Cloud EC2 instance

Node(s) CPU architecture, OS, and Version:
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"

Cluster Configuration:
1 server node

Steps to validate the fix

  1. Install rke2 with the default config, passing node-ip as an arg
  2. Check the kubelet args in the logs to validate that the node-ip is there
  3. Validate that nodes and pods are running and OK
  4. Install rke2 without passing node-ip as an arg
  5. Check the kubelet args in the logs to validate that the default node-ip is there
  6. Validate that nodes and pods are running OK

Issue reproduction:

--- With node-ip

$ cat /etc/rancher/rke2/config.yaml 
write-kubeconfig-mode: "0644"
token: test
node-ip: 172.31.30.62


$ sudo journalctl -xeu rke2-server.service | grep 'Running kubelet'
Oct 24 19:56:22 ip-172-31-30-62 rke2[1579]: time="2023-10-24T19:56:22Z" level=info msg="Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-30-62 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"


~$ rke2 -v
rke2 version v1.25.9+rke2r1 (842d05e64bcbf78552f1db0b32700b8faea403a0)
go version go1.19.8 X:boringcrypto

$ cat /etc/rancher/rke2/config.yaml 
write-kubeconfig-mode: "0644"
token: test
node-ip: 172.31.30.62




$ k get nodes -o wide
NAME              STATUS     ROLES                       AGE   VERSION          INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-172-31-30-62   Ready   control-plane,etcd,master   44s   v1.25.9+rke2r1   172.31.30.62   <none>        Ubuntu 22.04.1 LTS   5.15.0-1019-aws   containerd://1.6.19-k3s1


~$ k get pods -A
NAMESPACE     NAME                                                    READY   STATUS      RESTARTS   AGE
kube-system   cloud-controller-manager-ip-172-31-30-62                1/1     Running     0          2m11s
kube-system   etcd-ip-172-31-30-62                                    1/1     Running     0          116s
kube-system   helm-install-rke2-canal-zfb7k                           0/1     Completed   0          115s
kube-system   helm-install-rke2-coredns-n96rk                         0/1     Completed   0          115s
kube-system   helm-install-rke2-ingress-nginx-g2nq2                   0/1     Completed   0          115s
kube-system   helm-install-rke2-metrics-server-qs8fg                  0/1     Completed   0          115s
kube-system   helm-install-rke2-snapshot-controller-crd-2cbxz         0/1     Completed   0          115s
kube-system   helm-install-rke2-snapshot-controller-qtxnk             0/1     Completed   1          115s
kube-system   helm-install-rke2-snapshot-validation-webhook-wplth     0/1     Completed   0          115s
kube-system   kube-apiserver-ip-172-31-30-62                          1/1     Running     0          2m2s
kube-system   kube-controller-manager-ip-172-31-30-62                 1/1     Running     0          2m11s
kube-system   kube-proxy-ip-172-31-30-62                              1/1     Running     0          2m8s
kube-system   kube-scheduler-ip-172-31-30-62                          1/1     Running     0          2m11s
kube-system   rke2-canal-phzz2                                        2/2     Running     0          103s
kube-system   rke2-coredns-rke2-coredns-6b9548f79f-qpz48              1/1     Running     0          105s
kube-system   rke2-coredns-rke2-coredns-autoscaler-57647bc7cf-kt88n   1/1     Running     0          105s
kube-system   rke2-ingress-nginx-controller-cgw4q                     0/1     Running     0          29s
kube-system   rke2-metrics-server-7d58bbc9c6-4sflm                    1/1     Running     0          45s
kube-system   rke2-snapshot-controller-7b5b4f946c-c94g8               1/1     Running     0          39s
kube-system   rke2-snapshot-validation-webhook-7748dbf6ff-55c5w       1/1     Running     0          46s

---- Without node-ip

~$ cat /etc/rancher/rke2/config.yaml 
write-kubeconfig-mode: "0644"
token: test


~$ rke2 -v
rke2 version v1.25.9+rke2r1 (842d05e64bcbf78552f1db0b32700b8faea403a0)
go version go1.19.8 X:boringcrypto


~$ k get nodes -o wide
NAME              STATUS   ROLES                       AGE     VERSION          INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-172-31-30-62   Ready    control-plane,etcd,master   4m15s   v1.25.9+rke2r1   172.31.30.62   <none>        Ubuntu 22.04.1 LTS   5.15.0-1019-aws   containerd://1.6.19-k3s1

~$ k get pods -A
NAMESPACE     NAME                                                    READY   STATUS      RESTARTS      AGE
kube-system   cloud-controller-manager-ip-172-31-30-62                1/1     Running     2 (70s ago)   4m19s
kube-system   etcd-ip-172-31-30-62                                    1/1     Running     0             4m4s
kube-system   helm-install-rke2-canal-zfb7k                           0/1     Completed   0             4m3s
kube-system   helm-install-rke2-coredns-n96rk                         0/1     Completed   0             4m3s
kube-system   helm-install-rke2-ingress-nginx-g2nq2                   0/1     Completed   0             4m3s
kube-system   helm-install-rke2-metrics-server-qs8fg                  0/1     Completed   0             4m3s
kube-system   helm-install-rke2-snapshot-controller-crd-2cbxz         0/1     Completed   0             4m3s
kube-system   helm-install-rke2-snapshot-controller-qtxnk             0/1     Completed   1             4m3s
kube-system   helm-install-rke2-snapshot-validation-webhook-wplth     0/1     Completed   0             4m3s
kube-system   kube-apiserver-ip-172-31-30-62                          1/1     Running     1 (55s ago)   4m10s
kube-system   kube-controller-manager-ip-172-31-30-62                 1/1     Running     3 (54s ago)   4m19s
kube-system   kube-proxy-ip-172-31-30-62                              1/1     Running     0             4m16s
kube-system   kube-scheduler-ip-172-31-30-62                          1/1     Running     1 (74s ago)   4m19s
kube-system   rke2-canal-phzz2                                        2/2     Running     0             3m51s
kube-system   rke2-coredns-rke2-coredns-6b9548f79f-qpz48              1/1     Running     0             3m53s
kube-system   rke2-coredns-rke2-coredns-autoscaler-57647bc7cf-kt88n   1/1     Running     0             3m53s
kube-system   rke2-ingress-nginx-controller-cgw4q                     1/1     Running     0             2m37s
kube-system   rke2-metrics-server-7d58bbc9c6-4sflm                    1/1     Running     0             2m53s
kube-system   rke2-snapshot-controller-7b5b4f946c-c94g8               1/1     Running     1 (73s ago)   2m47s
kube-system   rke2-snapshot-validation-webhook-7748dbf6ff-55c5w       1/1     Running     0             2m54s



~$ sudo journalctl -xeu rke2-server.service | grep 'Running kubelet'
Oct 24 19:56:22 ip-172-31-30-62 rke2[1579]: time="2023-10-24T19:56:22Z" level=info msg="Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-30-62 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"
Oct 24 19:59:51 ip-172-31-30-62 rke2[8688]: time="2023-10-24T19:59:51Z" level=info msg="Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-30-62 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"




Validation Results:

------ Without node-ip

$ cat /etc/rancher/rke2/config.yaml 
write-kubeconfig-mode: "0644"
token: test




~$ rke2 -v
rke2 version v1.28.3-rc2+rke2r1 (0d0d0e4879fdf95254461e3a49224f75d7b2dc3d)
go version go1.20.10 X:boringcrypto



sudo journalctl -xeu rke2-server.service | grep 'Running kubelet'
Oct 24 19:45:25 ip-172-31-20-152 rke2[1642]: time="2023-10-24T19:45:25Z" level=info msg="Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-20-152 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=172.31.20.152 --node-labels= --pod-infra-container-image=index.docker.io/rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"


------ --node-ip=172.31.20.152 --------

---- With node-ip --------

$ cat /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"
token: test
node-ip: 172.31.20.152


~$ rke2 -v
rke2 version v1.28.3-rc2+rke2r1 (0d0d0e4879fdf95254461e3a49224f75d7b2dc3d)
go version go1.20.10 X:boringcrypto

~$ sudo journalctl -xeu rke2-server.service | grep 'Running kubelet'
Oct 24 19:45:25 ip-172-31-20-152 rke2[1642]: time="2023-10-24T19:45:25Z" level=info msg="Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-20-152 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=172.31.20.152 --node-labels= --pod-infra-container-image=index.docker.io/rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"
Oct 24 19:49:12 ip-172-31-20-152 rke2[11194]: time="2023-10-24T19:49:12Z" level=info msg="Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-20-152 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=172.31.20.152 -



------- --node-ip=172.31.20.152 ---------

$ k get nodes
NAME               STATUS   ROLES                       AGE     VERSION
ip-172-31-20-152   Ready    control-plane,etcd,master   5m27s   v1.28.3+rke2r1


$ k get pods -A
NAMESPACE     NAME                                                   READY   STATUS      RESTARTS   AGE
kube-system   cloud-controller-manager-ip-172-31-20-152              1/1     Running     0          5m43s
kube-system   etcd-ip-172-31-20-152                                  1/1     Running     0          5m39s
kube-system   helm-install-rke2-canal-9xbhj                          0/1     Completed   0          5m29s
kube-system   helm-install-rke2-coredns-mtg9j                        0/1     Completed   0          5m29s
kube-system   helm-install-rke2-ingress-nginx-rm5qm                  0/1     Completed   0          5m29s
kube-system   helm-install-rke2-metrics-server-vk9z6                 0/1     Completed   0          5m29s
kube-system   helm-install-rke2-snapshot-controller-crd-7hvrm        0/1     Completed   0          5m29s
kube-system   helm-install-rke2-snapshot-controller-vn64b            0/1     Completed   1          5m29s
kube-system   helm-install-rke2-snapshot-validation-webhook-rpnwq    0/1     Completed   0          5m29s
kube-system   kube-apiserver-ip-172-31-20-152                        1/1     Running     0          2m9s
kube-system   kube-controller-manager-ip-172-31-20-152               1/1     Running     0          5m43s
kube-system   kube-proxy-ip-172-31-20-152                            1/1     Running     0          5m41s
kube-system   kube-scheduler-ip-172-31-20-152                        1/1     Running     0          5m43s
kube-system   rke2-canal-689hj                                       2/2     Running     0          5m18s
kube-system   rke2-coredns-rke2-coredns-6b795db654-nrgl5             1/1     Running     0          5m19s
kube-system   rke2-coredns-rke2-coredns-autoscaler-945fbd459-q68hx   1/1     Running     0          5m19s
kube-system   rke2-ingress-nginx-controller-7bkjw                    1/1     Running     0          4m16s
kube-system   rke2-metrics-server-544c8c66fc-wqkhn                   1/1     Running     0          4m33s
kube-system   rke2-snapshot-controller-59cc9cd8f4-j79rx              1/1     Running     0          4m27s
kube-system   rke2-snapshot-validation-webhook-54c5989b65-4fhjt      1/1     Running     0          4m32s



