Setting up HA on openSUSE MicroOS #4908

Closed
colaH16 opened this issue Oct 18, 2023 · 6 comments

colaH16 commented Oct 18, 2023

Environmental Info:
RKE2 Version:

rke2 version v1.27.6+rke2r1 (5cc9c77)
go version go1.20.8 X:boringcrypto

Node(s) CPU architecture, OS, and Version:

Linux se4.cola23subnet.cola123.oraclevcn.com 6.5.6-1-default #1 SMP PREEMPT_DYNAMIC Fri Oct 6 11:20:48 UTC 2023 (c97c2df) aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration:

Planned: 3 servers, 2 agents

Describe the bug:

The kubelet and kube-proxy containers do not run correctly.

Steps To Reproduce:

  • Installed RKE2:
    Installed the first server with this /etc/rancher/rke2/config.yaml:
selinux: true
advertise-address: 100.76.24.128

write-kubeconfig-mode: "0660"
token: rancher-api.cola23subnet.cola123.oraclevcn.com
tls-san:
  - rancher-api.cola23subnet.cola123.oraclevcn.com
  - rancher-k3s-se-oci.cola16.dev
  - 100.76.24.128
  - 100.97.232.52
  - 100.92.67.43

Started rke2-server on the second server with this /etc/rancher/rke2/config.yaml:

server: https://100.76.24.128:9345
token: K1069e1d22f1a374c7d6af74840c6ea3f0d33f0207fa49a6fe86acd38110b86114f::server:9bca291807e3acae43aeafeabd168680

selinux: true
# bind-address: 192.168.100.1
# se4 
advertise-address: 100.97.232.52\


write-kubeconfig-mode: "0660"
tls-san:
  - rancher-api.cola23subnet.cola123.oraclevcn.com
  - rancher-k3s-se-oci.cola16.dev
  - 100.76.24.128
  - 100.97.232.52
  - 100.92.67.43

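(Not in the original report — a quick pre-join connectivity sketch, reusing the supervisor readyz path that also appears in the logs further down; the IP is the first server's advertise-address from the config above. Any HTTP response, even 401 or 500, shows the port is reachable.)

# from the second server: does the first server's supervisor port answer?
curl -vk https://100.76.24.128:9345/v1-rke2/readyz
# on the first server: are the supervisor, apiserver, and etcd ports listening?
sudo ss -tlnp | grep -E '9345|6443|2379|2380'
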
Expected behavior:

The kubelet and kube-proxy containers run.

Actual behavior:

Those containers keep dying.

Additional context / logs:

sudo /usr/local/bin/ctr  --address /run/k3s/containerd/containerd.sock -n k8s.io event
2023-10-18 11:00:57.088908012 +0000 UTC k8s.io /snapshot/prepare {"key":"7f23432449ccb38e4694061ab93ad94751dcb46514585b7eec3740b463b3a70a","parent":"sha256:c640e628658788773e4478ae837822c9bc7db5b512442f54286a98ad50f88fd4","snapshotter":"overlayfs"}
2023-10-18 11:00:57.096623314 +0000 UTC k8s.io /containers/create {"id":"7f23432449ccb38e4694061ab93ad94751dcb46514585b7eec3740b463b3a70a","image":"docker.io/rancher/mirrored-pause:3.6","runtime":{"name":"io.containerd.runc.v2","options":{"type_url":"containerd.runc.v1.Options","value":"SAE="}}}
2023-10-18 11:00:57.896137994 +0000 UTC k8s.io /snapshot/remove {"key":"7f23432449ccb38e4694061ab93ad94751dcb46514585b7eec3740b463b3a70a","snapshotter":"overlayfs"}
2023-10-18 11:00:57.902759768 +0000 UTC k8s.io /containers/delete {"id":"7f23432449ccb38e4694061ab93ad94751dcb46514585b7eec3740b463b3a70a"}
2023-10-18 11:01:11.079705085 +0000 UTC k8s.io /snapshot/prepare {"key":"7a7b11986e3a03eedee58674cbdd0716a24d34a4017887d3d981f3417223318b","parent":"sha256:c640e628658788773e4478ae837822c9bc7db5b512442f54286a98ad50f88fd4","snapshotter":"overlayfs"}
2023-10-18 11:01:11.09014821 +0000 UTC k8s.io /containers/create {"id":"7a7b11986e3a03eedee58674cbdd0716a24d34a4017887d3d981f3417223318b","image":"docker.io/rancher/mirrored-pause:3.6","runtime":{"name":"io.containerd.runc.v2","options":{"type_url":"containerd.runc.v1.Options","value":"SAE="}}}
2023-10-18 11:01:11.812157861 +0000 UTC k8s.io /snapshot/remove {"key":"7a7b11986e3a03eedee58674cbdd0716a24d34a4017887d3d981f3417223318b","snapshotter":"overlayfs"}
2023-10-18 11:01:11.81814623 +0000 UTC k8s.io /containers/delete {"id":"7a7b11986e3a03eedee58674cbdd0716a24d34a4017887d3d981f3417223318b"}
2023-10-18 11:01:25.082599537 +0000 UTC k8s.io /snapshot/prepare {"key":"8be7e93f425e95f33c42461659c6d5b99760fbdc845d4a08abefc0e644f1b89e","parent":"sha256:c640e628658788773e4478ae837822c9bc7db5b512442f54286a98ad50f88fd4","snapshotter":"overlayfs"}
2023-10-18 11:01:25.090321919 +0000 UTC k8s.io /containers/create {"id":"8be7e93f425e95f33c42461659c6d5b99760fbdc845d4a08abefc0e644f1b89e","image":"docker.io/rancher/mirrored-pause:3.6","runtime":{"name":"io.containerd.runc.v2","options":{"type_url":"containerd.runc.v1.Options","value":"SAE="}}}
2023-10-18 11:01:25.812092009 +0000 UTC k8s.io /snapshot/remove {"key":"8be7e93f425e95f33c42461659c6d5b99760fbdc845d4a08abefc0e644f1b89e","snapshotter":"overlayfs"}
2023-10-18 11:01:25.818611742 +0000 UTC k8s.io /containers/delete {"id":"8be7e93f425e95f33c42461659c6d5b99760fbdc845d4a08abefc0e644f1b89e"}

How can I get the dead containers' logs?
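(Not part of the original report — a minimal sketch for inspecting containers that exit immediately, assuming the default RKE2 paths and the same containerd socket used above; <container-id> and <pod-id> are placeholders.)

# list every container the CRI knows about, including exited ones
sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock ps -a
# stdout/stderr of a specific dead container
sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock logs <container-id>
# sandbox state, including why it was torn down
sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock inspectp <pod-id>
# log files the node writes locally
sudo ls /var/log/pods/ /var/lib/rancher/rke2/agent/logs/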


colaH16 commented Oct 18, 2023

The three nodes also run Ceph storage with Podman.


colaH16 commented Oct 18, 2023

sudo /usr/local/bin/rke2 server

WARN[0000] not running in CIS mode
INFO[0000] Applying Pod Security Admission Configuration
INFO[0000] Starting rke2 v1.27.6+rke2r1 (5cc9c774d6bf349b9ca3bcfab2a6010554fcffa7)
INFO[0000] Managed etcd cluster not yet initialized
INFO[0000] Reconciling bootstrap data between datastore and disk
INFO[0000] start
INFO[0000] schedule, now=2023-10-18T20:46:40+09:00, entry=1, next=2023-10-19T00:00:00+09:00
INFO[0000] Running kube-apiserver --advertise-address=100.97.232.52 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
INFO[0000] Removed kube-apiserver static pod manifest
INFO[0000] Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
INFO[0000] Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
INFO[0000] Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/rke2/server/etc/cloud-config.yaml --cloud-provider=rke2 --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route,-service --kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --leader-elect-resource-name=rke2-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false
INFO[0000] Server node token is available at /var/lib/rancher/rke2/server/token
INFO[0000] To join server node to cluster: rke2 server -s https://10.123.23.4:9345 -t ${SERVER_NODE_TOKEN}
INFO[0000] Agent node token is available at /var/lib/rancher/rke2/server/agent-token
INFO[0000] To join agent node to cluster: rke2 agent -s https://10.123.23.4:9345 -t ${AGENT_NODE_TOKEN}
INFO[0000] Wrote kubeconfig /etc/rancher/rke2/rke2.yaml
INFO[0000] Run: rke2 kubectl
INFO[0000] Waiting for cri connection: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory"
INFO[0001] Password verified locally for node se4.cola23subnet.cola123.oraclevcn.com
INFO[0001] certificate CN=se4.cola23subnet.cola123.oraclevcn.com signed by CN=rke2-server-ca@1697623433: notBefore=2023-10-18 10:03:53 +0000 UTC notAfter=2024-10-17 11:46:41 +0000 UTC
INFO[0001] certificate CN=system:node:se4.cola23subnet.cola123.oraclevcn.com,O=system:nodes signed by CN=rke2-client-ca@1697623433: notBefore=2023-10-18 10:03:53 +0000 UTC notAfter=2024-10-17 11:46:41 +0000 UTC
INFO[0002] Module overlay was already loaded
INFO[0002] Module nf_conntrack was already loaded
INFO[0002] Module br_netfilter was already loaded
INFO[0002] Module iptable_nat was already loaded
INFO[0002] Module iptable_filter was already loaded
INFO[0002] Runtime image index.docker.io/rancher/rke2-runtime:v1.27.6-rke2r1 bin and charts directories already exist; skipping extract
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-metrics-server.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-controller-crd.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-controller.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rancher-vsphere-csi.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-calico-crd.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-validation-webhook.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-calico.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/harvester-cloud-provider.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/harvester-csi-driver.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rancher-vsphere-cpi.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-canal.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-cilium.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-coredns.yaml to set cluster configuration values
INFO[0002] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-multus.yaml to set cluster configuration values
WARN[0002] SELinux is enabled for rke2 but process is not running in context 'container_runtime_t', rke2-selinux policy may need to be applied
INFO[0002] Logging containerd to /var/lib/rancher/rke2/agent/containerd/containerd.log
INFO[0002] Running containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
INFO[0003] containerd is now running
INFO[0003] Pulling images from /var/lib/rancher/rke2/agent/images/cloud-controller-manager-image.txt
INFO[0003] Imported images from /var/lib/rancher/rke2/agent/images/cloud-controller-manager-image.txt in 10.357724ms
INFO[0003] Pulling images from /var/lib/rancher/rke2/agent/images/etcd-image.txt
INFO[0003] Imported images from /var/lib/rancher/rke2/agent/images/etcd-image.txt in 9.598878ms
INFO[0003] Pulling images from /var/lib/rancher/rke2/agent/images/kube-apiserver-image.txt
INFO[0003] Imported images from /var/lib/rancher/rke2/agent/images/kube-apiserver-image.txt in 9.920201ms
INFO[0003] Pulling images from /var/lib/rancher/rke2/agent/images/kube-controller-manager-image.txt
INFO[0003] Imported images from /var/lib/rancher/rke2/agent/images/kube-controller-manager-image.txt in 9.924881ms
INFO[0003] Pulling images from /var/lib/rancher/rke2/agent/images/kube-scheduler-image.txt
INFO[0003] Imported images from /var/lib/rancher/rke2/agent/images/kube-scheduler-image.txt in 9.931041ms
INFO[0003] Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=se4.cola23subnet.cola123.oraclevcn.com --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --kubelet-cgroups=/rke2 --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels= --pod-infra-container-image=index.docker.io/rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
INFO[0003] Connecting to proxy                           url="wss://127.0.0.1:9345/v1-rke2/connect"
INFO[0003] Handling backend connection request [se4.cola23subnet.cola123.oraclevcn.com]
INFO[0003] Starting etcd to join cluster with members [se3.cola23subnet.cola123.oraclevcn.com-030161a9=https://10.123.23.3:2380 se4.cola23subnet.cola123.oraclevcn.com-09b03fe4=https://10.123.23.4:2380]
INFO[0004] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0008] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0013] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
{"level":"warn","ts":"2023-10-18T20:46:55.346256+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a3c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2023-10-18T20:46:55.347156+0900","logger":"etcd-client","caller":"[email protected]/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
INFO[0018] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0020] Pod for etcd not synced (no current running pod found), retrying
INFO[0023] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0028] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
{"level":"warn","ts":"2023-10-18T20:47:10.346587+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a3c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
INFO[0030] Failed to test data store connection: context deadline exceeded
{"level":"warn","ts":"2023-10-18T20:47:10.347519+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a3c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2023-10-18T20:47:10.347685+0900","logger":"etcd-client","caller":"[email protected]/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
INFO[0030] Waiting for etcd server to become available
INFO[0030] Waiting for API server to become available
{"level":"warn","ts":"2023-10-18T20:47:13.559516+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a3c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
ERRO[0033] Failed to check local etcd status for learner management: context deadline exceeded
INFO[0033] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0038] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0040] Pod for etcd not synced (no current running pod found), retrying
INFO[0043] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
{"level":"warn","ts":"2023-10-18T20:47:25.37065+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a3c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2023-10-18T20:47:25.380394+0900","logger":"etcd-client","caller":"[email protected]/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2023-10-18T20:47:28.562879+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a3c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
ERRO[0048] Failed to check local etcd status for learner management: context deadline exceeded
INFO[0048] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0054] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0058] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
{"level":"warn","ts":"2023-10-18T20:47:40.380618+0900","logger":"etcd-client","caller":"[email protected]/re

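(Not from the original thread — a hedged follow-up to the "rke2-selinux policy may need to be applied" warning above. The package name assumes the RPM-based policy is available for this distro, and transactional-update is how MicroOS layers packages; it requires a reboot.)

# is the SELinux policy package installed at all?
rpm -q rke2-selinux
# which SELinux label does the rke2 binary carry?
ls -Z /usr/local/bin/rke2
# install the policy if it is missing and a repository provides it
sudo transactional-update pkg install rke2-selinux
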

colaH16 commented Oct 18, 2023

sudo setenforce 0
sudo /opt/rke2/bin/rke2 server

WARN[0000] not running in CIS mode
INFO[0000] Applying Pod Security Admission Configuration
INFO[0000] Starting rke2 v1.27.6+rke2r1 (5cc9c774d6bf349b9ca3bcfab2a6010554fcffa7)
INFO[0000] Managed etcd cluster bootstrap already complete and initialized
INFO[0000] Reconciling bootstrap data between datastore and disk
INFO[0000] Successfully reconciled with datastore
INFO[0000] Starting etcd for existing cluster member
INFO[0000] start
INFO[0000] schedule, now=2023-10-18T22:53:35+09:00, entry=1, next=2023-10-19T00:00:00+09:00
INFO[0000] Running kube-apiserver --advertise-address=100.97.232.52 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
INFO[0000] Removed kube-apiserver static pod manifest
INFO[0000] Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
INFO[0000] Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
INFO[0000] Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/rke2/server/etc/cloud-config.yaml --cloud-provider=rke2 --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route,-service --kubeconfig=/var/lib/rancher/rke2/server/cred/cloud-controller.kubeconfig --leader-elect-resource-name=rke2-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false
INFO[0000] Server node token is available at /var/lib/rancher/rke2/server/token
INFO[0000] Waiting for cri connection: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory"
INFO[0000] To join server node to cluster: rke2 server -s https://10.123.23.4:9345 -t ${SERVER_NODE_TOKEN}
INFO[0000] Agent node token is available at /var/lib/rancher/rke2/server/agent-token
INFO[0000] To join agent node to cluster: rke2 agent -s https://10.123.23.4:9345 -t ${AGENT_NODE_TOKEN}
INFO[0000] Wrote kubeconfig /etc/rancher/rke2/rke2.yaml
INFO[0000] Run: rke2 kubectl
INFO[0001] Password verified locally for node se4.cola23subnet.cola123.oraclevcn.com
INFO[0001] certificate CN=se4.cola23subnet.cola123.oraclevcn.com signed by CN=rke2-server-ca@1697623433: notBefore=2023-10-18 10:03:53 +0000 UTC notAfter=2024-10-17 13:53:37 +0000 UTC
INFO[0002] certificate CN=system:node:se4.cola23subnet.cola123.oraclevcn.com,O=system:nodes signed by CN=rke2-client-ca@1697623433: notBefore=2023-10-18 10:03:53 +0000 UTC notAfter=2024-10-17 13:53:38 +0000 UTC
INFO[0003] Module overlay was already loaded
INFO[0003] Module nf_conntrack was already loaded
INFO[0003] Module br_netfilter was already loaded
INFO[0003] Module iptable_nat was already loaded
INFO[0003] Module iptable_filter was already loaded
INFO[0003] Runtime image index.docker.io/rancher/rke2-runtime:v1.27.6-rke2r1 bin and charts directories already exist; skipping extract
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/harvester-cloud-provider.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-calico-crd.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-calico.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/harvester-csi-driver.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rancher-vsphere-cpi.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rancher-vsphere-csi.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-controller-crd.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-canal.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-coredns.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-metrics-server.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-validation-webhook.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-cilium.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-multus.yaml to set cluster configuration values
INFO[0003] Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-controller.yaml to set cluster configuration values
WARN[0003] SELinux is enabled on this host, but rke2 has not been started with --selinux - containerd SELinux support is disabled
INFO[0003] Logging containerd to /var/lib/rancher/rke2/agent/containerd/containerd.log
INFO[0003] Running containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
INFO[0004] containerd is now running
INFO[0004] Pulling images from /var/lib/rancher/rke2/agent/images/cloud-controller-manager-image.txt
INFO[0004] Imported images from /var/lib/rancher/rke2/agent/images/cloud-controller-manager-image.txt in 10.659527ms
INFO[0004] Pulling images from /var/lib/rancher/rke2/agent/images/etcd-image.txt
INFO[0004] Imported images from /var/lib/rancher/rke2/agent/images/etcd-image.txt in 9.321635ms
INFO[0004] Pulling images from /var/lib/rancher/rke2/agent/images/kube-apiserver-image.txt
INFO[0004] Imported images from /var/lib/rancher/rke2/agent/images/kube-apiserver-image.txt in 9.88512ms
INFO[0004] Pulling images from /var/lib/rancher/rke2/agent/images/kube-controller-manager-image.txt
INFO[0004] Imported images from /var/lib/rancher/rke2/agent/images/kube-controller-manager-image.txt in 9.930321ms
INFO[0004] Pulling images from /var/lib/rancher/rke2/agent/images/kube-scheduler-image.txt
INFO[0004] Imported images from /var/lib/rancher/rke2/agent/images/kube-scheduler-image.txt in 9.9342ms
INFO[0004] Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=se4.cola23subnet.cola123.oraclevcn.com --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels= --pod-infra-container-image=index.docker.io/rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
INFO[0004] Connecting to proxy                           url="wss://127.0.0.1:9345/v1-rke2/connect"
INFO[0004] Handling backend connection request [se4.cola23subnet.cola123.oraclevcn.com]
INFO[0005] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0009] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0014] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
{"level":"warn","ts":"2023-10-18T22:53:50.900468+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a40000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
{"level":"info","ts":"2023-10-18T22:53:50.90076+0900","logger":"etcd-client","caller":"[email protected]/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
INFO[0019] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0021] Pod for etcd not synced (waiting for termination of old pod), retrying
INFO[0024] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0029] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
{"level":"warn","ts":"2023-10-18T22:54:05.900918+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a40000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: authentication handshake failed: context deadline exceeded\""}
{"level":"info","ts":"2023-10-18T22:54:05.902474+0900","logger":"etcd-client","caller":"[email protected]/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2023-10-18T22:54:05.920088+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a40000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: authentication handshake failed: context deadline exceeded\""}
INFO[0030] Failed to test data store connection: context deadline exceeded
INFO[0030] Waiting for etcd server to become available
INFO[0030] Waiting for API server to become available
{"level":"warn","ts":"2023-10-18T22:54:10.153726+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a40000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
ERRO[0034] Failed to check local etcd status for learner management: context deadline exceeded
INFO[0034] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0039] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0040] Pod for etcd is synced
INFO[0044] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
{"level":"warn","ts":"2023-10-18T22:54:20.903005+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a40000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: authentication handshake failed: context deadline exceeded\""}
{"level":"info","ts":"2023-10-18T22:54:20.904009+0900","logger":"etcd-client","caller":"[email protected]/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2023-10-18T22:54:25.154134+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a40000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
ERRO[0049] Failed to check local etcd status for learner management: context deadline exceeded
INFO[0049] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0054] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
INFO[0059] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error
{"level":"warn","ts":"2023-10-18T22:54:35.904833+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a40000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: authentication handshake failed: context deadline exceeded\""}
{"level":"info","ts":"2023-10-18T22:54:35.905109+0900","logger":"etcd-client","caller":"[email protected]/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
INFO[0060] Waiting for etcd server to become available
INFO[0060] Waiting for API server to become available
{"level":"warn","ts":"2023-10-18T22:54:40.155095+0900","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000a40000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
ERRO[0064] Failed to check local etcd status for learner management: context deadline exceeded
INFO[0064] Removed kube-proxy static pod manifest
INFO[0064] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error

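(Not from the original thread — a hedged way to probe etcd directly, reusing the certificate paths from the kube-apiserver arguments above; relevant because the log keeps reporting "transport: authentication handshake failed" against 127.0.0.1:2379.)

# the etcd static pod manifest should exist under the kubelet's pod-manifest-path
sudo ls /var/lib/rancher/rke2/agent/pod-manifests/
# query etcd's health endpoint with the same client certificates the apiserver uses
sudo curl --cacert /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt \
  --cert /var/lib/rancher/rke2/server/tls/etcd/client.crt \
  --key /var/lib/rancher/rke2/server/tls/etcd/client.key \
  https://127.0.0.1:2379/health
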

colaH16 commented Oct 18, 2023

containerd and kubelet logs:
sudo cat /var/lib/rancher/rke2/agent/containerd/containerd.log

time="2023-10-18T23:38:02.172828846+09:00" level=info msg="starting containerd" revision=383ce4e834e4d2ae5e1869475379e70618bdcc33 version=v1.7.3-k3s1
time="2023-10-18T23:38:02.270279559+09:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.297915664+09:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /usr/lib/modules/6.5.6-1-default\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.297948584+09:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.298763230+09:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2023-10-18T23:38:02.298799711+09:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.298832071+09:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.299080033+09:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.fuse-overlayfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.299149914+09:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.stargz\"..." type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.301779455+09:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.301898976+09:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2023-10-18T23:38:02.301913976+09:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.302923704+09:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2023-10-18T23:38:02.304851440+09:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2023-10-18T23:38:02.304970201+09:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2023-10-18T23:38:02.304988041+09:00" level=info msg="metadata content store policy set" policy=shared
time="2023-10-18T23:38:02.305619286+09:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2023-10-18T23:38:02.305652326+09:00" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
time="2023-10-18T23:38:02.305731407+09:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2023-10-18T23:38:02.305769127+09:00" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
time="2023-10-18T23:38:02.305877768+09:00" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
time="2023-10-18T23:38:02.305906689+09:00" level=info msg="NRI interface is disabled by configuration."
time="2023-10-18T23:38:02.305924809+09:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2023-10-18T23:38:02.552361253+09:00" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
time="2023-10-18T23:38:02.552401094+09:00" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
time="2023-10-18T23:38:02.552423134+09:00" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
time="2023-10-18T23:38:02.552438734+09:00" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
time="2023-10-18T23:38:02.552454374+09:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2023-10-18T23:38:02.552469974+09:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2023-10-18T23:38:02.552482774+09:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2023-10-18T23:38:02.552495134+09:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2023-10-18T23:38:02.552509054+09:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2023-10-18T23:38:02.552521615+09:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2023-10-18T23:38:02.552532695+09:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2023-10-18T23:38:02.552545135+09:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2023-10-18T23:38:02.552605735+09:00" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2023-10-18T23:38:02.552878257+09:00" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2023-10-18T23:38:02.552913258+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.552927338+09:00" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
time="2023-10-18T23:38:02.552952498+09:00" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2023-10-18T23:38:02.553002218+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553014379+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553026219+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553037379+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553055699+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553067659+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553079019+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553089699+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553104779+09:00" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2023-10-18T23:38:02.553148300+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553161340+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553173100+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553184620+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553196260+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553210780+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553222100+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553232340+09:00" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2023-10-18T23:38:02.553491982+09:00" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:index.docker.io/rancher/mirrored-pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:true EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:1m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/rancher/rke2/agent/containerd ContainerdEndpoint:/run/k3s/containerd/containerd.sock RootDir:/var/lib/rancher/rke2/agent/containerd/io.containerd.grpc.v1.cri StateDir:/run/k3s/containerd/io.containerd.grpc.v1.cri}"
time="2023-10-18T23:38:02.553544663+09:00" level=info msg="Connect containerd service"
time="2023-10-18T23:38:02.553572823+09:00" level=info msg="using legacy CRI server"
time="2023-10-18T23:38:02.553578983+09:00" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
time="2023-10-18T23:38:02.553625623+09:00" level=info msg="Get image filesystem path \"/var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2023-10-18T23:38:02.554822273+09:00" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
time="2023-10-18T23:38:02.554884634+09:00" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
time="2023-10-18T23:38:02.554912634+09:00" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
time="2023-10-18T23:38:02.554923954+09:00" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
time="2023-10-18T23:38:02.554937554+09:00" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
time="2023-10-18T23:38:02.555271037+09:00" level=info msg=serving... address=/run/k3s/containerd/containerd.sock.ttrpc
time="2023-10-18T23:38:02.555325437+09:00" level=info msg=serving... address=/run/k3s/containerd/containerd.sock
time="2023-10-18T23:38:02.555542719+09:00" level=info msg="Start subscribing containerd event"
time="2023-10-18T23:38:02.555577919+09:00" level=info msg="Start recovering state"
time="2023-10-18T23:38:02.614281717+09:00" level=info msg="Start event monitor"
time="2023-10-18T23:38:02.614300357+09:00" level=info msg="Start snapshots syncer"
time="2023-10-18T23:38:02.614308477+09:00" level=info msg="Start cni network conf syncer for default"
time="2023-10-18T23:38:02.614316157+09:00" level=info msg="Start streaming server"
time="2023-10-18T23:38:02.614332717+09:00" level=info msg="containerd successfully booted in 0.442164s"

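(Not from the original thread — the "no network config found in /etc/cni/net.d" error above is usually transient until the CNI pod runs on this node and writes its config; a hedged check:)

# empty until the CNI (Canal by default) has deployed on this node
sudo ls -la /etc/cni/net.d/
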
cat ./agent/logs/kubelet.log

Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --sync-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --eviction-hard has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --eviction-minimum-reclaim has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --read-only-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --serialize-image-pulls has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I1018 23:44:00.773327   43725 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
I1018 23:44:00.793102   43725 server.go:415] "Kubelet version" kubeletVersion="v1.27.6+rke2r1"
I1018 23:44:00.793123   43725 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
W1018 23:44:00.804152   43725 machine.go:65] Cannot read vendor id correctly, set empty.
I1018 23:44:00.804645   43725 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I1018 23:44:00.807798   43725 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/rke2/agent/client-ca.crt"
I1018 23:44:00.808140   43725 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I1018 23:44:00.808199   43725 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
I1018 23:44:00.808226   43725 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I1018 23:44:00.808236   43725 container_manager_linux.go:301] "Creating device plugin manager"
I1018 23:44:00.808270   43725 state_mem.go:36] "Initialized new in-memory state store"
I1018 23:44:00.837679   43725 kubelet.go:405] "Attempting to sync node with API server"
I1018 23:44:00.837699   43725 kubelet.go:298] "Adding static pod path" path="/var/lib/rancher/rke2/agent/pod-manifests"
I1018 23:44:00.837720   43725 kubelet.go:309] "Adding apiserver pod source"
I1018 23:44:00.837742   43725 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I1018 23:44:00.846466   43725 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.3-k3s1" apiVersion="v1"
I1018 23:44:00.847050   43725 server.go:1168] "Started kubelet"
W1018 23:44:00.847143   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:00.848141   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:00.848211   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:00.848237   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
I1018 23:44:00.849189   43725 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
I1018 23:44:00.849786   43725 server.go:461] "Adding debug handlers to kubelet server"
I1018 23:44:00.850533   43725 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
I1018 23:44:00.855696   43725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I1018 23:44:00.888705   43725 volume_manager.go:284] "Starting Kubelet Volume Manager"
I1018 23:44:00.889222   43725 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
E1018 23:44:00.899075   43725 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"se4.cola23subnet.cola123.oraclevcn.com.178f3acdf818865f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"se4.cola23subnet.cola123.oraclevcn.com", UID:"se4.cola23subnet.cola123.oraclevcn.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"se4.cola23subnet.cola123.oraclevcn.com"}, FirstTimestamp:time.Date(2023, time.October, 18, 23, 44, 0, 847029855, time.Local), LastTimestamp:time.Date(2023, time.October, 18, 23, 44, 0, 847029855, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"se4.cola23subnet.cola123.oraclevcn.com"}': 'Post "https://127.0.0.1:6443/api/v1/namespaces/default/events": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
W1018 23:44:00.911972   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:00.912013   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:00.912116   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="200ms"
E1018 23:44:00.912760   43725 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E1018 23:44:00.912777   43725 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I1018 23:44:00.974048   43725 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
I1018 23:44:00.993435   43725 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
I1018 23:44:00.993451   43725 status_manager.go:207] "Starting to sync pod status with apiserver"
I1018 23:44:00.993467   43725 kubelet.go:2257] "Starting kubelet main sync loop"
E1018 23:44:00.993608   43725 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
W1018 23:44:01.001140   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:01.001193   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
I1018 23:44:01.012586   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:01.013618   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.061154   43725 cpu_manager.go:214] "Starting CPU manager" policy="none"
I1018 23:44:01.061168   43725 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
I1018 23:44:01.061185   43725 state_mem.go:36] "Initialized new in-memory state store"
I1018 23:44:01.061971   43725 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I1018 23:44:01.061986   43725 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
I1018 23:44:01.061992   43725 policy_none.go:49] "None policy: Start"
I1018 23:44:01.063815   43725 memory_manager.go:169] "Starting memorymanager" policy="None"
I1018 23:44:01.063841   43725 state_mem.go:35] "Initializing new in-memory state store"
I1018 23:44:01.064151   43725 state_mem.go:75] "Updated machine memory state"
I1018 23:44:01.067586   43725 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
I1018 23:44:01.068790   43725 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
E1018 23:44:01.069554   43725 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"se4.cola23subnet.cola123.oraclevcn.com\" not found"
I1018 23:44:01.094297   43725 topology_manager.go:212] "Topology Admit Handler" podUID=be6e862c49eca6444ee8da5f91b3ca5b podNamespace="kube-system" podName="etcd-se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:01.112663   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="400ms"
I1018 23:44:01.192567   43725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file4\" (UniqueName: \"kubernetes.io/host-path/be6e862c49eca6444ee8da5f91b3ca5b-file4\") pod \"etcd-se4.cola23subnet.cola123.oraclevcn.com\" (UID: \"be6e862c49eca6444ee8da5f91b3ca5b\") " pod="kube-system/etcd-se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.192608   43725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file5\" (UniqueName: \"kubernetes.io/host-path/be6e862c49eca6444ee8da5f91b3ca5b-file5\") pod \"etcd-se4.cola23subnet.cola123.oraclevcn.com\" (UID: \"be6e862c49eca6444ee8da5f91b3ca5b\") " pod="kube-system/etcd-se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.192626   43725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file6\" (UniqueName: \"kubernetes.io/host-path/be6e862c49eca6444ee8da5f91b3ca5b-file6\") pod \"etcd-se4.cola23subnet.cola123.oraclevcn.com\" (UID: \"be6e862c49eca6444ee8da5f91b3ca5b\") " pod="kube-system/etcd-se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.192646   43725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dir0\" (UniqueName: \"kubernetes.io/host-path/be6e862c49eca6444ee8da5f91b3ca5b-dir0\") pod \"etcd-se4.cola23subnet.cola123.oraclevcn.com\" (UID: \"be6e862c49eca6444ee8da5f91b3ca5b\") " pod="kube-system/etcd-se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.192672   43725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file0\" (UniqueName: \"kubernetes.io/host-path/be6e862c49eca6444ee8da5f91b3ca5b-file0\") pod \"etcd-se4.cola23subnet.cola123.oraclevcn.com\" (UID: \"be6e862c49eca6444ee8da5f91b3ca5b\") " pod="kube-system/etcd-se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.192696   43725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file1\" (UniqueName: \"kubernetes.io/host-path/be6e862c49eca6444ee8da5f91b3ca5b-file1\") pod \"etcd-se4.cola23subnet.cola123.oraclevcn.com\" (UID: \"be6e862c49eca6444ee8da5f91b3ca5b\") " pod="kube-system/etcd-se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.192719   43725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file2\" (UniqueName: \"kubernetes.io/host-path/be6e862c49eca6444ee8da5f91b3ca5b-file2\") pod \"etcd-se4.cola23subnet.cola123.oraclevcn.com\" (UID: \"be6e862c49eca6444ee8da5f91b3ca5b\") " pod="kube-system/etcd-se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.192738   43725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file3\" (UniqueName: \"kubernetes.io/host-path/be6e862c49eca6444ee8da5f91b3ca5b-file3\") pod \"etcd-se4.cola23subnet.cola123.oraclevcn.com\" (UID: \"be6e862c49eca6444ee8da5f91b3ca5b\") " pod="kube-system/etcd-se4.cola23subnet.cola123.oraclevcn.com"
I1018 23:44:01.216506   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:01.216766   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:01.513690   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="800ms"
I1018 23:44:01.619894   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:01.620642   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"
W1018 23:44:01.749463   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:01.749527   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:01.800944   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:01.800991   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:02.140565   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:02.140621   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:02.314772   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="1.6s"
I1018 23:44:02.423877   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:02.424201   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"
W1018 23:44:02.451634   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:02.451660   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:03.535643   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:03.535676   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:03.915841   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="3.2s"
I1018 23:44:04.027765   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:04.028143   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"
W1018 23:44:04.441714   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:04.441748   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:04.631540   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:04.631572   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:05.302121   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:05.302152   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:07.116786   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="6.4s"
I1018 23:44:07.231135   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:07.231482   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:08.554169   43725 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"se4.cola23subnet.cola123.oraclevcn.com.178f3acdf818865f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"se4.cola23subnet.cola123.oraclevcn.com", UID:"se4.cola23subnet.cola123.oraclevcn.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"se4.cola23subnet.cola123.oraclevcn.com"}, FirstTimestamp:time.Date(2023, time.October, 18, 23, 44, 0, 847029855, time.Local), LastTimestamp:time.Date(2023, time.October, 18, 23, 44, 0, 847029855, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"se4.cola23subnet.cola123.oraclevcn.com"}': 'Post "https://127.0.0.1:6443/api/v1/namespaces/default/events": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
W1018 23:44:09.596561   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:09.596604   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:09.856450   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:09.856483   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:10.528362   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:10.528411   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:11.071288   43725 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"se4.cola23subnet.cola123.oraclevcn.com\" not found"
W1018 23:44:11.420370   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:11.420419   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:13.518306   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="7s"
I1018 23:44:13.636656   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:13.637206   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:18.556874   43725 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"se4.cola23subnet.cola123.oraclevcn.com.178f3acdf818865f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"se4.cola23subnet.cola123.oraclevcn.com", UID:"se4.cola23subnet.cola123.oraclevcn.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"se4.cola23subnet.cola123.oraclevcn.com"}, FirstTimestamp:time.Date(2023, time.October, 18, 23, 44, 0, 847029855, time.Local), LastTimestamp:time.Date(2023, time.October, 18, 23, 44, 0, 847029855, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"se4.cola23subnet.cola123.oraclevcn.com"}': 'Post "https://127.0.0.1:6443/api/v1/namespaces/default/events": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
W1018 23:44:19.749483   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:19.749512   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W1018 23:44:19.929345   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:19.929376   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:20.519108   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="7s"
I1018 23:44:20.640509   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:20.640789   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"
W1018 23:44:20.982152   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:20.982238   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dse4.cola23subnet.cola123.oraclevcn.com&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:21.071957   43725 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"se4.cola23subnet.cola123.oraclevcn.com\" not found"
W1018 23:44:21.532255   43725 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:21.532307   43725 reflector.go:148] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E1018 23:44:27.520212   43725 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/se4.cola23subnet.cola123.oraclevcn.com?timeout=10s\": dial tcp 127.0.0.1:6443: connect: connection refused" interval="7s"
I1018 23:44:27.645560   43725 kubelet_node_status.go:70] "Attempting to register node" node="se4.cola23subnet.cola123.oraclevcn.com"
E1018 23:44:27.645909   43725 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://127.0.0.1:6443/api/v1/nodes\": dial tcp 127.0.0.1:6443: connect: connection refused" node="se4.cola23subnet.cola123.oraclevcn.com"

colaH16 commented Oct 19, 2023

I think it's a network issue.
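
Every failing call in the kubelet log above is the same "dial tcp 127.0.0.1:6443: connect: connection refused", i.e. the local kube-apiserver never comes up, so the kubelet cannot register the node. A quick way to narrow this down is to check whether anything is listening on the control-plane ports and whether the first server is reachable over the VPN. This is only a minimal sketch, assuming the default RKE2 ports (6443 for the apiserver, 9345 for the supervisor) and the standard rke2-server systemd unit; adjust the peer address for your environment.

# on the failing (2nd) server: is anything listening on the control-plane ports?
ss -tlnp | grep -E ':6443|:9345'

# can this node reach the first server's supervisor port across the VPN?
# any TLS response (even 401/404) proves reachability; a timeout or refusal points at the network/VPN
curl -vk https://100.76.24.128:9345/

# service state and recent errors
systemctl status rke2-server
journalctl -u rke2-server --since "15 minutes ago" --no-pager | tail -n 50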

colaH16 commented Oct 20, 2023

The VPN seems to be causing network issues.
Sorry about that.
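
If the VPN addresses (the 100.x range) are meant to carry cluster traffic, explicitly pinning the node address can keep RKE2 from picking the wrong interface. The snippet below is only a sketch under that assumption; node-ip is a standard RKE2/k3s config key, and the address shown is illustrative.

# append node-ip to the RKE2 config so the VPN address is used for node traffic (illustrative address)
echo 'node-ip: 100.97.232.52' | sudo tee -a /etc/rancher/rke2/config.yaml
# restart the service to apply the change
sudo systemctl restart rke2-server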

colaH16 closed this as completed Oct 20, 2023