
Core DNS missing NodeHosts key in Configmap #9274

Closed
safderali5 opened this issue Jan 19, 2024 · 35 comments

@safderali5

When deploying the latest k3s version v1.29.0+k3s1, the coredns pod is stuck in the ContainerCreating stage because it cannot find the NodeHosts key in the coredns ConfigMap. Looking at the manifest definitions, the problem appears to be real.

Searching the k3s documentation turns up no mention of how to specify these NodeHosts entries without manually editing the ConfigMap after deployment.

Error:
Warning FailedMount 85s (x19 over 23m) kubelet MountVolume.SetUp failed for volume "config-volume" : configmap references non-existent config key: NodeHosts

Manifests Link:
https://github.com/k3s-io/k3s/blob/master/manifests/coredns.yaml
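For context, here is an abridged sketch of the relevant parts of that manifest as I read it (field names and paths may differ slightly from the shipped file). The Corefile references the NodeHosts file, and the deployment mounts both keys from the same ConfigMap, so kubelet refuses to set up the volume if either key is absent, which matches the FailedMount event above:

hosts /etc/coredns/NodeHosts {
  ttl 60
  reload 15s
  fallthrough
}

volumes:
  - name: config-volume
    configMap:
      name: coredns
      items:
      - key: Corefile
        path: Corefile
      - key: NodeHosts
        path: NodeHosts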

@brandond
Member

brandond commented Jan 19, 2024

The NodeHosts key is maintained by a controller that runs in the k3s server process. It ensures that the key exists, and contains a hosts file entry for every node in the cluster.

Confirm that you see nodes listed in kubectl get node -o wide, and that there are no errors in the k3s journald log that might indicate why it's failing to update the configmap.

If you don't find anything useful, please attach the output of kubectl get node -o yaml and the complete k3s journald log from your servers.
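As a quick check (assuming the default kube-system namespace for the coredns ConfigMap), you can inspect the key directly:

kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'

On a healthy cluster this should print one hosts-file line per node, something like:

10.54.65.165 vm-165.tests.lab.net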

github-project-automation bot moved this from New to Done Issue in K3s Development Jan 19, 2024
@safderali5
Author

safderali5 commented Jan 23, 2024

If k3s is uninstalled and reinstalled a couple of times, the problem starts to appear and persists after that. In the journal logs I can see:
Jan 23 15:00:59 vm-165 k3s[699204]: time="2024-01-23T15:00:59+02:00" level=warning msg="Unable to fetch coredns config map: configmaps "coredns" not found"
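As a stopgap (only once the ConfigMap exists but lacks the key, and bearing in mind the controller is supposed to maintain it), the missing key can be added by hand with a merge patch, substituting your own node IP and hostname:

kubectl -n kube-system patch configmap coredns --type merge -p '{"data":{"NodeHosts":"10.54.65.165 vm-165.tests.lab.net\n"}}'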

K3s nodes YAML:

apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: 10.54.65.165
      etcd.k3s.cattle.io/node-address: 10.54.65.165
      etcd.k3s.cattle.io/node-name: vm-165.tests.lab.net-03b3f407
      flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"3e:40:74:1b:49:7e"}'
      flannel.alpha.coreos.com/backend-type: vxlan
      flannel.alpha.coreos.com/kube-subnet-manager: "true"
      flannel.alpha.coreos.com/public-ip: 10.54.65.165
      k3s.io/encryption-config-hash: start-f34e200ca9e32f01069b791f6654af685af5487c329e8935fc45eda80502e68d
      k3s.io/external-ip: 10.54.65.165
      k3s.io/hostname: vm-165.tests.lab.net
      k3s.io/internal-ip: 10.54.65.165
      k3s.io/node-args: '["server","--cluster-init","true","--disable","traefik","--default-local-storage-path","/var/lib/rancher/storage","--disable-helm-controller","true","--https-listen-port","6443","--kube-apiserver-arg","audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log","--kube-apiserver-arg","audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml","--kube-apiserver-arg","audit-log-maxage=30","--kube-apiserver-arg","audit-log-maxbackup=10","--kube-apiserver-arg","audit-log-maxsize=100","--kube-apiserver-arg","request-timeout=300s","--kube-apiserver-arg","service-account-lookup=true","--kube-apiserver-arg","anonymous-auth=false","--kube-cloud-controller-manager-arg","webhook-secure-port=0","--kube-controller-manager-arg","terminated-pod-gc-threshold=12500","--kube-controller-manager-arg","use-service-account-credentials=true","--kubelet-arg","container-log-max-files=4","--kubelet-arg","container-log-max-size=50Mi","--kubelet-arg","streaming-connection-idle-timeout=5m","--kubelet-arg","make-iptables-util-chains=true","--node-external-ip","10.54.65.165","--node-name","vm-165.tests.lab.net","--prefer-bundled-bin","true","--protect-kernel-defaults","true","--secrets-encryption","true","--tls-san","10.54.65.165","--write-kubeconfig-mode","0600"]'
      k3s.io/node-config-hash: A34B3T7OUQMIDGJXFI3KAYFMLFUNHUG7VU5PZZMBY5NBN3M4YQHQ====
      k3s.io/node-env: '{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/28f7e87eba734b7f7731dc900e2c84e0e98ce869f3dcf57f65dc7bbb80e12e56","K3S_INTERNAL_CERTS_EXPIRATION_DAYS":"730","K3S_UPGRADE":"false"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    finalizers:
    - wrangler.cattle.io/node
    - wrangler.cattle.io/managed-etcd-controller
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: k3s
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: vm-165.tests.lab.net
      kubernetes.io/os: linux
      node-role.kubernetes.io/control-plane: "true"
      node-role.kubernetes.io/etcd: "true"
      node-role.kubernetes.io/master: "true"
      node.kubernetes.io/instance-type: k3s
    name: vm-165.tests.lab.net
    resourceVersion: "10767"
    uid: 4f4463ab-3e4e-44dc-a684-7af8d16920f7
  spec:
    podCIDR: 10.42.0.0/24
    podCIDRs:
    - 10.42.0.0/24
    providerID: k3s://vm-165.tests.lab.net
  status:
    addresses:
    - address: 10.54.65.165
      type: InternalIP
    - address: 10.54.65.165
      type: ExternalIP
    - address: vm-165.tests.lab.net
      type: Hostname
    allocatable:
      cpu: "16"
      ephemeral-storage: "202891775022"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 32649356Ki
      pods: "110"
    capacity:
      cpu: "16"
      ephemeral-storage: 208564736Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 32649356Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2024-01-23T14:54:11Z"
      lastTransitionTime: "2024-01-23T13:52:55Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-01-23T14:54:11Z"
      lastTransitionTime: "2024-01-23T13:52:55Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-01-23T14:54:11Z"
      lastTransitionTime: "2024-01-23T13:52:55Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-01-23T14:54:11Z"
      lastTransitionTime: "2024-01-23T13:52:55Z"
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    - lastHeartbeatTime: "2024-01-23T13:53:10Z"
      lastTransitionTime: "2024-01-23T13:53:10Z"
      message: Node is a voting member of the etcd cluster
      reason: MemberNotLearner
      status: "True"
      type: EtcdIsVoter
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - docker.io/rancher/klipper-helm:v0.8.2-build20230815
      sizeBytes: 256386482
    - names:
      - docker.io/rancher/mirrored-library-traefik:2.10.5
      sizeBytes: 152810877
    - names:
      - docker.io/rancher/mirrored-metrics-server:v0.6.3
      sizeBytes: 70293467
    - names:
      - docker.io/rancher/mirrored-coredns-coredns:1.10.1
      sizeBytes: 53618774
    - names:
      - docker.io/rancher/local-path-provisioner:v0.0.24
      sizeBytes: 40448904
    - names:
      - docker.io/rancher/klipper-lb:v0.4.4
      sizeBytes: 12479235
    - names:
      - docker.io/rancher/mirrored-library-busybox:1.36.1
      sizeBytes: 4494167
    - names:
      - docker.io/rancher/mirrored-pause:3.6
      sizeBytes: 685866
    nodeInfo:
      architecture: amd64
      bootID: 9b5cc646-1840-4e2b-a1b2-966a2cf6ee15
      containerRuntimeVersion: containerd://1.7.11-k3s2
      kernelVersion: 4.18.0-477.10.1.el8_8.x86_64
      kubeProxyVersion: v1.28.5+k3s1
      kubeletVersion: v1.28.5+k3s1
      machineID: dbe31cb559074f648b289837ab12412f
      operatingSystem: linux
      osImage: Red Hat Enterprise Linux 8.8 (Ootpa)
      systemUUID: dbe31cb5-5907-4f64-8b28-9837ab12412f
kind: List
metadata:
  resourceVersion: ""

K3s journald logs:

-- Logs begin at Mon 2024-01-22 12:57:23 EET, end at Tue 2024-01-23 15:09:10 EET. --
Jan 23 15:00:18 vm-165 systemd[1]: Starting Lightweight Kubernetes...
Jan 23 15:00:18 vm-165 sh[699197]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Jan 23 15:00:18 vm-165 sh[699198]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Jan 23 15:00:18 vm-165 k3s[699204]: time="2024-01-23T15:00:18+02:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Jan 23 15:00:18 vm-165 k3s[699204]: time="2024-01-23T15:00:18+02:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/28f7e87eba734b7f7731dc900e2c84e0e98ce869f3dcf57f65dc7bbb80e12e56"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Starting k3s v1.28.5+k3s1 (5b2d1271)"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Managed etcd cluster initializing"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="generated self-signed CA certificate CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22.01689069 +0000 UTC notAfter=2034-01-20 13:00:22.01689069 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=system:k3s-supervisor,O=system:masters signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="generated self-signed CA certificate CN=k3s-server-ca@1706014822: notBefore=2024-01-23 13:00:22.032144773 +0000 UTC notAfter=2034-01-20 13:00:22.032144773 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="generated self-signed CA certificate CN=k3s-request-header-ca@1706014822: notBefore=2024-01-23 13:00:22.034593054 +0000 UTC notAfter=2034-01-20 13:00:22.034593054 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="generated self-signed CA certificate CN=etcd-server-ca@1706014822: notBefore=2024-01-23 13:00:22.03658884 +0000 UTC notAfter=2034-01-20 13:00:22.03658884 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="generated self-signed CA certificate CN=etcd-peer-ca@1706014822: notBefore=2024-01-23 13:00:22.038514923 +0000 UTC notAfter=2034-01-20 13:00:22.038514923 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Starting etcd for new cluster"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg=start
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="schedule, now=2024-01-23T15:00:22+02:00, entry=1, next=2024-01-24T00:00:00+02:00"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:22 +0000 UTC"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.54.65.165:10.54.65.165 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-vm-165.tests.lab.net:vm-165.tests.lab.net listener.cattle.io/fingerprint:SHA1=AEC8AF7D9D5044693ACDE9C5FD93E706E731CB87]"
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.357699+0200","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://10.54.65.165:2380","https://127.0.0.1:2380"]}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.35829+0200","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.359646+0200","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://10.54.65.165:2379","https://127.0.0.1:2379"]}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.360089+0200","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.20.12","go-os":"linux","go-arch":"amd64","max-cpu-set":16,"max-cpu-available":16,"member-initialized":false,"name":"vm-165.tests.lab.net-41f54d96","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://10.54.65.165:2380"],"listen-peer-urls":["https://10.54.65.165:2380","https://127.0.0.1:2380"],"advertise-client-urls":["https://10.54.65.165:2379"],"listen-client-urls":["https://10.54.65.165:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"vm-165.tests.lab.net-41f54d96=https://10.54.65.165:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.366472+0200","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"5.685971ms"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.375332+0200","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"36642c749447ca1f","cluster-id":"c8d379020fe42403"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.375541+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f switched to configuration voters=()"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.375604+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f became follower at term 0"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.375673+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 36642c749447ca1f [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.375695+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f became follower at term 1"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.375836+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f switched to configuration voters=(3919306454934735391)"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"warn","ts":"2024-01-23T15:00:22.38401+0200","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.387933+0200","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.391921+0200","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.396513+0200","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"36642c749447ca1f","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.397083+0200","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"36642c749447ca1f","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.397451+0200","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/rancher/k3s/server/db/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.397721+0200","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/rancher/k3s/server/db/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.397757+0200","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/rancher/k3s/server/db/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.400035+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f switched to configuration voters=(3919306454934735391)"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.400321+0200","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c8d379020fe42403","local-member-id":"36642c749447ca1f","added-peer-id":"36642c749447ca1f","added-peer-peer-urls":["https://10.54.65.165:2380"]}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.402688+0200","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.403099+0200","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"36642c749447ca1f","initial-advertise-peer-urls":["https://10.54.65.165:2380"],"listen-peer-urls":["https://10.54.65.165:2380","https://127.0.0.1:2380"],"advertise-client-urls":["https://10.54.65.165:2379"],"listen-client-urls":["https://10.54.65.165:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.403097+0200","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"10.54.65.165:2380"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.403303+0200","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"10.54.65.165:2380"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.403421+0200","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.403647+0200","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"127.0.0.1:2380"}
Jan 23 15:00:22 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:22.407949+0200","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"127.0.0.1:2380"}
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Running kube-apiserver --advertise-address=10.54.65.165 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=12500 --use-service-account-credentials=true"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --feature-gates=CloudDualStackNodeIPs=true --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false --webhook-secure-port=0"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="To join server node to cluster: k3s server -s https://10.54.65.165:6443 -t ${SERVER_NODE_TOKEN}"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="To join agent node to cluster: k3s agent -s https://10.54.65.165:6443 -t ${AGENT_NODE_TOKEN}"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Jan 23 15:00:22 vm-165 k3s[699204]: time="2024-01-23T15:00:22+02:00" level=info msg="Run: k3s kubectl"
Jan 23 15:00:23 vm-165 k3s[699204]: time="2024-01-23T15:00:23+02:00" level=info msg="Password verified locally for node vm-165.tests.lab.net"
Jan 23 15:00:23 vm-165 k3s[699204]: time="2024-01-23T15:00:23+02:00" level=info msg="certificate CN=vm-165.tests.lab.net signed by CN=k3s-server-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:23 +0000 UTC"
Jan 23 15:00:24 vm-165 k3s[699204]: time="2024-01-23T15:00:24+02:00" level=info msg="certificate CN=system:node:vm-165.tests.lab.net,O=system:nodes signed by CN=k3s-client-ca@1706014822: notBefore=2024-01-23 13:00:22 +0000 UTC notAfter=2026-01-22 13:00:24 +0000 UTC"
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.376892+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f is starting a new election at term 1"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.377026+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f became pre-candidate at term 1"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.377105+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f received MsgPreVoteResp from 36642c749447ca1f at term 1"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.377146+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f became candidate at term 2"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.377162+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f received MsgVoteResp from 36642c749447ca1f at term 2"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.377184+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36642c749447ca1f became leader at term 2"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.377203+0200","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 36642c749447ca1f elected leader 36642c749447ca1f at term 2"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.379852+0200","caller":"etcdserver/server.go:2064","msg":"published local member to cluster through raft","local-member-id":"36642c749447ca1f","local-member-attributes":"{Name:vm-165.tests.lab.net-41f54d96 ClientURLs:[https://10.54.65.165:2379]}","request-path":"/0/members/36642c749447ca1f/attributes","cluster-id":"c8d379020fe42403","publish-timeout":"15s"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.37994+0200","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.380222+0200","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.380815+0200","caller":"etcdserver/server.go:2573","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.381211+0200","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"http","address":"127.0.0.1:2382"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.381203+0200","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.382855+0200","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c8d379020fe42403","local-member-id":"36642c749447ca1f","cluster-version":"3.5"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.38309+0200","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.383178+0200","caller":"etcdserver/server.go:2597","msg":"cluster version is updated","cluster-version":"3.5"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.383938+0200","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc","address":"127.0.0.1:2379"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.383965+0200","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc","address":"10.54.65.165:2379"}
Jan 23 15:00:24 vm-165 k3s[699204]: time="2024-01-23T15:00:24+02:00" level=info msg="Defragmenting etcd database"
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.404362+0200","caller":"v3rpc/maintenance.go:90","msg":"starting defragment"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.411955+0200","caller":"backend/backend.go:497","msg":"defragmenting","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","current-db-size-bytes":20480,"current-db-size":"20 kB","current-db-size-in-use-bytes":16384,"current-db-size-in-use":"16 kB"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.420171+0200","caller":"backend/backend.go:549","msg":"finished defragmenting directory","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","current-db-size-bytes-diff":0,"current-db-size-bytes":20480,"current-db-size":"20 kB","current-db-size-in-use-bytes-diff":-4096,"current-db-size-in-use-bytes":12288,"current-db-size-in-use":"12 kB","took":"15.662725ms"}
Jan 23 15:00:24 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:24.420311+0200","caller":"v3rpc/maintenance.go:96","msg":"finished defragment"}
Jan 23 15:00:24 vm-165 k3s[699204]: time="2024-01-23T15:00:24+02:00" level=info msg="etcd data store connection OK"
Jan 23 15:00:24 vm-165 k3s[699204]: time="2024-01-23T15:00:24+02:00" level=info msg="Saving cluster bootstrap data to datastore"
Jan 23 15:00:24 vm-165 k3s[699204]: W0123 15:00:24.437145  699204 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
Jan 23 15:00:24 vm-165 k3s[699204]: time="2024-01-23T15:00:24+02:00" level=info msg="Waiting for API server to become available"
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.437406  699204 options.go:220] external host was not specified, using 10.54.65.165
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.438360  699204 server.go:156] Version: v1.28.5+k3s1
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.438392  699204 server.go:158] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.716355  699204 shared_informer.go:311] Waiting for caches to sync for node_authorizer
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.732948  699204 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.733969  699204 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.735470  699204 instance.go:298] Using reconciler: lease
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.766488  699204 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
Jan 23 15:00:24 vm-165 k3s[699204]: W0123 15:00:24.766540  699204 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.972042  699204 handler.go:275] Adding GroupVersion  v1 to ResourceManager
Jan 23 15:00:24 vm-165 k3s[699204]: I0123 15:00:24.972517  699204 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
Jan 23 15:00:25 vm-165 k3s[699204]: time="2024-01-23T15:00:25+02:00" level=info msg="Module overlay was already loaded"
Jan 23 15:00:25 vm-165 k3s[699204]: time="2024-01-23T15:00:25+02:00" level=info msg="Module nf_conntrack was already loaded"
Jan 23 15:00:25 vm-165 k3s[699204]: time="2024-01-23T15:00:25+02:00" level=info msg="Module br_netfilter was already loaded"
Jan 23 15:00:25 vm-165 k3s[699204]: time="2024-01-23T15:00:25+02:00" level=info msg="Module iptable_nat was already loaded"
Jan 23 15:00:25 vm-165 k3s[699204]: time="2024-01-23T15:00:25+02:00" level=info msg="Module iptable_filter was already loaded"
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.280770  699204 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.300140  699204 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.300288  699204 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.300322  699204 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: time="2024-01-23T15:00:25+02:00" level=warning msg="SELinux is enabled on this host, but k3s has not been started with --selinux - containerd SELinux support is disabled"
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.303306  699204 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.303352  699204 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.309021  699204 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: time="2024-01-23T15:00:25+02:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Jan 23 15:00:25 vm-165 k3s[699204]: time="2024-01-23T15:00:25+02:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.310934  699204 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.310966  699204 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.310983  699204 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.316589  699204 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.316662  699204 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.318713  699204 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.318758  699204 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.318769  699204 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.319771  699204 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.319798  699204 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.319873  699204 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.320846  699204 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.323477  699204 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.323518  699204 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.323529  699204 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.324662  699204 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.324703  699204 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.324715  699204 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.326044  699204 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.326074  699204 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.329333  699204 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.329369  699204 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.329379  699204 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.330431  699204 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.330500  699204 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.330518  699204 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.335454  699204 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.335510  699204 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.335524  699204 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.338135  699204 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.340570  699204 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.340617  699204 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.340629  699204 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.347616  699204 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.347666  699204 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.347678  699204 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.349313  699204 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.349346  699204 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.349357  699204 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.350190  699204 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.350219  699204 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:25 vm-165 k3s[699204]: I0123 15:00:25.361915  699204 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
Jan 23 15:00:25 vm-165 k3s[699204]: W0123 15:00:25.361974  699204 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.055032  699204 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.055057  699204 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.055281  699204 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056148  699204 secure_serving.go:213] Serving securely on 127.0.0.1:6444
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056276  699204 apiservice_controller.go:97] Starting APIServiceRegistrationController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056304  699204 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056319  699204 controller.go:78] Starting OpenAPI AggregationController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056362  699204 apf_controller.go:368] Starting API Priority and Fairness config controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056446  699204 customresource_discovery_controller.go:289] Starting DiscoveryController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056524  699204 controller.go:134] Starting OpenAPI controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056541  699204 handler_discovery.go:412] Starting ResourceDiscoveryManager
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056569  699204 controller.go:85] Starting OpenAPI V3 controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056603  699204 naming_controller.go:291] Starting NamingConditionController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056627  699204 establishing_controller.go:76] Starting EstablishingController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056651  699204 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056673  699204 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056707  699204 crd_finalizer.go:266] Starting CRDFinalizer
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056773  699204 controller.go:80] Starting OpenAPI V3 AggregationController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.057018  699204 gc_controller.go:78] Starting apiserver lease garbage collector
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.057089  699204 controller.go:78] "Starting controller" name="encryption-provider-config-automatic-reload-controller"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.056361  699204 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.057471  699204 available_controller.go:423] Starting AvailableConditionController
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.057503  699204 system_namespaces_controller.go:67] Starting system namespaces controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.057515  699204 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.058807  699204 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.058856  699204 crdregistration_controller.go:111] Starting crd-autoregister controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.058936  699204 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.059049  699204 aggregator.go:164] waiting for initial CRD sync...
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.059092  699204 controller.go:116] Starting legacy_token_tracking_controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.059127  699204 shared_informer.go:311] Waiting for caches to sync for configmaps
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.060458  699204 gc_controller.go:78] Starting apiserver lease garbage collector
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.061269  699204 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.061307  699204 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.062819  699204 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.063335  699204 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.116589  699204 shared_informer.go:318] Caches are synced for node_authorizer
Jan 23 15:00:26 vm-165 k3s[699204]: E0123 15:00:26.144271  699204 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.156454  699204 apf_controller.go:373] Running API Priority and Fairness config worker
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.156553  699204 apf_controller.go:376] Running API Priority and Fairness periodic rebalancing process
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.156869  699204 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.158574  699204 cache.go:39] Caches are synced for AvailableConditionController controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.159045  699204 shared_informer.go:318] Caches are synced for crd-autoregister
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.159124  699204 aggregator.go:166] initial CRD sync complete...
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.159137  699204 autoregister_controller.go:141] Starting autoregister controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.159146  699204 cache.go:32] Waiting for caches to sync for autoregister controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.159158  699204 cache.go:39] Caches are synced for autoregister controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.159198  699204 shared_informer.go:318] Caches are synced for configmaps
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.160586  699204 controller.go:624] quota admission added evaluator for: namespaces
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.161646  699204 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.227390  699204 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.43.0.1"}
Jan 23 15:00:26 vm-165 k3s[699204]: W0123 15:00:26.244573  699204 lease.go:263] Resetting endpoints for master service "kubernetes" to [10.54.65.165]
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.247031  699204 controller.go:624] quota admission added evaluator for: endpoints
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.256682  699204 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
Jan 23 15:00:26 vm-165 k3s[699204]: E0123 15:00:26.271473  699204 controller.go:102] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
Jan 23 15:00:26 vm-165 k3s[699204]: time="2024-01-23T15:00:26+02:00" level=info msg="containerd is now running"
Jan 23 15:00:26 vm-165 k3s[699204]: time="2024-01-23T15:00:26+02:00" level=info msg="Importing images from /var/lib/rancher/k3s/agent/images/k3s-airgap-images-amd64.tar.gz"
Jan 23 15:00:26 vm-165 k3s[699204]: I0123 15:00:26.350998  699204 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
Jan 23 15:00:27 vm-165 k3s[699204]: I0123 15:00:27.070287  699204 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
Jan 23 15:00:27 vm-165 k3s[699204]: I0123 15:00:27.080915  699204 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
Jan 23 15:00:27 vm-165 k3s[699204]: I0123 15:00:27.080971  699204 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
Jan 23 15:00:28 vm-165 k3s[699204]: I0123 15:00:28.182021  699204 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
Jan 23 15:00:28 vm-165 k3s[699204]: I0123 15:00:28.278860  699204 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Kube API server is now running"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="ETCD server is now running"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="k3s is up and running"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Waiting for cloud-controller-manager privileges to become available"
Jan 23 15:00:29 vm-165 k3s[699204]: W0123 15:00:29.092783  699204 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
Jan 23 15:00:29 vm-165 systemd[1]: Started Lightweight Kubernetes.
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Creating k3s-supervisor event broadcaster"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Applying CRD addons.k3s.cattle.io"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Applying CRD etcdsnapshotfiles.k3s.cattle.io"
Jan 23 15:00:29 vm-165 k3s[699204]: I0123 15:00:29.157926  699204 handler.go:275] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Waiting for CRD etcdsnapshotfiles.k3s.cattle.io to become available"
Jan 23 15:00:29 vm-165 k3s[699204]: I0123 15:00:29.185925  699204 handler.go:275] Adding GroupVersion k3s.cattle.io v1 to ResourceManager
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Done waiting for CRD etcdsnapshotfiles.k3s.cattle.io to become available"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-25.0.2+up25.0.0.tgz"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-25.0.2+up25.0.0.tgz"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/runtimes.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Reconciling ETCDSnapshotFile resources"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Starting managed etcd node metadata controller"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Tunnel server egress proxy mode: agent"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Starting dynamiclistener CN filter node controller"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Reconciliation of ETCDSnapshotFile resources complete"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=error msg="Failed to record snapshots for cluster: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Creating deploy event broadcaster"
Jan 23 15:00:29 vm-165 k3s[699204]: I0123 15:00:29.917537  699204 serving.go:355] Generated self-signed cert in-memory
Jan 23 15:00:29 vm-165 k3s[699204]: I0123 15:00:29.918368  699204 leaderelection.go:250] attempting to acquire leader lease kube-system/k3s...
Jan 23 15:00:29 vm-165 k3s[699204]: I0123 15:00:29.918648  699204 leaderelection.go:250] attempting to acquire leader lease kube-system/k3s-etcd...
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Starting /v1, Kind=Node controller"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:29 vm-165 k3s[699204]: I0123 15:00:29.931636  699204 leaderelection.go:260] successfully acquired lease kube-system/k3s-etcd
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Starting managed etcd apiserver addresses controller"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Starting managed etcd member removal controller"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Starting managed etcd snapshot ConfigMap controller"
Jan 23 15:00:29 vm-165 k3s[699204]: I0123 15:00:29.934032  699204 leaderelection.go:260] successfully acquired lease kube-system/k3s
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Cluster dns configmap has been set successfully"
Jan 23 15:00:29 vm-165 k3s[699204]: time="2024-01-23T15:00:29+02:00" level=info msg="Creating reencrypt-controller event broadcaster"
Jan 23 15:00:30 vm-165 k3s[699204]: time="2024-01-23T15:00:30+02:00" level=info msg="Starting k3s.cattle.io/v1, Kind=ETCDSnapshotFile controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.486529  699204 controllermanager.go:189] "Starting" version="v1.28.5+k3s1"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.486590  699204 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.493716  699204 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.493791  699204 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.493723  699204 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.493851  699204 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.493745  699204 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.493976  699204 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.495872  699204 secure_serving.go:213] Serving securely on 127.0.0.1:10257
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.495924  699204 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.496546  699204 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.516335  699204 leaderelection.go:260] successfully acquired lease kube-system/kube-controller-manager
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.516531  699204 event.go:307] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="vm-165_44c35c80-259f-4e79-84bf-43f33c2ea13e became leader"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.536924  699204 controllermanager.go:607] "Warning: controller is disabled" controller="bootstrap-signer-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.537015  699204 shared_informer.go:311] Waiting for caches to sync for tokens
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.550772  699204 controller.go:624] quota admission added evaluator for: serviceaccounts
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.556165  699204 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.556556  699204 tokencleaner.go:112] "Starting token cleaner controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.556627  699204 shared_informer.go:311] Waiting for caches to sync for token_cleaner
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.556661  699204 shared_informer.go:318] Caches are synced for token_cleaner
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.580740  699204 node_lifecycle_controller.go:431] "Controller will reconcile labels"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.580859  699204 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.580886  699204 controllermanager.go:607] "Warning: controller is disabled" controller="service-lb-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.581045  699204 node_lifecycle_controller.go:465] "Sending events to api server"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.581126  699204 node_lifecycle_controller.go:476] "Starting node controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.581144  699204 shared_informer.go:311] Waiting for caches to sync for taint
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.593953  699204 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.594016  699204 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.594089  699204 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.609841  699204 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.609889  699204 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.609929  699204 graph_builder.go:294] "Running" component="GraphBuilder"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.609909  699204 shared_informer.go:311] Waiting for caches to sync for garbage collector
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.637209  699204 shared_informer.go:318] Caches are synced for tokens
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.651508  699204 controllermanager.go:642] "Started controller" controller="namespace-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.651704  699204 namespace_controller.go:197] "Starting namespace controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.651775  699204 shared_informer.go:311] Waiting for caches to sync for namespace
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.670725  699204 controllermanager.go:642] "Started controller" controller="replicaset-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.670971  699204 replica_set.go:214] "Starting controller" name="replicaset"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.671018  699204 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.690269  699204 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.690344  699204 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.690383  699204 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.690848  699204 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.690909  699204 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.690967  699204 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.691651  699204 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.691689  699204 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.691770  699204 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.692154  699204 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.692266  699204 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.692315  699204 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.692328  699204 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.710725  699204 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.710976  699204 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.711029  699204 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.727630  699204 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.727810  699204 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.727851  699204 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
Jan 23 15:00:30 vm-165 k3s[699204]: time="2024-01-23T15:00:30+02:00" level=info msg="Starting /v1, Kind=Secret controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.773500  699204 controllermanager.go:642] "Started controller" controller="disruption-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.773655  699204 disruption.go:433] "Sending events to api server."
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.773770  699204 disruption.go:444] "Starting disruption controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.773791  699204 shared_informer.go:311] Waiting for caches to sync for disruption
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.924173  699204 controllermanager.go:642] "Started controller" controller="cronjob-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.924434  699204 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.924494  699204 shared_informer.go:311] Waiting for caches to sync for cronjob
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.972496  699204 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
Jan 23 15:00:30 vm-165 k3s[699204]: I0123 15:00:30.972620  699204 cleaner.go:83] "Starting CSR cleaner controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.126932  699204 controllermanager.go:642] "Started controller" controller="ttl-controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.127023  699204 ttl_controller.go:124] "Starting TTL controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.127095  699204 shared_informer.go:311] Waiting for caches to sync for TTL
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.275779  699204 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.276047  699204 pv_controller_base.go:319] "Starting persistent volume controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.276112  699204 shared_informer.go:311] Waiting for caches to sync for persistent volume
Jan 23 15:00:31 vm-165 k3s[699204]: time="2024-01-23T15:00:31+02:00" level=info msg="Updating TLS secret for kube-system/k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.54.65.165:10.54.65.165 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-vm-165.tests.lab.net:vm-165.tests.lab.net listener.cattle.io/fingerprint:SHA1=AEC8AF7D9D5044693ACDE9C5FD93E706E731CB87]"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.426557  699204 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.426771  699204 replica_set.go:214] "Starting controller" name="replicationcontroller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.426827  699204 shared_informer.go:311] Waiting for caches to sync for ReplicationController
Jan 23 15:00:31 vm-165 k3s[699204]: time="2024-01-23T15:00:31+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.576701  699204 controllermanager.go:642] "Started controller" controller="deployment-controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.576834  699204 deployment_controller.go:168] "Starting controller" controller="deployment"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.576846  699204 shared_informer.go:311] Waiting for caches to sync for deployment
Jan 23 15:00:31 vm-165 k3s[699204]: time="2024-01-23T15:00:31+02:00" level=info msg="Updating TLS secret for kube-system/k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.54.65.165:10.54.65.165 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-vm-165.tests.lab.net:vm-165.tests.lab.net listener.cattle.io/fingerprint:SHA1=AEC8AF7D9D5044693ACDE9C5FD93E706E731CB87]"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.730070  699204 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.730328  699204 attach_detach_controller.go:337] "Starting attach detach controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.730394  699204 shared_informer.go:311] Waiting for caches to sync for attach detach
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.810931  699204 controller.go:624] quota admission added evaluator for: addons.k3s.cattle.io
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.823520  699204 event.go:307] "Event occurred" object="kube-system/ccm" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.875434  699204 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.875569  699204 pv_protection_controller.go:78] "Starting PV protection controller"
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.875593  699204 shared_informer.go:311] Waiting for caches to sync for PV protection
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.900614  699204 event.go:307] "Event occurred" object="kube-system/ccm" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.938011  699204 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.024981  699204 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.025069  699204 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.025135  699204 shared_informer.go:311] Waiting for caches to sync for TTL after finished
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.045717  699204 controller.go:624] quota admission added evaluator for: deployments.apps
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.080987  699204 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.43.0.10"}
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.083009  699204 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: time="2024-01-23T15:00:32+02:00" level=info msg="Active TLS secret kube-system/k3s-serving (ver=221) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.54.65.165:10.54.65.165 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-vm-165.tests.lab.net:vm-165.tests.lab.net listener.cattle.io/fingerprint:SHA1=AEC8AF7D9D5044693ACDE9C5FD93E706E731CB87]"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.113956  699204 event.go:307] "Event occurred" object="kube-system/local-storage" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.175810  699204 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.175880  699204 controllermanager.go:607] "Warning: controller is disabled" controller="cloud-node-lifecycle-controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.175955  699204 serviceaccounts_controller.go:111] "Starting service account controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.175973  699204 shared_informer.go:311] Waiting for caches to sync for service account
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.234966  699204 event.go:307] "Event occurred" object="kube-system/local-storage" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.255932  699204 event.go:307] "Event occurred" object="kube-system/aggregated-metrics-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.275220  699204 event.go:307] "Event occurred" object="kube-system/aggregated-metrics-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.295192  699204 event.go:307] "Event occurred" object="kube-system/auth-delegator" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.311181  699204 event.go:307] "Event occurred" object="kube-system/auth-delegator" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.329421  699204 controllermanager.go:642] "Started controller" controller="daemonset-controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.329492  699204 controllermanager.go:607] "Warning: controller is disabled" controller="node-route-controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.329636  699204 daemon_controller.go:291] "Starting daemon sets controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.329658  699204 shared_informer.go:311] Waiting for caches to sync for daemon sets
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.339337  699204 event.go:307] "Event occurred" object="kube-system/auth-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.356467  699204 event.go:307] "Event occurred" object="kube-system/auth-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.375711  699204 event.go:307] "Event occurred" object="kube-system/metrics-apiservice" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.395665  699204 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.396113  699204 event.go:307] "Event occurred" object="kube-system/metrics-apiservice" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: W0123 15:00:32.400855  699204 handler_proxy.go:93] no RequestInfo found in the context
Jan 23 15:00:32 vm-165 k3s[699204]: E0123 15:00:32.400958  699204 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan 23 15:00:32 vm-165 k3s[699204]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan 23 15:00:32 vm-165 k3s[699204]: E0123 15:00:32.401042  699204 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.476744  699204 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.476832  699204 controller.go:169] "Starting ephemeral volume controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.476900  699204 shared_informer.go:311] Waiting for caches to sync for ephemeral
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.592729  699204 serving.go:355] Generated self-signed cert in-memory
Jan 23 15:00:32 vm-165 k3s[699204]: time="2024-01-23T15:00:32+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.710432  699204 event.go:307] "Event occurred" object="kube-system/metrics-server-deployment" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.738557  699204 event.go:307] "Event occurred" object="kube-system/metrics-server-deployment" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.773408  699204 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.773568  699204 horizontal.go:200] "Starting HPA controller"
Jan 23 15:00:32 vm-165 k3s[699204]: I0123 15:00:32.773595  699204 shared_informer.go:311] Waiting for caches to sync for HPA
Jan 23 15:00:33 vm-165 k3s[699204]: {"level":"warn","ts":"2024-01-23T15:00:33.192053+0200","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.423744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" ","response":"range_response_count:1 size:214"}
Jan 23 15:00:33 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:33.192613+0200","caller":"traceutil/trace.go:171","msg":"trace[659020154] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:292; }","duration":"170.081129ms","start":"2024-01-23T15:00:33.022487+0200","end":"2024-01-23T15:00:33.192568+0200","steps":["trace[659020154] 'range keys from in-memory index tree'  (duration: 169.155881ms)"],"step_count":1}
Jan 23 15:00:33 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:33.192621+0200","caller":"traceutil/trace.go:171","msg":"trace[432240650] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"162.96081ms","start":"2024-01-23T15:00:33.0296+0200","end":"2024-01-23T15:00:33.19256+0200","steps":["trace[432240650] 'process raft request'  (duration: 157.484165ms)"],"step_count":1}
Jan 23 15:00:33 vm-165 k3s[699204]: {"level":"warn","ts":"2024-01-23T15:00:33.193037+0200","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.220606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:0 size:5"}
Jan 23 15:00:33 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:00:33.193167+0200","caller":"traceutil/trace.go:171","msg":"trace[446311146] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:294; }","duration":"100.384084ms","start":"2024-01-23T15:00:33.092754+0200","end":"2024-01-23T15:00:33.193138+0200","steps":["trace[446311146] 'agreement among raft nodes before linearized reading'  (duration: 100.087363ms)"],"step_count":1}
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.197411  699204 event.go:307] "Event occurred" object="kube-system/metrics-server-service" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
Jan 23 15:00:33 vm-165 k3s[699204]: E0123 15:00:33.219340  699204 resource_quota_controller.go:169] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219471  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219528  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219597  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219723  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219763  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219824  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219870  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219905  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.219946  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220028  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220091  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220182  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220225  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220297  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220337  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220380  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220480  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220535  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="addons.k3s.cattle.io"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220591  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220640  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220690  699204 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220776  699204 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220828  699204 resource_quota_controller.go:294] "Starting resource quota controller"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220880  699204 shared_informer.go:311] Waiting for caches to sync for resource quota
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.220920  699204 resource_quota_monitor.go:305] "QuotaMonitor running"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.224202  699204 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.43.96.143"}
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.225611  699204 event.go:307] "Event occurred" object="kube-system/metrics-server-service" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
Jan 23 15:00:33 vm-165 k3s[699204]: E0123 15:00:33.228129  699204 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Jan 23 15:00:33 vm-165 k3s[699204]: W0123 15:00:33.236435  699204 handler_proxy.go:93] no RequestInfo found in the context
Jan 23 15:00:33 vm-165 k3s[699204]: E0123 15:00:33.236519  699204 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan 23 15:00:33 vm-165 k3s[699204]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan 23 15:00:33 vm-165 k3s[699204]: E0123 15:00:33.236607  699204 handler_proxy.go:137] error resolving kube-system/metrics-server: endpoints "metrics-server" not found
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.241579  699204 controllermanager.go:642] "Started controller" controller="job-controller"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.241889  699204 job_controller.go:226] "Starting job controller"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.241967  699204 shared_informer.go:311] Waiting for caches to sync for job
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.271397  699204 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.271526  699204 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.271552  699204 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.277734  699204 controllermanager.go:168] Version: v1.28.5+k3s1
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.285365  699204 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.285421  699204 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.285482  699204 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.285501  699204 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.285582  699204 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.285660  699204 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.287489  699204 secure_serving.go:213] Serving securely on 127.0.0.1:10258
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.287665  699204 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.288060  699204 leaderelection.go:250] attempting to acquire leader lease kube-system/k3s-cloud-controller-manager...
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.305846  699204 leaderelection.go:260] successfully acquired lease kube-system/k3s-cloud-controller-manager
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.306286  699204 event.go:307] "Event occurred" object="kube-system/k3s-cloud-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="vm-165_a5002b72-c6f4-4e07-86c7-6b671cc3d99f became leader"
Jan 23 15:00:33 vm-165 k3s[699204]: E0123 15:00:33.329116  699204 controllermanager.go:524] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Jan 23 15:00:33 vm-165 k3s[699204]: time="2024-01-23T15:00:33+02:00" level=info msg="Creating  event broadcaster"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.386641  699204 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.386758  699204 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.386968  699204 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 23 15:00:33 vm-165 k3s[699204]: W0123 15:00:33.396548  699204 handler_proxy.go:93] no RequestInfo found in the context
Jan 23 15:00:33 vm-165 k3s[699204]: E0123 15:00:33.396651  699204 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.396675  699204 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan 23 15:00:33 vm-165 k3s[699204]: W0123 15:00:33.396807  699204 handler_proxy.go:93] no RequestInfo found in the context
Jan 23 15:00:33 vm-165 k3s[699204]: E0123 15:00:33.397093  699204 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan 23 15:00:33 vm-165 k3s[699204]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.397793  699204 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan 23 15:00:33 vm-165 k3s[699204]: time="2024-01-23T15:00:33+02:00" level=info msg="Starting /v1, Kind=Node controller"
Jan 23 15:00:33 vm-165 k3s[699204]: time="2024-01-23T15:00:33+02:00" level=info msg="Starting /v1, Kind=Pod controller"
Jan 23 15:00:33 vm-165 k3s[699204]: time="2024-01-23T15:00:33+02:00" level=info msg="Starting apps/v1, Kind=DaemonSet controller"
Jan 23 15:00:33 vm-165 k3s[699204]: time="2024-01-23T15:00:33+02:00" level=info msg="Starting discovery.k8s.io/v1, Kind=EndpointSlice controller"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.508852  699204 controllermanager.go:337] Started "cloud-node-controller"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.508980  699204 node_controller.go:165] Sending events to api server.
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.509135  699204 node_controller.go:174] Waiting for informer caches to sync
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.509269  699204 controllermanager.go:337] Started "cloud-node-lifecycle-controller"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.509467  699204 node_lifecycle_controller.go:113] Sending events to api server
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.509825  699204 controllermanager.go:337] Started "service-lb-controller"
Jan 23 15:00:33 vm-165 k3s[699204]: W0123 15:00:33.509850  699204 controllermanager.go:314] "node-route-controller" is disabled
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.510096  699204 controller.go:231] Starting service controller
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.510180  699204 shared_informer.go:311] Waiting for caches to sync for service
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.510365  699204 event.go:307] "Event occurred" object="kube-system/resource-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.543120  699204 event.go:307] "Event occurred" object="kube-system/resource-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.610570  699204 shared_informer.go:318] Caches are synced for service
Jan 23 15:00:33 vm-165 k3s[699204]: time="2024-01-23T15:00:33+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.913199  699204 event.go:307] "Event occurred" object="kube-system/rolebindings" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
Jan 23 15:00:33 vm-165 k3s[699204]: I0123 15:00:33.995017  699204 event.go:307] "Event occurred" object="kube-system/rolebindings" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
Jan 23 15:00:34 vm-165 k3s[699204]: I0123 15:00:34.297615  699204 event.go:307] "Event occurred" object="vm-165.tests.lab.net" fieldPath="" kind="Node" apiVersion="" type="Normal" reason="NodePasswordValidationComplete" message="Deferred node password secret validation complete"
Jan 23 15:00:34 vm-165 k3s[699204]: I0123 15:00:34.317001  699204 event.go:307] "Event occurred" object="kube-system/runtimes" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/runtimes.yaml\""
Jan 23 15:00:34 vm-165 k3s[699204]: I0123 15:00:34.530409  699204 event.go:307] "Event occurred" object="kube-system/runtimes" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/runtimes.yaml\""
Jan 23 15:00:35 vm-165 k3s[699204]: time="2024-01-23T15:00:35+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:36 vm-165 k3s[699204]: time="2024-01-23T15:00:36+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:37 vm-165 k3s[699204]: time="2024-01-23T15:00:37+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:38 vm-165 k3s[699204]: time="2024-01-23T15:00:38+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:40 vm-165 k3s[699204]: time="2024-01-23T15:00:40+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:41 vm-165 k3s[699204]: time="2024-01-23T15:00:41+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:42 vm-165 k3s[699204]: time="2024-01-23T15:00:42+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.344727  699204 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.344809  699204 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.345057  699204 node_ipam_controller.go:162] "Starting ipam controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.345103  699204 shared_informer.go:311] Waiting for caches to sync for node
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.363440  699204 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.363525  699204 expand_controller.go:328] "Starting expand controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.363598  699204 shared_informer.go:311] Waiting for caches to sync for expand
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.385741  699204 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.385926  699204 pvc_protection_controller.go:102] "Starting PVC protection controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.385965  699204 shared_informer.go:311] Waiting for caches to sync for PVC protection
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.402761  699204 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.403001  699204 gc_controller.go:101] "Starting GC controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.403057  699204 shared_informer.go:311] Waiting for caches to sync for GC
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.421106  699204 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.421425  699204 endpointslice_controller.go:264] "Starting endpoint slice controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.421482  699204 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.440585  699204 controllermanager.go:642] "Started controller" controller="statefulset-controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.440942  699204 stateful_set.go:161] "Starting stateful set controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.441001  699204 shared_informer.go:311] Waiting for caches to sync for stateful set
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.457686  699204 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.457872  699204 publisher.go:102] "Starting root CA cert publisher controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.457919  699204 shared_informer.go:311] Waiting for caches to sync for crt configmap
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.476167  699204 controllermanager.go:642] "Started controller" controller="endpoints-controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.476358  699204 endpoints_controller.go:174] "Starting endpoint controller"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.476409  699204 shared_informer.go:311] Waiting for caches to sync for endpoint
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.484160  699204 shared_informer.go:311] Waiting for caches to sync for resource quota
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.492856  699204 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.505431  699204 shared_informer.go:311] Waiting for caches to sync for garbage collector
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.511314  699204 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.525100  699204 shared_informer.go:318] Caches are synced for cronjob
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.525366  699204 shared_informer.go:318] Caches are synced for TTL after finished
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.526917  699204 shared_informer.go:318] Caches are synced for ReplicationController
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.527967  699204 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.542299  699204 shared_informer.go:318] Caches are synced for job
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.551938  699204 shared_informer.go:318] Caches are synced for namespace
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.558067  699204 shared_informer.go:318] Caches are synced for crt configmap
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.563775  699204 shared_informer.go:318] Caches are synced for expand
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.572194  699204 shared_informer.go:318] Caches are synced for ReplicaSet
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.572347  699204 shared_informer.go:318] Caches are synced for certificate-csrapproving
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.573755  699204 shared_informer.go:318] Caches are synced for HPA
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.575752  699204 shared_informer.go:318] Caches are synced for PV protection
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.576169  699204 shared_informer.go:318] Caches are synced for service account
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.576552  699204 shared_informer.go:318] Caches are synced for endpoint
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.577090  699204 shared_informer.go:318] Caches are synced for ephemeral
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.586068  699204 shared_informer.go:318] Caches are synced for PVC protection
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.590469  699204 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.591214  699204 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.592452  699204 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
Jan 23 15:00:43 vm-165 k3s[699204]: W0123 15:00:43.616977  699204 handler_proxy.go:93] no RequestInfo found in the context
Jan 23 15:00:43 vm-165 k3s[699204]: E0123 15:00:43.617139  699204 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan 23 15:00:43 vm-165 k3s[699204]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan 23 15:00:43 vm-165 k3s[699204]: E0123 15:00:43.617307  699204 handler_proxy.go:137] error resolving kube-system/metrics-server: no endpoints available for service "metrics-server"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.641105  699204 shared_informer.go:318] Caches are synced for stateful set
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.674341  699204 shared_informer.go:318] Caches are synced for disruption
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.677753  699204 shared_informer.go:318] Caches are synced for deployment
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.684653  699204 shared_informer.go:318] Caches are synced for resource quota
Jan 23 15:00:43 vm-165 k3s[699204]: time="2024-01-23T15:00:43+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.703779  699204 shared_informer.go:318] Caches are synced for GC
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.705464  699204 controller.go:624] quota admission added evaluator for: replicasets.apps
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.712849  699204 event.go:307] "Event occurred" object="kube-system/local-path-provisioner" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-84db5d44d9 to 1"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.715927  699204 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-67c658944b to 1"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.716532  699204 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6799fbcd5 to 1"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.721116  699204 shared_informer.go:318] Caches are synced for resource quota
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.721632  699204 shared_informer.go:318] Caches are synced for endpoint_slice
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.727377  699204 shared_informer.go:318] Caches are synced for TTL
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.730050  699204 shared_informer.go:318] Caches are synced for daemon sets
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.730549  699204 shared_informer.go:318] Caches are synced for attach detach
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.745223  699204 shared_informer.go:318] Caches are synced for node
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.745378  699204 range_allocator.go:174] "Sending events to api server"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.745422  699204 range_allocator.go:178] "Starting range CIDR allocator"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.745432  699204 shared_informer.go:311] Waiting for caches to sync for cidrallocator
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.745444  699204 shared_informer.go:318] Caches are synced for cidrallocator
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.759033  699204 event.go:307] "Event occurred" object="kube-system/local-path-provisioner-84db5d44d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-84db5d44d9-jjw6f"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.759698  699204 event.go:307] "Event occurred" object="kube-system/metrics-server-67c658944b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-67c658944b-lv5k4"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.759983  699204 event.go:307] "Event occurred" object="kube-system/coredns-6799fbcd5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6799fbcd5-wzffs"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.769666  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/local-path-provisioner-84db5d44d9" duration="55.583306ms"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.775439  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-67c658944b" duration="58.900655ms"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.776222  699204 shared_informer.go:318] Caches are synced for persistent volume
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.778470  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-6799fbcd5" duration="61.861692ms"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.781309  699204 shared_informer.go:318] Caches are synced for taint
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.781482  699204 taint_manager.go:205] "Starting NoExecuteTaintManager"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.781586  699204 taint_manager.go:210] "Sending events to api server"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.781632  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/local-path-provisioner-84db5d44d9" duration="11.848595ms"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.781797  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/local-path-provisioner-84db5d44d9" duration="95.056µs"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.792597  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-67c658944b" duration="17.053515ms"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.793548  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-67c658944b" duration="188.012µs"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.794056  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-6799fbcd5" duration="15.486629ms"
Jan 23 15:00:43 vm-165 k3s[699204]: I0123 15:00:43.794170  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-6799fbcd5" duration="61.754µs"
Jan 23 15:00:44 vm-165 k3s[699204]: I0123 15:00:44.106488  699204 shared_informer.go:318] Caches are synced for garbage collector
Jan 23 15:00:44 vm-165 k3s[699204]: I0123 15:00:44.183181  699204 shared_informer.go:318] Caches are synced for garbage collector
Jan 23 15:00:44 vm-165 k3s[699204]: I0123 15:00:44.183305  699204 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
Jan 23 15:00:44 vm-165 k3s[699204]: W0123 15:00:44.617736  699204 handler_proxy.go:93] no RequestInfo found in the context
Jan 23 15:00:44 vm-165 k3s[699204]: E0123 15:00:44.617934  699204 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan 23 15:00:44 vm-165 k3s[699204]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan 23 15:00:44 vm-165 k3s[699204]: I0123 15:00:44.617960  699204 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan 23 15:00:44 vm-165 k3s[699204]: W0123 15:00:44.617969  699204 handler_proxy.go:93] no RequestInfo found in the context
Jan 23 15:00:44 vm-165 k3s[699204]: E0123 15:00:44.618068  699204 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
Jan 23 15:00:44 vm-165 k3s[699204]: I0123 15:00:44.619129  699204 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan 23 15:00:44 vm-165 k3s[699204]: time="2024-01-23T15:00:44+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:46 vm-165 k3s[699204]: time="2024-01-23T15:00:46+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:47 vm-165 k3s[699204]: time="2024-01-23T15:00:47+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:48 vm-165 k3s[699204]: time="2024-01-23T15:00:48+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:49 vm-165 k3s[699204]: time="2024-01-23T15:00:49+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:50 vm-165 k3s[699204]: time="2024-01-23T15:00:50+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:52 vm-165 k3s[699204]: time="2024-01-23T15:00:52+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:53 vm-165 k3s[699204]: time="2024-01-23T15:00:53+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:54 vm-165 k3s[699204]: time="2024-01-23T15:00:54+02:00" level=info msg="Waiting for control-plane node vm-165.tests.lab.net startup: nodes \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported docker.io/rancher/klipper-helm:v0.8.2-build20230815"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported docker.io/rancher/klipper-lb:v0.4.4"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported docker.io/rancher/local-path-provisioner:v0.0.24"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported docker.io/rancher/mirrored-coredns-coredns:1.10.1"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported docker.io/rancher/mirrored-library-busybox:1.36.1"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported docker.io/rancher/mirrored-library-traefik:2.10.5"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported docker.io/rancher/mirrored-metrics-server:v0.6.3"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported docker.io/rancher/mirrored-pause:3.6"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Imported images from /var/lib/rancher/k3s/agent/images/k3s-airgap-images-amd64.tar.gz in 29.181608796s"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-log-max-files=4 --container-log-max-size=50Mi --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=vm-165.tests.lab.net --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.54.65.165 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Handling backend connection request [vm-165.tests.lab.net]"
Jan 23 15:00:55 vm-165 k3s[699204]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Jan 23 15:00:55 vm-165 k3s[699204]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Jan 23 15:00:55 vm-165 k3s[699204]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.529850  699204 server.go:202] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.534181  699204 server.go:462] "Kubelet version" kubeletVersion="v1.28.5+k3s1"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.534280  699204 server.go:464] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.537877  699204 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.564814  699204 server.go:720] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.565599  699204 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.565898  699204 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.565933  699204 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.565947  699204 container_manager_linux.go:301] "Creating device plugin manager"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.566086  699204 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.566219  699204 kubelet.go:393] "Attempting to sync node with API server"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.566264  699204 kubelet.go:298] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.566298  699204 kubelet.go:309] "Adding apiserver pod source"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.566316  699204 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.567761  699204 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.11-k3s2" apiVersion="v1"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.568696  699204 server.go:1227] "Started kubelet"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.569041  699204 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.569312  699204 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.569631  699204 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.569785  699204 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.569887  699204 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.571131  699204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.571295  699204 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.571474  699204 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"vm-165.tests.lab.net\" not found"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.571511  699204 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.571666  699204 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.571704  699204 server.go:462] "Adding debug handlers to kubelet server"
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.580302  699204 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ad686d8dddf66c9f29cd4d4398b265bfb6fe54f80a65b0b564a7f0c5fc58d48\": not found" containerID="0ad686d8dddf66c9f29cd4d4398b265bfb6fe54f80a65b0b564a7f0c5fc58d48"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.580394  699204 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="0ad686d8dddf66c9f29cd4d4398b265bfb6fe54f80a65b0b564a7f0c5fc58d48" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ad686d8dddf66c9f29cd4d4398b265bfb6fe54f80a65b0b564a7f0c5fc58d48\": not found"
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.581653  699204 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e67e963ea942627c360508605fe19a3481e9c26304aeb5dc5b15d3cd78ebd13b\": not found" containerID="e67e963ea942627c360508605fe19a3481e9c26304aeb5dc5b15d3cd78ebd13b"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.581752  699204 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="e67e963ea942627c360508605fe19a3481e9c26304aeb5dc5b15d3cd78ebd13b" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e67e963ea942627c360508605fe19a3481e9c26304aeb5dc5b15d3cd78ebd13b\": not found"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.585986  699204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.586212  699204 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"vm-165.tests.lab.net\" not found" node="vm-165.tests.lab.net"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.587723  699204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.587803  699204 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.587851  699204 kubelet.go:2303] "Starting kubelet main sync loop"
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.588107  699204 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.673874  699204 kubelet_node_status.go:70] "Attempting to register node" node="vm-165.tests.lab.net"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.689142  699204 kubelet_node_status.go:73] "Successfully registered node" node="vm-165.tests.lab.net"
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.691877  699204 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.696505  699204 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"vm-165.tests.lab.net\" does not exist"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.697104  699204 node_controller.go:431] Initializing node vm-165.tests.lab.net with cloud provider
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.697532  699204 node_controller.go:240] error syncing 'vm-165.tests.lab.net': failed to get instance metadata for node vm-165.tests.lab.net: address annotations not yet set, requeuing
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.704406  699204 node_controller.go:431] Initializing node vm-165.tests.lab.net with cloud provider
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.704795  699204 node_controller.go:240] error syncing 'vm-165.tests.lab.net': failed to get instance metadata for node vm-165.tests.lab.net: address annotations not yet set, requeuing
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.714806  699204 node_controller.go:431] Initializing node vm-165.tests.lab.net with cloud provider
Jan 23 15:00:55 vm-165 k3s[699204]: E0123 15:00:55.714907  699204 node_controller.go:240] error syncing 'vm-165.tests.lab.net': failed to get instance metadata for node vm-165.tests.lab.net: address annotations not yet set, requeuing
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.718330  699204 node_controller.go:431] Initializing node vm-165.tests.lab.net with cloud provider
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Annotations and labels have been set successfully on node: vm-165.tests.lab.net"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Starting flannel with backend vxlan"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.728458  699204 range_allocator.go:380] "Set node PodCIDR" node="vm-165.tests.lab.net" podCIDRs=["10.42.0.0/24"]
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Flannel found PodCIDR assigned for node vm-165.tests.lab.net"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="The interface enp1s0 with ipv4 address 10.54.65.165 will be used by flannel"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.730913  699204 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.730961  699204 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.730998  699204 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.731488  699204 kube.go:145] Waiting 10m0s for node controller to sync
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.731519  699204 kube.go:489] Starting kube subnet manager
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.733882  699204 policy_none.go:49] "None policy: Start"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.736129  699204 memory_manager.go:169] "Starting memorymanager" policy="None"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.736208  699204 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.751727  699204 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.752606  699204 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=vm-165.tests.lab.net --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Jan 23 15:00:55 vm-165 k3s[699204]: time="2024-01-23T15:00:55+02:00" level=info msg="Starting the netpol controller version v2.0.0-20230925161250-364f994b140b, built on 2023-12-27T16:01:02Z, go1.20.12"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.842028  699204 network_policy_controller.go:164] Starting network policy controller
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.851758  699204 node_controller.go:502] Successfully initialized node vm-165.tests.lab.net with cloud provider
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.852072  699204 event.go:307] "Event occurred" object="vm-165.tests.lab.net" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.855735  699204 node.go:141] Successfully retrieved node IP: 10.54.65.165
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.856011  699204 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.857601  699204 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.866039  699204 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.869217  699204 server_others.go:152] "Using iptables Proxier"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.869376  699204 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.869395  699204 server_others.go:438] "Defaulting to no-op detect-local"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.869430  699204 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.869778  699204 server.go:846] "Version info" version="v1.28.5+k3s1"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.869803  699204 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.870669  699204 config.go:188] "Starting service config controller"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.870735  699204 shared_informer.go:311] Waiting for caches to sync for service config
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.870785  699204 config.go:97] "Starting endpoint slice config controller"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.870799  699204 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.870814  699204 config.go:315] "Starting node config controller"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.870857  699204 shared_informer.go:311] Waiting for caches to sync for node config
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.901868  699204 network_policy_controller.go:176] Starting network policy controller full sync goroutine
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.963685  699204 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.970914  699204 shared_informer.go:318] Caches are synced for service config
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.970971  699204 shared_informer.go:318] Caches are synced for node config
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.970933  699204 shared_informer.go:318] Caches are synced for endpoint slice config
Jan 23 15:00:55 vm-165 k3s[699204]: I0123 15:00:55.978623  699204 topologycache.go:237] "Can't get CPU or zone information for node" node="vm-165.tests.lab.net"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.161280  699204 serving.go:355] Generated self-signed cert in-memory
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=info msg="Stopped tunnel to 127.0.0.1:6443"
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=info msg="Connecting to proxy" url="wss://10.54.65.165:6443/v1-k3s/connect"
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=info msg="Tunnel authorizer set Kubelet Port 10250"
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=info msg="Handling backend connection request [vm-165.tests.lab.net]"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.567691  699204 apiserver.go:52] "Watching apiserver"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.672105  699204 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.731888  699204 kube.go:152] Node controller sync successful
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.732091  699204 vxlan.go:141] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.753398  699204 kube.go:510] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.0.0/24]
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.763918  699204 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.5+k3s1"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.763984  699204 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=info msg="Running flannel backend."
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.767289  699204 iptables.go:290] generated 3 rules
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.767291  699204 vxlan_network.go:65] watching for new subnet leases
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.768649  699204 iptables.go:290] generated 7 rules
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.771804  699204 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.771821  699204 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.771883  699204 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.771851  699204 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.772061  699204 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.772141  699204 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.772557  699204 secure_serving.go:213] Serving securely on 127.0.0.1:10259
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.772627  699204 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.798690  699204 iptables.go:283] bootstrap done
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.810046  699204 iptables.go:283] bootstrap done
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.872916  699204 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.873151  699204 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.873496  699204 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.873788  699204 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-scheduler...
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.891699  699204 leaderelection.go:260] successfully acquired lease kube-system/kube-scheduler
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.906806  699204 topology_manager.go:215] "Topology Admit Handler" podUID="06734a07-9939-4da4-b959-ae54c30be85f" podNamespace="kube-system" podName="metrics-server-67c658944b-lv5k4"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.906833  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-67c658944b" duration="152.161µs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.909063  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-6799fbcd5" duration="146.801µs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.909373  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/local-path-provisioner-84db5d44d9" duration="174.878µs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.909619  699204 topology_manager.go:215] "Topology Admit Handler" podUID="e5551773-502a-4e54-8d2d-63baf98cc232" podNamespace="kube-system" podName="coredns-6799fbcd5-wzffs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.909921  699204 topology_manager.go:215] "Topology Admit Handler" podUID="d17ae967-0fc9-4389-9f1b-078ad94abbcf" podNamespace="kube-system" podName="local-path-provisioner-84db5d44d9-jjw6f"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.947660  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-67c658944b" duration="147.031µs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.967796  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/local-path-provisioner-84db5d44d9" duration="166.885µs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.980107  699204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d17ae967-0fc9-4389-9f1b-078ad94abbcf-config-volume\") pod \"local-path-provisioner-84db5d44d9-jjw6f\" (UID: \"d17ae967-0fc9-4389-9f1b-078ad94abbcf\") " pod="kube-system/local-path-provisioner-84db5d44d9-jjw6f"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.980212  699204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p74x\" (UniqueName: \"kubernetes.io/projected/d17ae967-0fc9-4389-9f1b-078ad94abbcf-kube-api-access-6p74x\") pod \"local-path-provisioner-84db5d44d9-jjw6f\" (UID: \"d17ae967-0fc9-4389-9f1b-078ad94abbcf\") " pod="kube-system/local-path-provisioner-84db5d44d9-jjw6f"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.980500  699204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/06734a07-9939-4da4-b959-ae54c30be85f-tmp-dir\") pod \"metrics-server-67c658944b-lv5k4\" (UID: \"06734a07-9939-4da4-b959-ae54c30be85f\") " pod="kube-system/metrics-server-67c658944b-lv5k4"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.980688  699204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n9kf\" (UniqueName: \"kubernetes.io/projected/06734a07-9939-4da4-b959-ae54c30be85f-kube-api-access-6n9kf\") pod \"metrics-server-67c658944b-lv5k4\" (UID: \"06734a07-9939-4da4-b959-ae54c30be85f\") " pod="kube-system/metrics-server-67c658944b-lv5k4"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.980896  699204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume\") pod \"coredns-6799fbcd5-wzffs\" (UID: \"e5551773-502a-4e54-8d2d-63baf98cc232\") " pod="kube-system/coredns-6799fbcd5-wzffs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.981009  699204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-config-volume\" (UniqueName: \"kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-custom-config-volume\") pod \"coredns-6799fbcd5-wzffs\" (UID: \"e5551773-502a-4e54-8d2d-63baf98cc232\") " pod="kube-system/coredns-6799fbcd5-wzffs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.981106  699204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vkb\" (UniqueName: \"kubernetes.io/projected/e5551773-502a-4e54-8d2d-63baf98cc232-kube-api-access-28vkb\") pod \"coredns-6799fbcd5-wzffs\" (UID: \"e5551773-502a-4e54-8d2d-63baf98cc232\") " pod="kube-system/coredns-6799fbcd5-wzffs"
Jan 23 15:00:56 vm-165 k3s[699204]: I0123 15:00:56.987884  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-6799fbcd5" duration="122.239µs"
Jan 23 15:00:57 vm-165 k3s[699204]: E0123 15:00:57.082019  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:00:57.581872216 +0200 EET m=+35.811588534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:00:57 vm-165 k3s[699204]: E0123 15:00:57.585972  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:00:58.585937131 +0200 EET m=+36.815653339 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:00:58 vm-165 k3s[699204]: time="2024-01-23T15:00:58+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Jan 23 15:00:58 vm-165 k3s[699204]: E0123 15:00:58.592084  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:01:00.592031241 +0200 EET m=+38.821747492 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:00:58 vm-165 k3s[699204]: I0123 15:00:58.639193  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/local-path-provisioner-84db5d44d9" duration="12.395766ms"
Jan 23 15:00:58 vm-165 k3s[699204]: I0123 15:00:58.639411  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/local-path-provisioner-84db5d44d9" duration="100.723µs"
Jan 23 15:00:58 vm-165 k3s[699204]: time="2024-01-23T15:00:58+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Jan 23 15:00:58 vm-165 k3s[699204]: I0123 15:00:58.783101  699204 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
Jan 23 15:00:58 vm-165 k3s[699204]: I0123 15:00:58.783410  699204 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="vm-165.tests.lab.net"
Jan 23 15:00:58 vm-165 k3s[699204]: I0123 15:00:58.783433  699204 event.go:307] "Event occurred" object="vm-165.tests.lab.net" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node vm-165.tests.lab.net event: Registered Node vm-165.tests.lab.net in Controller"
Jan 23 15:00:58 vm-165 k3s[699204]: I0123 15:00:58.783576  699204 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
Jan 23 15:00:59 vm-165 k3s[699204]: time="2024-01-23T15:00:59+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Jan 23 15:00:59 vm-165 k3s[699204]: time="2024-01-23T15:00:59+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Jan 23 15:00:59 vm-165 k3s[699204]: I0123 15:00:59.638222  699204 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/local-path-provisioner-84db5d44d9-jjw6f" podStartSLOduration=16.6381034 podCreationTimestamp="2024-01-23 15:00:43 +0200 EET" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-23 15:00:58.626887134 +0200 EET m=+36.856603462" watchObservedRunningTime="2024-01-23 15:00:59.6381034 +0200 EET m=+37.867819689"
Jan 23 15:00:59 vm-165 k3s[699204]: I0123 15:00:59.638838  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-67c658944b" duration="122.472µs"
Jan 23 15:00:59 vm-165 k3s[699204]: time="2024-01-23T15:00:59+02:00" level=info msg="Labels and annotations have been set successfully on node: vm-165.tests.lab.net"
Jan 23 15:00:59 vm-165 k3s[699204]: time="2024-01-23T15:00:59+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Jan 23 15:01:00 vm-165 k3s[699204]: E0123 15:01:00.604732  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:01:04.604678391 +0200 EET m=+42.834394688 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:01:04 vm-165 k3s[699204]: E0123 15:01:04.637275  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:01:12.637184694 +0200 EET m=+50.866900941 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:01:10 vm-165 k3s[699204]: time="2024-01-23T15:01:10+02:00" level=info msg="Adding etcd member vm-165.tests.lab.net-41f54d96 status condition"
Jan 23 15:01:10 vm-165 k3s[699204]: time="2024-01-23T15:01:10+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Jan 23 15:01:12 vm-165 k3s[699204]: E0123 15:01:12.698421  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:01:28.698358883 +0200 EET m=+66.928075221 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:01:13 vm-165 k3s[699204]: E0123 15:01:13.696853  699204 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
Jan 23 15:01:14 vm-165 k3s[699204]: I0123 15:01:14.124745  699204 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
Jan 23 15:01:15 vm-165 k3s[699204]: I0123 15:01:15.464329  699204 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/metrics-server-67c658944b-lv5k4" podStartSLOduration=32.46415551 podCreationTimestamp="2024-01-23 15:00:43 +0200 EET" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-23 15:00:59.638635381 +0200 EET m=+37.868351722" watchObservedRunningTime="2024-01-23 15:01:15.46415551 +0200 EET m=+53.693871820"
Jan 23 15:01:15 vm-165 k3s[699204]: I0123 15:01:15.482642  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-67c658944b" duration="18.447615ms"
Jan 23 15:01:15 vm-165 k3s[699204]: I0123 15:01:15.483447  699204 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-67c658944b" duration="292.945µs"
Jan 23 15:01:15 vm-165 k3s[699204]: I0123 15:01:15.561750  699204 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
Jan 23 15:01:27 vm-165 k3s[699204]: {"level":"warn","ts":"2024-01-23T15:01:27.200323+0200","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.081604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:6464"}
Jan 23 15:01:27 vm-165 k3s[699204]: {"level":"info","ts":"2024-01-23T15:01:27.200576+0200","caller":"traceutil/trace.go:171","msg":"trace[1476067477] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:593; }","duration":"107.396214ms","start":"2024-01-23T15:01:27.093144+0200","end":"2024-01-23T15:01:27.20054+0200","steps":["trace[1476067477] 'range keys from in-memory index tree'  (duration: 106.750094ms)"],"step_count":1}
Jan 23 15:01:28 vm-165 k3s[699204]: E0123 15:01:28.713328  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:02:00.713221366 +0200 EET m=+98.942937736 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:02:00 vm-165 k3s[699204]: E0123 15:02:00.741033  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:03:04.740967382 +0200 EET m=+162.970683632 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:02:59 vm-165 k3s[699204]: E0123 15:02:59.976799  699204 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[config-volume], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-6799fbcd5-wzffs" podUID="e5551773-502a-4e54-8d2d-63baf98cc232"
Jan 23 15:03:04 vm-165 k3s[699204]: E0123 15:03:04.800928  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:05:06.800871667 +0200 EET m=+285.030587951 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:05:06 vm-165 k3s[699204]: E0123 15:05:06.815139  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:07:08.815030265 +0200 EET m=+407.044746512 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:05:18 vm-165 k3s[699204]: E0123 15:05:18.590003  699204 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[config-volume], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-6799fbcd5-wzffs" podUID="e5551773-502a-4e54-8d2d-63baf98cc232"
Jan 23 15:06:01 vm-165 k3s[699204]: time="2024-01-23T15:06:01+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
Jan 23 15:07:08 vm-165 k3s[699204]: E0123 15:07:08.894663  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:09:10.894587981 +0200 EET m=+529.124304231 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
Jan 23 15:07:33 vm-165 k3s[699204]: E0123 15:07:33.590181  699204 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[config-volume], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-6799fbcd5-wzffs" podUID="e5551773-502a-4e54-8d2d-63baf98cc232"
Jan 23 15:09:10 vm-165 k3s[699204]: E0123 15:09:10.977718  699204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume podName:e5551773-502a-4e54-8d2d-63baf98cc232 nodeName:}" failed. No retries permitted until 2024-01-23 15:11:12.977656238 +0200 EET m=+651.207372520 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5551773-502a-4e54-8d2d-63baf98cc232-config-volume") pod "coredns-6799fbcd5-wzffs" (UID: "e5551773-502a-4e54-8d2d-63baf98cc232") : configmap references non-existent config key: NodeHosts
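
The state can also be queried directly against the apiserver (a quick check, assuming kubectl is pointed at this cluster; the second command prints nothing if the key is absent):

# does the coredns configmap exist at all?
kubectl -n kube-system get configmap coredns

# if it exists, is the NodeHosts key present?
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'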

@brandond
Member

Please, please attach logs instead of pasting pages and pages into your comment.

@brandond
Member

Jan 23 15:00:31 vm-165 k3s[699204]: I0123 15:00:31.938011  699204 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""

Jan 23 15:00:56 vm-165 k3s[699204]: time="2024-01-23T15:00:56+02:00" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"

The coredns manifest includes the coredns configmap, yet the apiserver indicates that the configmap does not exist. Can you confirm the contents of that file on disk?
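
For example (default k3s paths, taken from the log line above; adjust if you run with a custom data-dir):

# the manifest as deployed on disk
sudo cat /var/lib/rancher/k3s/server/manifests/coredns.yaml

# the object the apiserver is actually serving
kubectl -n kube-system get configmap coredns -o yaml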

@harsimranmaan
Contributor

I see the same issue when reinstalling k3s on an existing node.

k3s -v
k3s version v1.28.6-rc2+k3s1 (39a00015)
go version go1.20.13
Jan 23 21:08:33 ip-172-31-14-222 k3s[177782]: E0123 21:08:33.181006  177782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f8deee67-4280-4982-bb1e-d69722202533-config-volume podName:f8deee67-4280-4982-bb1e-d69722202533 nodeName:}" failed. No retries permitted until 2024-01-23 21:10:35.180986425 +0000 UTC m=+632.873841191 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f8deee67-4280-4982-bb1e-d69722202533-config-volume") pod "coredns-6799fbcd5-9gx7m" (UID: "f8deee67-4280-4982-bb1e-d69722202533") : configmap references non-existent config key: NodeHosts
Jan 23 21:09:05 ip-172-31-14-222 k3s[177782]: E0123 21:09:05.595085  177782 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[config-volume], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-6799fbcd5-9gx7m" podUID="f8deee67-4280-4982-bb1e-d69722202533"
Jan 23 21:10:02 ip-172-31-14-222 k3s[177782]: time="2024-01-23T21:10:02Z" level=info msg="COMPACT revision 0 has already been compacted"
Jan 23 21:10:35 ip-172-31-14-222 k3s[177782]: E0123 21:10:35.223217  177782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f8deee67-4280-4982-bb1e-d69722202533-config-volume podName:f8deee67-4280-4982-bb1e-d69722202533 nodeName:}" failed. No retries permitted until 2024-01-23 21:12:37.223197275 +0000 UTC m=+754.916052040 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f8deee67-4280-4982-bb1e-d69722202533-config-volume") pod "coredns-6799fbcd5-9gx7m" (UID: "f8deee67-4280-4982-bb1e-d69722202533") : configmap references non-existent config key: NodeHosts
k describe po -n kube-system   coredns-6799fbcd5-9gx7m
Name:                 coredns-6799fbcd5-9gx7m
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 ip-172-31-14-222/172.31.14.222
Start Time:           Tue, 23 Jan 2024 21:00:19 +0000
Labels:               k8s-app=kube-dns
                      pod-template-hash=6799fbcd5
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-6799fbcd5
Containers:
  coredns:
    Container ID:
    Image:         rancher/mirrored-coredns-coredns:1.10.1
    Image ID:
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=2s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /etc/coredns/custom from custom-config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fb8v9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  custom-config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns-custom
    Optional:  true
  kube-api-access-fb8v9:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:        <nil>
    DownwardAPI:              true
QoS Class:                    Burstable
Node-Selectors:               kubernetes.io/os=linux
Tolerations:                  CriticalAddonsOnly op=Exists
                              node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                              node-role.kubernetes.io/master:NoSchedule op=Exists
                              node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  kubernetes.io/hostname:DoNotSchedule when max skew 1 is exceeded for selector k8s-app=kube-dns
Events:
  Type     Reason       Age                 From               Message
  ----     ------       ----                ----               -------
  Normal   Scheduled    11m                 default-scheduler  Successfully assigned kube-system/coredns-6799fbcd5-9gx7m to ip-172-31-14-222
  Warning  FailedMount  57s (x13 over 11m)  kubelet            MountVolume.SetUp failed for volume "config-volume" : configmap references non-existent config key: NodeHosts

@harsimranmaan
Copy link
Contributor

Node looks good

Name:               ip-172-31-14-222
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-172-31-14-222
                    kubernetes.io/os=linux
                    node-role.appliance.securiti.io/primary=true
                    node-role.kubernetes.io/control-plane=true
                    node-role.kubernetes.io/master=true
                    role=node
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"76:74:84:50:87:8e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.31.14.222
                    k3s.io/encryption-config-hash: start-dcca54130d05e9b8b799b299d8b94a668ec0c20a2122be382e838160c7981bd5
                    k3s.io/node-args:
                      ["server","--write-kubeconfig-mode","0600","--token-file","/etc/rancher/k3s/securiti-token","--service-cidr","10.43.0.0/16","--cluster-cid...
                    k3s.io/node-config-hash: 5K4Q4MGONAD4L4GOVZ64G7I3DBB6O4MU3YFVMK7ZUSSUYUZAJD3Q====
                    k3s.io/node-env: {"K3S_DATA_DIR":"/mnt/rancher/k3s/data/46057a1eb55d262c2ef9480c0ed431b83123f777044ecff60bae0c2906493fb8"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 23 Jan 2024 21:00:14 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  ip-172-31-14-222
  AcquireTime:     <unset>
  RenewTime:       Tue, 23 Jan 2024 21:17:36 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 23 Jan 2024 21:15:43 +0000   Tue, 23 Jan 2024 21:00:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 23 Jan 2024 21:15:43 +0000   Tue, 23 Jan 2024 21:00:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 23 Jan 2024 21:15:43 +0000   Tue, 23 Jan 2024 21:00:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 23 Jan 2024 21:15:43 +0000   Tue, 23 Jan 2024 21:00:14 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  172.31.14.222
  Hostname:    ip-172-31-14-222
Capacity:
  cpu:                8
  ephemeral-storage:  157252588Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32300788Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  152975317487
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32300788Ki
  pods:               110
System Info:
  Machine ID:                 ec259277641405433b30fac0ae0f31f2
  System UUID:                ec259277-6414-0543-3b30-fac0ae0f31f2
  Boot ID:                    a6ea1790-f248-465b-af26-01cfa5c0a9ed
  Kernel Version:             5.14.21-150400.24.100-default
  OS Image:                   SUSE Linux Enterprise Server 15 SP4
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.7.11-k3s2
  Kubelet Version:            v1.28.6-rc2+k3s1
  Kube-Proxy Version:         v1.28.6-rc2+k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
Non-terminated Pods:          (3 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-6799fbcd5-9gx7m                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
  kube-system                 local-path-provisioner-84db5d44d9-j5k6l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
  kube-system                 metrics-server-67c658944b-wkbk8            100m (1%)     0 (0%)      70Mi (0%)        0 (0%)         17m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                200m (2%)   0 (0%)
  memory             140Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                          Age                From             Message
  ----     ------                          ----               ----             -------
  Normal   Starting                        17m                kube-proxy
  Normal   NodePasswordValidationComplete  17m                k3s-supervisor   Deferred node password secret validation complete
  Normal   Starting                        17m                kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity             17m                kubelet          invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced         17m                kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory         17m (x2 over 17m)  kubelet          Node ip-172-31-14-222 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure           17m (x2 over 17m)  kubelet          Node ip-172-31-14-222 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID            17m (x2 over 17m)  kubelet          Node ip-172-31-14-222 status is now: NodeHasSufficientPID
  Normal   NodeReady                       17m                kubelet          Node ip-172-31-14-222 status is now: NodeReady
  Normal   RegisteredNode                  17m                node-controller  Node ip-172-31-14-222 event: Registered Node ip-172-31-14-222 in Controller
  

@harsimranmaan
Copy link
Contributor

The manifest on disk is missing NodeHosts

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        import /etc/coredns/custom/*.override
    }
    import /etc/coredns/custom/*.server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  revisionHistoryLimit: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: "system-cluster-critical"
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      nodeSelector:
        kubernetes.io/os: linux
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              k8s-app: kube-dns
      containers:
      - name: coredns
        image: rancher/mirrored-coredns-coredns:1.10.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: custom-config-volume
          mountPath: /etc/coredns/custom
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 2
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
            - key: NodeHosts
              path: NodeHosts
        - name: custom-config-volume
          configMap:
            name: coredns-custom
            optional: true
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.43.0.10
  clusterIPs: [10.43.0.10]
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
  ipFamilyPolicy: SingleStack
  

@harsimranmaan
Copy link
Contributor

The only relevant lines in the logs:

Jan 22 23:20:13 ip-172-31-14-222 k3s[60896]: W0122 23:20:13.562441   60896 reflector.go:458] k8s.io/[email protected]/tools/cache/reflector.go:229: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
Jan 23 21:00:02 ip-172-31-14-222 k3s[177782]: time="2024-01-23T21:00:02Z" level=error msg="error while range on /registry/serviceaccounts/ /registry/serviceaccounts/: context canceled"
Jan 23 21:00:02 ip-172-31-14-222 k3s[177782]: time="2024-01-23T21:00:02Z" level=error msg="error while range on /registry/serviceaccounts/ /registry/serviceaccounts/: context canceled"
Jan 23 21:00:03 ip-172-31-14-222 k3s[177782]: E0123 21:00:03.287253  177782 controller.go:102] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service

@brandond
Copy link
Member

brandond commented Jan 24, 2024

The manifest on disk isn't intended to have that key. The key is supposed to be added and modified by a controller that runs on the servers. However, that controller has logged errors complaining that the configmap itself does not exist. Are you able to view that configmap in the running cluster?

@safderali5
Copy link
Author

safderali5 commented Jan 24, 2024

In my setup the configmap is visible in the cluster, but the controller is still unable to fetch it.

$ kubectl -n kube-system get configmap coredns -o yaml

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        import /etc/coredns/custom/*.override
    }
    import /etc/coredns/custom/*.server
kind: ConfigMap
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4yQwWrzMBCEX0Xs2fEf20nsX9BDybH02lMva2kdq1Z2g6SkBJN3L8IUCiVtbyNGOzvfzoAn90IhOmHQcKmgAIsJQc+wl0CD8wQaSr1t1PzKSilFIUiIix4JfRoXHQjtdZHTuafAlCgq488xUSi9wK2AybEFDXvhwR2e8QQFHCnh50ZkloTJCcf8lP6NTIqUyuCkNJiSp9LJP5czoLjryztTWB0uE2iYmvjFuVSFenJsHx6tFf41gvGY6Y0Eshz/9D2e0OSZfIJVvMZExwzusSf/I9SIcQQNvaG6a+r/XVdV7abBddPtsN9W66Eedi0N7aberM22zaHf6t0tcPsIAAD//8Ix+PfoAQAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: coredns
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2024-01-23T13:52:29Z"
  labels:
    objectset.rio.cattle.io/hash: bce283298811743a0386ab510f2f67ef74240c57
  name: coredns
  namespace: kube-system
  resourceVersion: "234"
  uid: d2882fe7-c7df-4ee7-a3b6-b6446ae9869b

@brandond
Copy link
Member

I'm unable to reproduce this. All of our CI tests also ensure that coredns comes up properly in order for any commit to be merged. Is there anything unique about this system? Can you duplicate this on a clean host?

I will also note that you said you're running 1.29.0, but the logs show 1.28.5. Are you seeing the same on both versions?

@safderali5
Copy link
Author

Yes, I see this on both versions, 1.28 and 1.29. For me it happens when trying to upgrade k3s:

  • Install k3s 1.27.X
  • upgrade to 1.28.X or 1.29.X
  • uninstall k3s

then repeat the same loop on the same VM (sketched below).

If the VM is fresh, the installation goes through. But it is important for us to make sure that upgrading a production system will not hit the same problem, hence I am trying to figure out the root cause.
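A minimal sketch of that loop, assuming the stock install script and using the release versions mentioned later in this thread as placeholders:

# install 1.27.x
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.27.3+k3s1" sh -
# upgrade in place by re-running the installer with a newer version
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.28.5+k3s1" sh -
# uninstall, then repeat on the same VM
/usr/local/bin/k3s-uninstall.sh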

@brandond
Copy link
Member

brandond commented Jan 24, 2024

@safderali5 what process are you using to upgrade from 1.27? When you upgrade to 1.29, are you stepping through 1.28 first? Can you list the specific patch releases that you are upgrading from and to?

Similarly, how are you uninstalling k3s? Can you confirm that the uninstall script is completing successfully and all files are removed from disk?

@brandond
Copy link
Member

@harrisonbc was your cluster in question also upgraded, and if so from what version?

@harrisonbc
Copy link

I will find out

@brandond
Copy link
Member

@safderali5 your kubectl get node output is inconsistent, but at least one of your nodes is using a non-standard data-dir:
k3s.io/node-env: {"K3S_DATA_DIR":"/mnt/rancher/k3s/data/46057a1eb55d262c2ef9480c0ed431b83123f777044ecff60bae0c2906493fb8"}

The uninstall script only removes content from the default data-dir at /var/lib/rancher/k3s, so you are not actually uninstalling K3s unless you manually remove the content from this path. I suspect you are simply re-testing on the same cluster over and over again.
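For reference, a cleanup sketch for that case, assuming the custom data-dir root is /mnt/rancher/k3s as the annotation suggests (verify the path before running this; it is destructive):

# k3s-uninstall.sh only cleans the default /var/lib/rancher/k3s, so a
# custom data-dir has to be removed by hand after the script finishes
sudo /usr/local/bin/k3s-uninstall.sh
sudo rm -rf /mnt/rancher/k3s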

@safderali5
Copy link
Author

@brandond it's @harsimranmaan's nodes that are using the non-standard data dir.

@safderali5
Copy link
Author

safderali5 commented Jan 25, 2024

@safderali5 what process are you using to upgrade from 1.27? When you upgrade to 1.29, are you stepping through 1.28 first? Can you list the specific patch releases that you are upgrading from and to?

Similarly, how are you uninstalling k3s? Can you confirm that the uninstall script is completing successfully and all files are removed from disk?

I am using the uninstall script generated by k3s, located at /usr/local/bin/k3s-uninstall.sh.
My upgrade path is 1.27.3+k3s1 ---> 1.28.5+k3s1. Installing the 1.29 version was only a one-off trial.

@safderali5
Copy link
Author

I have done a fresh trial on a VM running k3s 1.28.5+k3s1. The steps were:

  1. Remove existing installation using script /usr/local/bin/k3s-uninstall.sh. (This installation had same problem)
  2. Reinstall the same version 1.28.5+k3s1 again on the same host

The problem is still the same: the coredns configmap is not updated with NodeHosts.
Here are the logs of this trial:
coredns-configmap.yaml.log
coredns-pod.log
k3s.log
node.yaml.log
uninstall-old-k3s.log

@brandond
Copy link
Member

brandond commented Jan 25, 2024

You've not mentioned this at any point, but from your node YAML I see that you have a bunch of extra configuration and env vars. Several of the env vars are not used by K3s itself for anything, and I am unsure what you're trying to achieve by setting them.

Can you provide the exact command you are running to install k3s, along with any config that you are placing in config.yaml, and any other config that you are passing to the core Kubernetes components, such as the audit policy file? Please also call out anything that you're deploying to the cluster that might restrict or otherwise interfere with access to resources, such as OPA Gatekeeper or Kyverno.

Also, at one point you were using a custom data-dir on some of the nodes; is this still something you're doing when testing?

k3s.io/node-args:
  - server
  - --cluster-init
  - "true"
  - --disable
  - traefik
  - --default-local-storage-path
  - /var/lib/rancher/storage
  - --disable-helm-controller
  - "true"
  - --https-listen-port
  - "6443"
  - --kube-apiserver-arg
  - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log
  - --kube-apiserver-arg
  - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml
  - --kube-apiserver-arg
  - audit-log-maxage=30
  - --kube-apiserver-arg
  - audit-log-maxbackup=10
  - --kube-apiserver-arg
  - audit-log-maxsize=100
  - --kube-apiserver-arg
  - request-timeout=300s
  - --kube-apiserver-arg
  - service-account-lookup=true
  - --kube-apiserver-arg
  - anonymous-auth=false
  - --kube-cloud-controller-manager-arg
  - webhook-secure-port=0
  - --kube-controller-manager-arg
  - terminated-pod-gc-threshold=12500
  - --kube-controller-manager-arg
  - use-service-account-credentials=true
  - --kubelet-arg
  - container-log-max-files=4
  - --kubelet-arg
  - container-log-max-size=50Mi
  - --kubelet-arg
  - streaming-connection-idle-timeout=5m
  - --kubelet-arg
  - make-iptables-util-chains=true
  - --node-external-ip
  - 10.54.65.165
  - --node-name
  - vm-165.tests.lab.net
  - --prefer-bundled-bin
  - "true"
  - --protect-kernel-defaults
  - "true"
  - --secrets-encryption
  - "true"
  - --tls-san
  - 10.54.65.165
  - --write-kubeconfig-mode
  - "0600"
k3s.io/node-env:
  K3S_DATA_DIR: /var/lib/rancher/k3s/data/28f7e87eba734b7f7731dc900e2c84e0e98ce869f3dcf57f65dc7bbb80e12e56
  K3S_INTERNAL_CERTS_EXPIRATION_DAYS: "730" 
  K3S_UPGRADE: "false"

You are also setting node-external-ip to the node's internal IP, and adding the internal IP to the SAN list. Neither of these are necessary; what is your intent with this configuration? This may be causing additional odd behavior.

  status:
    addresses:
    - address: 10.54.65.165
      type: InternalIP
    - address: 10.54.65.165
      type: ExternalIP
    - address: vm-165.tests.lab.net
      type: Hostname

@safderali5
Copy link
Author

safderali5 commented Jan 26, 2024

My installation method is:

    export INSTALL_K3S_SELINUX_WARN=true
    export INSTALL_K3S_SKIP_DOWNLOAD=true
    export INSTALL_K3S_SKIP_SELINUX_RPM=true
    ./install.sh

The install.sh script is from the k3s source code for version 1.28.5+k3s1.

The config.yaml passed to k3s is:

cluster-init: true
disable:
  - traefik
default-local-storage-path: /var/lib/rancher/storage
disable-helm-controller: true
https-listen-port: 6443
kube-apiserver-arg:
  - 'audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log'
  - 'audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'
  - 'audit-log-maxage=30'
  - 'audit-log-maxbackup=10'
  - 'audit-log-maxsize=100'
  - 'request-timeout=300s'
  - 'service-account-lookup=true'
  - 'anonymous-auth=false'
kube-cloud-controller-manager-arg:
  - 'webhook-secure-port=0'
kube-controller-manager-arg:
  - 'terminated-pod-gc-threshold=12500'
  - 'use-service-account-credentials=true'
kubelet-arg:
  - 'container-log-max-files=4'
  - 'container-log-max-size=50Mi'
  - 'streaming-connection-idle-timeout=5m'
  - 'make-iptables-util-chains=true'
node-external-ip: 10.54.65.139
node-name: vm-139.tests.lab.net
prefer-bundled-bin: true
protect-kernel-defaults: true
secrets-encryption: true
tls-san:
  - 10.54.65.139
write-kubeconfig-mode: "0600"

audit policy:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata

There is a custom path for the persistent storage used by the local path provisioner. Other storage paths are defaults in my setup.

@safderali5
Copy link
Author

The node external IP is set so that the LoadBalancer service gets that IP, as per the docs:
"If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's status.loadBalancer.ingress address list"

@brandond
Copy link
Member

brandond commented Jan 26, 2024

Where are the K3S_INTERNAL_CERTS_EXPIRATION_DAYS=730 and K3S_UPGRADE=false env vars coming from? They don't do anything, but I'm curious what you were trying to do with those.

"If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's status.loadBalancer.ingress address list"

You conveniently cut off the last half of that paragraph:

Otherwise, the node's internal IP is used.

If there is no external IP, then the internal IP is used. The external IP overrides the internal IP for the purposes of setting the ServiceLB ingress address. If they are the same, you should not set it.

Similarly, you do not need to add the node IP to the tls-san list. The node IP and hostname are always included in the SAN list.

@avoidik
Copy link

avoidik commented Jan 28, 2024

I was facing the very same odd behavior, where the coredns CM did not contain the NodeHosts entry. Here is a plain config that reproduces the problem:

# mkdir -p /var/lib/rancher/k3s/agent/images
# curl -fsSL https://github.com/k3s-io/k3s/releases/download/v1.28.5%2Bk3s1/k3s-airgap-images-amd64.tar.zst -o /var/lib/rancher/k3s/agent/images/k3s-airgap-images.tar.zst
# export INSTALL_K3S_SKIP_DOWNLOAD="true"
# export INSTALL_K3S_VERSION="v1.28.5+k3s1"
# export INSTALL_K3S_EXEC="server --node-ip=192.168.0.200 --flannel-iface=enp0s8 --write-kubeconfig-mode=644 --kube-apiserver-arg=service-node-port-range=30000-30100 --secrets-encryption --disable=traefik --disable=servicelb --disable-helm-controller --disable-network-policy --kubelet-arg cloud-provider=external"
# curl -fsSL https://get.k3s.io -o install.sh
# sh install.sh

The node itself was in perfectly normal shape:

status:
  addresses:
  - address: 192.168.0.200
    type: InternalIP
  - address: master
    type: Hostname

But the coredns pod wasn't starting because of the missing entry in the CM; the k3s systemd unit logged many failed attempts to read the CM, like this:

Jan 28 23:01:51 master k3s[2619]: time="2024-01-28T23:01:51Z" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"

Apparently this is caused by https://github.com/k3s-io/k3s/blob/master/pkg/node/controller.go#L81.

I ran sudo systemctl restart k3s.service and the problem disappeared; the controller was then able to find the CM properly.

I assume there is some kind of race condition: the controller tries to read this data from the CM too early and fails, then the coredns deployment kicks in and gets stuck trying to mount the missing CM key referenced in the deployment. In the end only a restart of the systemd unit resolved it, after which the controller was able to update the CM entry.
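A quick way to check whether the controller has written the key yet (just a sketch):

kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'
# empty output means the key is still missing; once the controller has
# reconciled, this prints one hosts-format line per node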

@brandond
Copy link
Member

The controller's cache should get populated eventually; it should succeed after a few retries at worst. I'm curious what's preventing the configmap cache from getting filled.

@brandond
Copy link
Member

brandond commented Jan 29, 2024

@avoidik I am running that same release of K3s with the same args and I am unable to reproduce:

  metadata:
    annotations:
      k3s.io/node-args: '["server","--node-ip","172.17.0.7","--flannel-iface","eth0","--write-kubeconfig-mode","644","--kube-apiserver-arg","service-node-port-range=30000-30100","--secrets-encryption","--disable","traefik","--disable","servicelb","--disable-helm-controller","--disable-network-policy","--kubelet-arg","cloud-provider=external","--debug"]'
  status:
    nodeInfo:
      architecture: amd64
      bootID: 6751d5d3-977d-4c16-8d3f-9ae4cc0ec497
      containerRuntimeVersion: containerd://1.7.11-k3s2
      kernelVersion: 6.2.0-1014-aws
      kubeProxyVersion: v1.28.5+k3s1
      kubeletVersion: v1.28.5+k3s1

If you start k3s with --debug you should see a log message when the configmap is created, after which the node hosts entry will be set:

I0129 21:39:18.945026      53 shared_informer.go:311] Waiting for caches to sync for configmaps
I0129 21:39:19.045107      53 shared_informer.go:318] Caches are synced for configmaps
WARN[0004] Unable to fetch coredns config map: configmaps "coredns" not found
WARN[0004] Unable to fetch coredns config map: configmaps "coredns" not found
INFO[0004] Cluster dns configmap has been set successfully
WARN[0004] Unable to fetch coredns config map: configmaps "coredns" not found
WARN[0004] Unable to fetch coredns config map: configmaps "coredns" not found
INFO[0004] Starting /v1, Kind=ConfigMap controller
WARN[0005] Unable to fetch coredns config map: configmaps "coredns" not found
DEBU[0006] DesiredSet - Created /v1, Kind=ConfigMap kube-system/coredns for  kube-system/coredns
DEBU[0006] DesiredSet - Created /v1, Kind=ConfigMap kube-system/local-path-config for  kube-system/local-storage
INFO[0007] Updated coredns node hosts entry [172.17.0.7 k3s-server-1]

@avoidik
Copy link

avoidik commented Jan 29, 2024

Is the coredns pod still in the ContainerCreating state?

@brandond
Copy link
Member

No, because the node host entry was added - see the last log line.

brandond@dev01:~$ kubectl get pod -A -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE   IP          NODE           NOMINATED NODE   READINESS GATES
kube-system   coredns-6799fbcd5-rwxbb                   1/1     Running   0          20m   10.42.0.3   k3s-server-1   <none>           <none>
kube-system   local-path-provisioner-84db5d44d9-8qpgx   1/1     Running   0          20m   10.42.0.2   k3s-server-1   <none>           <none>
kube-system   metrics-server-67c658944b-69lvq           1/1     Running   0          20m   10.42.0.4   k3s-server-1   <none>           <none>

@avoidik
Copy link

avoidik commented Feb 4, 2024

@brandond I now see what's going on with the --debug flag set. I was using a single master-node cluster with no worker nodes booted, and I saw the following messages on repeat:

Feb 04 14:05:28 master k3s[2552]: E0204 14:05:28.564159    2552 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bf89b133-b239-422f-b721-2f03c096caeb-config-volume podName:bf89b133-b239-422f-b721-2f03c096caeb nodeName:}" failed. No retries permitted until 2024-02-04 14:07:30.564092871 +0000 UTC m=+270.574186074 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bf89b133-b239-422f-b721-2f03c096caeb-config-volume") pod "coredns-6799fbcd5-b2nl4" (UID: "bf89b133-b239-422f-b721-2f03c096caeb") : configmap references non-existent config key: NodeHosts
...
Feb 04 14:07:30 master k3s[2552]: E0204 14:07:30.634478    2552 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bf89b133-b239-422f-b721-2f03c096caeb-config-volume podName:bf89b133-b239-422f-b721-2f03c096caeb nodeName:}" failed. No retries permitted until 2024-02-04 14:09:32.634419534 +0000 UTC m=+392.644512724 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bf89b133-b239-422f-b721-2f03c096caeb-config-volume") pod "coredns-6799fbcd5-b2nl4" (UID: "bf89b133-b239-422f-b721-2f03c096caeb") : configmap references non-existent config key: NodeHosts
...
Feb 04 14:07:37 master k3s[2552]: E0204 14:07:37.245568    2552 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[config-volume], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-6799fbcd5-b2nl4" podUID="bf89b133-b239-422f-b721-2f03c096caeb"
...
Feb 04 14:08:34 master k3s[2552]: time="2024-02-04T14:08:34Z" level=debug msg="Tunnel server egress proxy updating Node master IP 192.168.0.200/32"
...
Feb 04 14:08:34 master k3s[2552]: time="2024-02-04T14:08:34Z" level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
...

And here is the pending coredns pod's status (10 minutes was not enough for it to recover):

$ kubectl get pods -n kube-system
NAME                                      READY   STATUS              RESTARTS   AGE
coredns-6799fbcd5-b2nl4                   0/1     ContainerCreating   0          10m
local-path-provisioner-84db5d44d9-7lnnh   1/1     Running             0          10m
metrics-server-67c658944b-6h227           1/1     Running             0          10m

Here is a sample Vagrantfile that helps reproduce the problem in isolation:

ENV_DEFAULTS = {
  INSTALL_K3S_SKIP_DOWNLOAD: "true",
  INSTALL_K3S_VERSION: "v1.28.5+k3s1",
  K3S_TOKEN: "43bf98d8fb25e7fd4275ae06f33adacd",
  IFACE_NAME: "enp0s8",
}

Vagrant.configure(2) do |config|
  config.vm.define 'master' do |master|
    master.vm.box = 'ubuntu/focal64'
    master.vm.hostname = 'master'
    master.vm.synced_folder '.', '/vagrant', type: 'virtualbox'
    master.vm.network 'private_network', ip: '192.168.0.200'
    master.vm.provider 'virtualbox' do |v|
      v.memory = 2048
      v.cpus = 2
      v.name = 'k3s-master-01'
      v.customize ['modifyvm', :id, '--audio', 'none']
      v.customize ["modifyvm", :id, "--ioapic", 'on']
    end
    master.vm.provision 'shell', env: ENV_DEFAULTS, inline: <<-'SHELL'
      curl -fsSL https://get.k3s.io -o install.sh
      curl -fsSL https://github.com/k3s-io/k3s/releases/download/$INSTALL_K3S_VERSION/k3s -o /usr/local/bin/k3s
      chmod +x /usr/local/bin/k3s
      mkdir -p /var/lib/rancher/k3s/agent/images
      curl -fsSL https://github.com/k3s-io/k3s/releases/download/$INSTALL_K3S_VERSION/k3s-airgap-images-amd64.tar.zst -o /var/lib/rancher/k3s/agent/images/k3s-airgap-images.tar.zst
      IPADDR="$(ip addr show ${IFACE_NAME} | grep 'inet ' | awk '{print $2;}' | cut -d'/' -f1)"
      export INSTALL_K3S_EXEC="server --node-ip=${IPADDR} --flannel-iface=${IFACE_NAME} --write-kubeconfig-mode=644 --kube-apiserver-arg=service-node-port-range=30000-30100 --secrets-encryption --disable=traefik --disable=servicelb --disable-helm-controller --disable-network-policy --kubelet-arg cloud-provider=external --debug"
      sh install.sh
      if [ -f /etc/rancher/k3s/k3s.yaml ]; then
        cp /etc/rancher/k3s/k3s.yaml /tmp/
        sed -i "s/127.0.0.1/${IPADDR}/" /tmp/k3s.yaml
        mkdir -p /home/vagrant/.kube
        cp /tmp/k3s.yaml /home/vagrant/.kube/config
        cp /tmp/k3s.yaml /vagrant/
        rm -f /tmp/k3s.yaml
        chown -R vagrant:vagrant /home/vagrant/.kube
      fi
    SHELL
  end
end

# vagrant up
# export KUBECONFIG="$(pwd)/k3s.yaml"
# kubectl get nodes
# kubectl get pods -n kube-system
# vagrant ssh

An interesting observation: it works as expected if I comment out the part where the airgapped images are downloaded; this reinforces the race-condition assumption I made earlier:

# mkdir -p /var/lib/rancher/k3s/agent/images
# curl -fsSL https://github.com/k3s-io/k3s/releases/download/$INSTALL_K3S_VERSION/k3s-airgap-images-amd64.tar.zst -o /var/lib/rancher/k3s/agent/images/k3s-airgap-images.tar.zst

@brandond
Copy link
Member

brandond commented Feb 5, 2024

Thanks for the steps! I can strip that down to the following shell commands and reproduce the issue:

export INSTALL_K3S_VERSION="v1.29.1-rc2+k3s2"
mkdir -p /var/lib/rancher/k3s/agent/images
curl -fsSL https://github.com/k3s-io/k3s/releases/download/${INSTALL_K3S_VERSION}/k3s-airgap-images-amd64.tar.zst -o /var/lib/rancher/k3s/agent/images/k3s-airgap-images.tar.zst
curl -fsSL get.k3s.io | sh -s - --node-ip=172.17.0.7 --flannel-iface=eth0 --write-kubeconfig-mode=644 --kube-apiserver-arg=service-node-port-range=30000-30100 --secrets-encryption --disable=traefik --disable=servicelb --disable-helm-controller --disable-network-policy --kubelet-arg cloud-provider=external --debug

@brandond brandond reopened this Feb 5, 2024
@brandond brandond moved this from Done Issue to Peer Review in K3s Development Feb 5, 2024
@brandond brandond self-assigned this Feb 5, 2024
@brandond brandond added this to the v1.29.2+k3s1 milestone Feb 5, 2024
@brandond
Copy link
Member

brandond commented Feb 5, 2024

I think the key here is the startup being delayed a bit by the image import, combined with the helm controller being disabled. That results in the node controller running too late to trigger initialization of the configmap cache.
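Until the fix ships, one possible workaround sketch is to seed the key by hand so the kubelet can mount the volume; the node controller should overwrite it with real entries once it reconciles. The host entry below is a placeholder:

kubectl -n kube-system patch configmap coredns --type merge \
  -p '{"data":{"NodeHosts":"192.168.0.200 master\n"}}'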

@fmoral2
Copy link
Contributor

fmoral2 commented Feb 15, 2024

Validated on Version:

k3s version v1.29.1+k3s-de825845 (de825845)

 

Environment Details

Infrastructure
Cloud EC2 instance

Node(s) CPU architecture, OS, and Version:
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"

Cluster Configuration:
1 server node

Steps to validate the fix

  1. Install k3s from the airgap releases
  2. Start k3s with these flags:
 --node-ip={internal-ip} --flannel-iface={proper-interface} --write-kubeconfig-mode=644 --kube-apiserver-arg=service-node-port-range=30000-30100 --secrets-encryption --disable=traefik --disable=servicelb --disable-helm-controller --disable-network-policy --kubelet-arg cloud-provider=external --debug
  3. Validate that nodes and pods are all OK
  4. Validate that coredns is starting and running correctly

Reproduction Issue:


 
 export INSTALL_K3S_VERSION="v1.29.1-rc2+k3s2"
mkdir -p /var/lib/rancher/k3s/agent/images
curl -fsSL https://github.com/k3s-io/k3s/releases/download/${INSTALL_K3S_VERSION}/k3s-airgap-images-amd64.tar.zst -o /var/lib/rancher/k3s/agent/images/k3s-airgap-images.tar.zst
curl -fsSL get.k3s.io | sh -s - --node-ip=internal-ip --flannel-iface={proper-local-interface} --write-kubeconfig-mode=644 --kube-apiserver-arg=service-node-port-range=30000-30100 --secrets-encryption --disable=traefik --disable=servicelb --disable-helm-controller --disable-network-policy --kubelet-arg cloud-provider=external --debug

~$ kgn
NAME               STATUS   ROLES                  AGE   VERSION
ip-    Ready    control-plane,master   42s   v1.28.5+k3s1


$ journalctl -eu k3s |
 Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9cd2f346-b205-4a0c-92c4-24df86fa190f-config-volume") pod "coredns-6799fbcd5-x7md2" (UID: "9cd2f346-b205-4a0c-92c4-24df86fa190f") : configmap references non-existent config key: NodeHosts

~$ kgp -A
NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE
kube-system   coredns-6799fbcd5-x7md2                   0/1     ContainerCreating   0          2m11s
kube-system   local-path-provisioner-84db5d44d9-s6g6h   1/1     Running             0          2m11s
kube-system   metrics-server-67c658944b-wjc7l           1/1     Running             0          2m11s


Validation Results:

      
 ~$ k3s -v
k3s version v1.29.1+k3s-de825845 (de825845)
go version go1.21.6


$ k3s kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
ip-   Ready    control-plane,master   20m   v1.29.1+k3s-de825845

 $ k3s kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS      AGE
kube-system   local-path-provisioner-6c86858495-dr6sc   1/1     Running   1 (45m ago)   50m
kube-system   coredns-6799fbcd5-cst6j                   1/1     Running   0             50m
kube-system   metrics-server-67c658944b-whd7k           1/1     Running   0             50m




@fmoral2 fmoral2 closed this as completed Feb 15, 2024
@github-project-automation github-project-automation bot moved this from To Test to Done Issue in K3s Development Feb 15, 2024
@InCogNiTo124

This comment was marked as off-topic.

@brandond
Copy link
Member

brandond commented May 6, 2024

@InCogNiTo124 please open a new issue describing your configuration using the issue template. It does not sound like the same problem.

@InCogNiTo124
Copy link

Hi, I have this happening on a fresh Hetzner Cloud VM with k3s version v1.29.4+k3s1 (94e29e2). Basically I get the same error message and the same behaviour as others in this thread. However, systemctl restart k3s.service doesn't help me.

After a painful few months, I seem to have fixed it by disabling public IPv6 on the Hetzner instance. I'm finally successfully running something newer than v1.28.6+k3s2 (e.g. v1.30.6+k3s1 works).
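In case it helps others, one sysctl-level way to disable IPv6 on the node; this is an assumption on my part, since it can also be done via the Hetzner network settings:

# disable IPv6 at the kernel level (non-persistent; add to sysctl.conf
# to survive reboots)
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1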
