
RKE 2 client error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1: #3275

Closed
Azbest7812 opened this issue Aug 29, 2022 · 19 comments

@Azbest7812

Azbest7812 commented Aug 29, 2022

Hi,

The following error occurred while trying to start the rke2 agent:

level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:

this is my config file:

root@azbest:~# cat /etc/rancher/rke2/config.yaml
server: https://127.0.0.1:9345
token: K10fdc50334262abad14458c4baf4ceb6f4136d102fcf05ea182bba42cb709ce331::server:f9f5ab06303b89cd4a06f1a8067b4621

this is token:

root@azbest:~# cat /var/lib/rancher/rke2/server/node-token
K10fdc50334262abad14458c4baf4ceb6f4136d102fcf05ea182bba42cb709ce331::server:f9f5ab06303b89cd4a06f1a8067b4621

I tried to use my host IP instead of 127.0.0.1 in config.yaml, but the log error is still the same, with 127.0.0.1.

And this is my rke2.yaml:

root@azbest:~# cat /etc/rancher/rke2/rke2.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpZeE56YzJOVFV6TUI0WERUSXlNRGd5T1RFeU16VTFNMW9YRFRNeU1EZ3lOakV5TXpVMQpNMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZMk1UYzNOalUxTXpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJFSDZnbGtEY2V2OGpueTI3VFppUVVibmhuMEFYc081SUJJeXRnTC8KRkpwY1FZdjVEYS9vNG5yQlEzbDhPRXVXQXJ4MEdObU9FR3pIMk14Rk85am1aemFqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUWMzbGRwRTJIbCs0Umo3Y1NuCmQveUZrbkhUbnpBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlFQTRCd0xYeXJtajZvait3SERETmpxTmlaNURQT0IKNnhUNThONVJNekpYL1U4Q0lEeHhaWnVWQjAzdWpPQk00S2VLOHVhWWpHaTd1RTRBVUF2cUJtbXZHS01oCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrekNDQVRpZ0F3SUJBZ0lJUGtHR0dlS2R5OU13Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkyTVRjM05qVTFNekFlRncweU1qQTRNamt4TWpNMU5UTmFGdzB5TXpBNApNamt4TWpNMU5UTmFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBU1NOK21TZCt6QktYM2wKQ3prZVk5NjNLSjZMYXBoeFhPeVQvMlk3K3pzM0M5K1BiQVg3dVNiUFFFZUg0WGt2eDBiWWQwcTBuMEE4T1NKYQpTUlJ1TFlZNm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVjK1k3RnQyaUR3VkkwbmIzWXFCSXNZajNQYnd3Q2dZSUtvWkl6ajBFQXdJRFNRQXcKUmdJaEFLYUdCcFJaTFZhd3lQajdCOUx4TUZKcjFmb0VLc3V4dUZwdHlYUktrN0xuQWlFQTZCeXRCdS91VTBPQwpGazhnUmpuK1pOQllnSURUYW1ScFVYbytaN1dUT0t3PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZWpDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qWXhOemMyTlRVek1CNFhEVEl5TURneU9URXlNelUxTTFvWERUTXlNRGd5TmpFeU16VTEKTTFvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTJNVGMzTmpVMU16QlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCTzdOSjUrcnhiTitLUUpWd1hXVjIrOE5XMXlXL0kramcvcnl4emtKCkhtNlBFMGdIZktYWVA2Q3FvZThXbzROVzRPSldTVHRPOHIvMHF5ZTYxb29nR2wyalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlJ6NWpzVzNhSVBCVWpTZHZkaQpvRWl4aVBjOXZEQUtCZ2dxaGtqT1BRUURBZ05KQURCR0FpRUFoTFlVYUMvMktmOHZiVnBpZzhLS1ppSTBKQWRJCmMza1ordHJzRERWa3lCY0NJUURtRnZYRmZuL284K1dNRDBYMDFRWi9kRkJUZVFvNFZQTEF3NDE0ZXYrY05BPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUNSWXhjTUlJU2QrMWxwL2ovc0xuV1dDYzdTOUVXcmlGOWhkQ2FZOE5ZNzNvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFa2pmcGtuZnN3U2w5NVFzNUhtUGV0eWllaTJxWWNWenNrLzltTy9zN053dmZqMndGKzdrbQp6MEJIaCtGNUw4ZEcySGRLdEo5QVBEa2lXa2tVYmkyR09nPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=

root@azbest:~# netstat -nl | grep 6444
tcp 0 0 127.0.0.1:6444 0.0.0.0:* LISTEN

Any ideas or hints?

Regards,

Azbest

@brandond
Member

Why are you trying to use localhost (127.0.0.1) as the server? This should be the address of your server node.

server: https://127.0.0.1:9345/
token: K10fdc50334262abad14458c4baf4ceb6f4136d102fcf05ea182bba42cb709ce331::server:f9f5ab06303b89cd4a06f1a8067b4621
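
For example, assuming 192.168.66.128 is the server node's address, the agent's /etc/rancher/rke2/config.yaml would look like:

server: https://192.168.66.128:9345
token: K10fdc50334262abad14458c4baf4ceb6f4136d102fcf05ea182bba42cb709ce331::server:f9f5ab06303b89cd4a06f1a8067b4621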

@Azbest7812
Author

I tried to use my host IP instead of 127.0.0.1 in config.yaml, but the log error is still the same, with 127.0.0.1.

@brandond
Member

brandond commented Aug 29, 2022

Can you successfully curl -vks https://SERVERIP:9345/ping from the agent?
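
If not, it's worth checking on the server whether anything is listening on the supervisor port at all, with the same kind of check used above for 6444:

netstat -nl | grep 9345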

@Azbest7812
Author

no, I cannot:

root@azbest:~# curl -vks https://192.168.66.128:9345/ping

*   Trying 192.168.66.128:9345...
* TCP_NODELAY set
* connect to 192.168.66.128 port 9345 failed: Connection refused
* Failed to connect to 192.168.66.128 port 9345: Connection refused
* Closing connection 0

@Azbest7812
Author

Azbest7812 commented Aug 29, 2022

But there is no listener on port 9345.

@brandond
Member

Are you sure RKE2 is running on your server?

@Azbest7812
Author

After restarting the rke2-server service:

root@azbest:~# systemctl status rke2-server.service
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-08-29 18:09:51 UTC; 3s ago
Docs: https://github.com/rancher/rke2#readme
Process: 48828 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 48830 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 48831 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 48837 (rke2)
Tasks: 173
Memory: 1.8G
CGroup: /system.slice/rke2-server.service
├─ 2488 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id cbdb299e1dd55c37170af15c68fd8a56302d1a56da6580baae7ba928bb3ef9aa -address /run>
├─ 2607 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 13c24f66248b17e446e25bdf22bb83ede4c2264a75377c10701f7f2678fcf33d -address /run>
├─ 2726 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 74d6bc0c1df697d73f86ff21dfd6c8d213c3e61ea4b2cd34323b6e3527f676c2 -address /run>
├─ 2737 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 09bd362b059de7e50acfb0ac9ba73fe5f71fbc67a7a2f5ae7c624799d4a9c191 -address /run>
├─ 2898 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 483c0b871f5061e21926a89596727eef13e786a1c84ffc01d3c13afb408b2f0a -address /run>
├─ 2984 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id a3380705f7a4ad629e3f5d59953271d08f447156c11fd0fa615135c33a741d44 -address /run>
├─ 3528 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 28bb32f2048593748dceaa1989a4d652883872e74c93f55cd3780d47fe22c9d7 -address /run>
├─ 4552 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id e4a99e9b48b2913247b358feafdb0c62c476bccdf484a1bc66ed7d09acf3b0d3 -address /run>
├─ 4568 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 224786cda4f1d1c1c80ac197b4539b96729d3edc616109e822e8e90d7ca584c1 -address /run>
├─ 5092 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id dbb9cc46a8cc735f5eed7abe02aee727e1ac1eb31d949dc38664bdcbb2333d9c -address /run>
├─ 5835 /var/lib/rancher/rke2/data/v1.23.9-rke2r1-eef53a0d1ec2/bin/containerd-shim-runc-v2 -namespace k8s.io -id c588f120427634a3f448bac2f20fc8f614ba23765df5eac896fa668e00856eb7 -address /run>
├─48837 /usr/local/bin/rke2 server
├─48875 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
└─48891 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentica>

Aug 29 18:09:52 adam rke2[48837]: time="2022-08-29T18:09:52Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"rke2-coredns\", UID:\"21f83e49-4f13-431f-bcba-9>
Aug 29 18:09:52 adam rke2[48837]: time="2022-08-29T18:09:52Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"rke2-ingress-nginx\", UID:\"8f7976f7-275d-4e15->
Aug 29 18:09:52 adam rke2[48837]: time="2022-08-29T18:09:52Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"rke2-metrics-server\", UID:\"6161c9d9-72ac-469b>
Aug 29 18:09:52 adam rke2[48837]: time="2022-08-29T18:09:52Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rke2-coredns\", UID:\"482c98ca-67b5-453c-8680-f138a>
Aug 29 18:09:52 adam rke2[48837]: time="2022-08-29T18:09:52Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rke2-ingress-nginx\", UID:\"e57b7886-acde-496f-b635>
Aug 29 18:09:53 adam rke2[48837]: time="2022-08-29T18:09:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rke2-ingress-nginx\", UID:\"e57b7886-acde-496f-b635>
Aug 29 18:09:53 adam rke2[48837]: time="2022-08-29T18:09:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rke2-metrics-server\", UID:\"cc276ec1-0a4f-44a1-80d>
Aug 29 18:09:53 adam rke2[48837]: time="2022-08-29T18:09:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rke2-metrics-server\", UID:\"cc276ec1-0a4f-44a1-80d>
Aug 29 18:09:53 adam rke2[48837]: time="2022-08-29T18:09:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rke2-multus\", UID:\"\", APIVersion:\"k3s.cattle.io>
Aug 29 18:09:54 adam rke2[48837]: time="2022-08-29T18:09:54Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tc>
root@azbest:~# curl -vks https://192.168.66.128:9345/ping

*   Trying 192.168.66.128:9345...
* TCP_NODELAY set
* Connected to 192.168.66.128 (192.168.66.128) port 9345 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: O=rke2; CN=rke2
*  start date: Aug 29 12:35:53 2022 GMT
*  expire date: Aug 29 12:35:53 2023 GMT
*  issuer: CN=rke2-server-ca@1661776553
*  SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x56172e6892f0)
> GET /ping HTTP/2
> Host: 192.168.66.128:9345
> user-agent: curl/7.68.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 200
< content-type: text/plain
< content-length: 4
< date: Mon, 29 Aug 2022 18:10:16 GMT
<
* Connection #0 to host 192.168.66.128 left intact

@Azbest7812
Author

Trying to run the rke2 agent, and it hangs:

root@azbest:~# systemctl start rke2-agent.service

and the log is still full of:

error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:

@Azbest7812
Author

This is the output of journalctl -u rke2-agent -f after running systemctl start rke2-agent.service:

Aug 29 18:20:38 azbest rke2[50195]: time="2022-08-29T18:20:38Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56500->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:40 azbest rke2[50195]: time="2022-08-29T18:20:40Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56512->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:42 azbest rke2[50195]: time="2022-08-29T18:20:42Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56524->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:44 azbest rke2[50195]: time="2022-08-29T18:20:44Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56396->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:46 azbest rke2[50195]: time="2022-08-29T18:20:46Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56408->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:48 azbest rke2[50195]: time="2022-08-29T18:20:48Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56420->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:50 azbest rke2[50195]: time="2022-08-29T18:20:50Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56432->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:52 azbest rke2[50195]: time="2022-08-29T18:20:52Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56444->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:54 azbest rke2[50195]: time="2022-08-29T18:20:54Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": EOF"
Aug 29 18:20:56 azbest rke2[50195]: time="2022-08-29T18:20:56Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:50654->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:20:58 azbest rke2[50195]: time="2022-08-29T18:20:58Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:50666->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:21:00 azbest rke2[50195]: time="2022-08-29T18:21:00Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": EOF"
Aug 29 18:21:02 azbest rke2[50195]: time="2022-08-29T18:21:02Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:50690->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:21:04 azbest rke2[50195]: time="2022-08-29T18:21:04Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:52378->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:21:06 azbest rke2[50195]: time="2022-08-29T18:21:06Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:52390->127.0.0.1:6444: read: connection reset by peer"
Aug 29 18:21:08 azbest rke2[50195]: time="2022-08-29T18:21:08Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:52402->127.0.0.1:6444: read: connection res

@brandond
Member

brandond commented Aug 29, 2022

OK, so the server is working fine, but the agent gets a "connection reset by peer" error when it attempts to connect to it. It sounds like you have a firewall or something else blocking the connection.
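
If a firewall is the cause, the agent needs to reach the supervisor port (9345) and the Kubernetes API (6443) on the server. A rough sketch for ufw, assuming ufw is the firewall in use:

ufw allow 9345/tcp   # RKE2 supervisor API (join / CA certs)
ufw allow 6443/tcp   # Kubernetes API server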

@brandond
Member

You're not trying to run the agent and server on the same node, are you?

@Azbest7812
Author

Yes, I am trying to run the agent and the server on the same node.

@Azbest7812
Author

root@azbest:~# systemctl status ufw
● ufw.service - Uncomplicated firewall
Loaded: loaded (/lib/systemd/system/ufw.service; disabled; vendor preset: enabled)
Active: active (exited) since Mon 2022-08-29 12:30:33 UTC; 6h ago
Docs: man:ufw(8)
Main PID: 515 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4575)
Memory: 0B
CGroup: /system.slice/ufw.service

@brandond
Member

That is not supported. The server also functions as an agent; you can't have RKE2 running twice on the same node.
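
If the agent service was enabled on the server node by mistake, a sketch of cleaning that up (assuming the standard systemd units):

systemctl disable --now rke2-agent.service   # stop and disable the extra agent
systemctl status rke2-server.service         # the server process already runs this node's kubelet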

@Azbest7812
Author

Thank you brandond for your help!

@juanmoyano

Hi @Azbest7812, I'm having the same issue as you. Did you find any solution?

@daixixidai

Hi @Azbest7812, I'm having the same issue as you. Did you find any solution?

Look at the solution above: rke2-server and rke2-agent can't be installed on the same node. You can install the rke2-agent on another machine, and then the errors will be resolved. This is an important caveat; I don't know why it isn't mentioned in the documentation. A sketch of the setup follows.
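
For reference, bringing up the agent on a second machine looks roughly like this (a sketch based on the standard install script; replace 192.168.66.128 with your server node's address):

# on the second machine
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
mkdir -p /etc/rancher/rke2
cat > /etc/rancher/rke2/config.yaml <<EOF
server: https://192.168.66.128:9345
token: <contents of /var/lib/rancher/rke2/server/node-token on the server>
EOF
systemctl enable --now rke2-agent.service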

@brandond
Member

brandond commented Dec 9, 2024

Why would you even try to do that? You can't run multiple copies of Kubernetes on one node.

@daixixidai

Why would you even try to do that? You can't run multiple copies of Kubernetes on one node.

Because I'm learning this software and I'm short on machines, I tried to run several setups on the same machine and then encountered these errors. Thanks for this issue.
