RKE2 Server listening on ipv6, but not ipv4 #4777
Comments
Thanks for creating the issue! Could you share your config for the server? Did you set node-ip or advertise-address or any other network config?
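For reference, a quick way to gather that information on the server (a sketch; the config path is the default for a standard RKE2 install):

```
# Print the server's RKE2 config, if one was written
sudo cat /etc/rancher/rke2/config.yaml

# List the address-related flags the server supports
rke2 server --help 2>&1 | grep -E 'node-ip|advertise-address'
```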
Please confirm that you have both A and AAAA records for the Rancher server URL.
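For example (the hostname below is a placeholder for the actual Rancher server URL):

```
# Check the IPv4 (A) and IPv6 (AAAA) records for the Rancher server hostname
dig +short A rancher.server.url
dig +short AAAA rancher.server.url
```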
Hey!
The Rancher server URL has an A record, as I'm just using IPv4. dig results:
Can you confirm that you can reach that port from the node you're trying to join? Can you confirm that you've opened that port on any firewalls between the two nodes, and disabled any local firewall (firewalld/ufw) on that server?
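For example, something along these lines from the node being joined (addresses are placeholders):

```
# Test the supervisor port directly; /cacerts is served without authentication
curl -vk https://172.ipv4.of.rancher1:9345/cacerts

# Check whether any local firewall is active on either node
systemctl status firewalld --no-pager
sudo ufw status
```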
From the first node (rancher1), curling 172.ipv4.of.rancher1:9345 works (I get a 404 back), but from the node I'm trying to join it fails.
All firewalls between the two nodes (local/network) are currently disabled; all traffic should be open.
It's worth noting that I can curl https://rancher.server.url without the port from rancher2 (the node I'm trying to join) and I get a 404, so I can successfully reach the server, just not on port 9345.
Alright, I found the issue. It turns out that even though ufw was disabled, firewalld was running; not sure why it was set up this way. I was able to get past that, but the new node (rancher2) is now stuck at:
Is this usually a long process? The first node only took a minute or so, and this has been looping for a while.
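For anyone hitting the same thing, a sketch of the check that caught it (assuming systemd hosts like the ones above):

```
# ufw reported inactive, but firewalld was still enabled and running
systemctl is-active firewalld
sudo ufw status

# Stop and disable firewalld if it isn't supposed to manage this host,
# or open the rke2 supervisor port instead of disabling it outright
sudo systemctl disable --now firewalld
# sudo firewall-cmd --permanent --add-port=9345/tcp && sudo firewall-cmd --reload
```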
It appears to be waiting on etcd to start. Check the etcd pod logs under /var/log/pods. Can you confirm that you've also opened all the etcd ports between the nodes? https://docs.rke2.io/install/requirements#inbound-network-rules
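A sketch of what that looks like in practice (the log path pattern may vary slightly by pod UID; the full port list is on the linked page):

```
# Tail the etcd static pod logs on the affected server node
sudo tail -n 100 /var/log/pods/kube-system_etcd-*/etcd/*.log

# Verify the etcd client (2379) and peer (2380) ports are reachable
# between the server nodes (address is a placeholder)
for port in 2379 2380; do nc -zv 172.ipv4.of.rancher1 "$port"; done
```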
I manually added all those rules to both nodes. The etcd log shows:
If that doesn't fix it, I would probably use kubectl to delete the new node from the cluster and start the join over.
I'm moving this to a discussion instead of an issue, as it is becoming clear that there were just some missed prerequisites, and there is not anything wrong with rke2.
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Environmental Info:
RKE2 Version:
rke2 version v1.26.9+rke2r1 (368ba42666c9664d58bd0a9f7d3d13cd38f6267d) go version go1.20.8 X:boringcrypto
Node(s) CPU architecture, OS, and Version:
Linux Rancher1 6.1.0-12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux
Cluster Configuration:
Trying to register a second server node, after initializing the first
Describe the bug:
The first server node (Rancher1) is listening for new node registration on IPv6 only, and not IPv4. When I try to register the second node, I get:
Sep 20 09:03:25 Rancher2 rke2[16709]: time="2023-09-20T09:03:25-07:00" level=fatal msg="starting kubernetes: preparing server: failed to get CA certs: Get \"https://RANCHER.SERVER.URL:9345/cacerts\": dial tcp ip.address:9345: connect: no route to host"
I verified that the first node is only listening on ipv6:
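A check along these lines shows which addresses the supervisor is bound to (9345 is the rke2 supervisor port, 6443 the apiserver; a sketch, exact output will differ):

```
# Show listening TCP sockets for the supervisor and apiserver ports.
# An entry like [::]:9345 is an IPv6 socket (which may or may not accept
# IPv4 via dual-stack); 0.0.0.0:9345 is a plain IPv4 listener.
sudo ss -tlnp | grep -E '9345|6443'
```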
Trying to curl the cacerts via IPv4 manually fails, but if I run
curl -k https://localhost:9345/cacerts
from the first server, I get the cert, since localhost routes over IPv6 on that host.
Steps To Reproduce:
Expected behavior:
I would expect to be able to use IPv4 to register new nodes, as there is no IPv6 network between these servers.
Actual behavior:
It seems I can only register new nodes via IPv6.
Additional context / logs:
It's entirely possible I'm missing some configuration setting and rke2 is defaulting to IPv6 only, but I see nothing about it in either the Rancher or the RKE2 docs (a sketch of the kind of setting I was expecting is included after the links below).
I've been using these as reference:
https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher
https://docs.rke2.io/install/ha
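For reference, this is roughly the kind of setting I was expecting to need, pinning the node to its IPv4 address (a sketch only; the addresses are placeholders and the flag names are the node-ip / advertise-address options mentioned above):

```
# Merge these keys into the existing /etc/rancher/rke2/config.yaml
# (appending here for brevity; avoid duplicating keys that already exist)
cat <<'EOF' | sudo tee -a /etc/rancher/rke2/config.yaml
node-ip: 172.ipv4.of.rancher1
advertise-address: 172.ipv4.of.rancher1
EOF

# Restart the server so the new settings take effect
sudo systemctl restart rke2-server
```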