Background
Hi team,
My RKE2 version is v1.26.13+rke2r1.
We have deployed RKE2 nodes on bare-metal machines. Each machine has two physical network interfaces, both running at 10 Gbps. On Ubuntu they appear as eno1 and eno2: eno1 carries our public network address, and eno2 carries our internal network address (10.70.0.0/16).
Now there is an issue. When my worker nodes join the master, they are configured with the master's eno2 address. But I suspect the nodes are communicating with each other over the eno1 interface (note that we are using Longhorn; I'm not sure if my understanding is correct). This is causing higher overhead for us.
Running `kubectl describe node`, I found that the listed addresses are all eno1's IPv4 and IPv6 addresses, labeled "InternalIP". How can I change this? Are there any relevant settings?
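For reference, this is how I'm inspecting the reported addresses (the node name is an example):

```sh
# Show the INTERNAL-IP / EXTERNAL-IP columns for all nodes
kubectl get nodes -o wide

# Inspect the Addresses block of a single node (node name is an example)
kubectl describe node worker-01 | grep -A 6 'Addresses:'
```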
I've looked up the relevant information and found that both server and agent have two configuration options (`node-ip` and `node-external-ip`), but I'm a bit confused about them. If both node-ip and node-external-ip are set in the configuration, is there a priority between them?
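To make the question concrete, here is roughly what I'm considering for each node's config file (the addresses are examples; 203.0.113.11 is a placeholder for our public address). My understanding, which may be wrong, is that node-ip becomes the InternalIP and node-external-ip becomes the ExternalIP:

```sh
# Sketch: per-node address settings for RKE2 (addresses are examples).
# Merge with your existing server/token settings rather than overwriting.
cat <<'EOF' > /etc/rancher/rke2/config.yaml
# eno2 address, internal network -> should be reported as InternalIP
node-ip: "10.70.1.11"
# eno1 address, public network -> should be reported as ExternalIP
node-external-ip: "203.0.113.11"
EOF
```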
Network audit
I used nmap to scan our cluster on its public IP addresses and was surprised to find that many services are exposed to the public network, such as etcd, kubelet, and the rke2 supervisor. I believe this is unnecessary.
How can I close them off? Is it enough to just set node-ip to the eno2 address?
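Until I understand the right setting, I'm considering firewalling these ports on the public interface as a stopgap. A rough sketch, assuming eno1 is the public interface and the standard RKE2 ports (9345 supervisor, 6443 API server, 2379-2380 etcd, 10250 kubelet):

```sh
# Drop control-plane traffic arriving on the public interface eno1
# (ports are the RKE2 defaults; adjust if yours differ)
iptables -A INPUT -i eno1 -p tcp --dport 9345      -j DROP  # rke2 supervisor
iptables -A INPUT -i eno1 -p tcp --dport 6443      -j DROP  # kube-apiserver
iptables -A INPUT -i eno1 -p tcp --dport 2379:2380 -j DROP  # etcd client/peer
iptables -A INPUT -i eno1 -p tcp --dport 10250     -j DROP  # kubelet
```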
In addition, after modifying the configuration, how do I roll the change out smoothly? Do I need to evict pods to other nodes first? That seems like a lot of work. Or is it sufficient to just run systemctl restart rke2-server (rke2-agent on workers)?
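If a drain is needed, I assume the per-node procedure would look roughly like this (node name is an example):

```sh
# Move workloads off the node before changing its config
kubectl drain worker-01 --ignore-daemonsets --delete-emptydir-data

# Edit /etc/rancher/rke2/config.yaml on the node, then restart the service
systemctl restart rke2-agent   # rke2-server on server nodes

# Allow workloads to schedule on the node again
kubectl uncordon worker-01
```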