
[doc]: specify node visibility requirements in the doc #10927

Closed
dberardo-com opened this issue Sep 23, 2024 · 2 comments

Comments

@dberardo-com

I could not find out from the documentation alone whether, in an HA setup:

  • all server nodes must be able to see each other
  • all agent nodes must be able to see ALL other server and/or agent nodes

In particular, regarding the second point: is it possible for an agent node to see only one server node? (And of course, if that server node goes down, the agent goes down with it.)

Is it possible to achieve this? And in that case, is it possible to specify a list of preferred server nodes in the agent config file, so that on startup the agent does not iterate over all server IPs waiting for the first one to respond? This makes reboots very long.
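For illustration, this is the kind of thing I am after (hostnames are made up; as far as I can tell, the agent's `--server` flag only accepts a single registration address, so today I would have to point it at one server or at a load balancer in front of them):

```sh
# Hypothetical agent start-up. --server and --token are real k3s agent
# options; the hostname is a placeholder. A fixed address (VIP or load
# balancer) in front of all servers avoids depending on any single one.
k3s agent \
  --server https://k3s-fixed.example.internal:6443 \
  --token <cluster-token>
```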

@brandond
Member

brandond commented Sep 24, 2024

A basic expectation of Kubernetes clusters is that all nodes should be able to communicate with each other. Pods may run on any node in the cluster, so for pod-to-pod and pod-to-service communication to work properly, you need full connectivity across all cluster members.

You can find information on required ports in the docs here: https://docs.k3s.io/installation/requirements#networking
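For example, on a firewalld-based host the core inbound rules might look something like this, a sketch based on the port list from that page (adjust for your CNI backend and topology):

```sh
# 6443/tcp      : Kubernetes API server (agents -> servers)
# 2379-2380/tcp : embedded etcd peer/client traffic (server <-> server, HA only)
# 8472/udp      : flannel VXLAN overlay (all nodes, default CNI backend)
# 10250/tcp     : kubelet metrics (all nodes)
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
```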

@dberardo-com
Author

Thanks for the clarification. Can this work if the nodes are on different subnets?

Say all nodes can see each other on the 172.* subnet via the tun0 interface (e.g. over a VPN), but the cluster was initially set up on another subnet (say 10.*, on the eth0 interface). Is it possible for pods on two nodes to reach each other if those nodes are on the same 172.* subnet but not on the same 10.* one?
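For example, would starting the agents with the VPN interface pinned, along these lines, make pods use the 172.* path? (`--node-ip` and `--flannel-iface` appear in the k3s agent docs; the addresses here are made up.)

```sh
# Hypothetical: advertise the tun0 (172.*) address as the node IP and run
# the flannel overlay over tun0 instead of eth0.
k3s agent \
  --server https://k3s-fixed.example.internal:6443 \
  --token <cluster-token> \
  --node-ip 172.16.0.11 \
  --flannel-iface tun0
```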
