The server that was started with --cluster-init is not special; it is just the first node in the cluster. Once the etcd datastore has been created on a node (regardless of whether it was initialized on that node, or the node joined an existing cluster), all server nodes function identically. You didn't say how many nodes you have, or whether the rest of your nodes are servers or agents. If you have at least three server nodes, you should be able to stop any one of them, and the remainder will still have quorum and can continue to operate.

If you're using the kubeconfig from outside the cluster, note that it will need to point to one of the servers, or to an external load balancer if you set one up in front of your apiservers. If your kubeconfig points at a single server and that server is down, you obviously will not be able to use kubectl until you edit the config file to point at a different server. Agents handle this by running a client-side load-balancer that maintains connections to all the servers, so that they can fail over if one goes down.
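The agent-side failover described above can be sketched roughly like this (a minimal illustration with hypothetical hostnames; the real k3s agent load-balancer maintains persistent connections to all servers rather than probing them one at a time):

```python
import socket

def first_reachable(servers, port=6443, timeout=1.0):
    """Return the first server that accepts a TCP connection, or None.

    Illustrative only: a one-shot probe, not the actual k3s
    load-balancer, which keeps live connections to every server
    and switches between them transparently.
    """
    for host in servers:
        try:
            # Attempt a TCP connection; an unreachable or down server
            # raises OSError (including DNS failures) and we move on.
            with socket.create_connection((host, port), timeout=timeout):
                return host
        except OSError:
            continue
    return None

# Example: if server1 is down, the client falls through to server2.
# first_reachable(["server1", "server2", "server3"])
```

A kubeconfig, by contrast, names exactly one endpoint, which is why pointing it at a single server (or at an external load balancer) matters when that server goes down.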
Hi all,
I'm a little confused by the --cluster-init option. Like others, I created a cluster by starting the first server with --cluster-init and pointing the other nodes at it. Since the rest of the cluster nodes point to the first server with "--server https://first_server:6443", whenever I stop the k3s service on the first node, I cannot access cluster resources from anywhere. Is that expected behavior, or do I need to change k3s.service on each node to set "--server https://unique_server_name:6443"?
Thanks