
[Release-1.28] - Secondary etcd-only nodes do not reconnect to apiserver after outage if joined against an etcd-only node #11323

Closed

brandond opened this issue Nov 14, 2024 · 1 comment

@brandond (Member):
Backport fix for Secondary etcd-only nodes do not reconnect to apiserver after outage if joined against an etcd-only node

@aganesh-suse:

Validated on release-1.28 branch with commit 2d0661e

Environment Details

Infrastructure

  • Cloud
  • Hosted

Node(s) CPU architecture, OS, and Version:

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04 LTS"

$ uname -m
x86_64

Cluster Configuration:

HA, in one of two layouts:
  • 2 etcd, 1 control-plane, 1 agent node config, or
  • 3 etcd, 2 control-plane, 1 agent configuration.

Note: all nodes point to the main/first etcd server.

Config.yaml:

etcd-only node config.yaml:

token: xxxx
disable-apiserver: true
disable-controller-manager: true
disable-scheduler: true
node-taint:
- node-role.kubernetes.io/etcd:NoExecute
cluster-init: true
write-kubeconfig-mode: "0644"
node-external-ip: 1.1.1.1
node-label:
- k3s-upgrade=server
debug: true
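
To confirm the NoExecute taint from this config was applied, the node can be inspected after install; a minimal sketch (the node name is a placeholder):

# The etcd-only node should report the taint set in the config above:
# node-role.kubernetes.io/etcd:NoExecute
$ kubectl describe node <etcd-node-name> | grep Taints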

Control-plane-only node config.yaml:

$ cat /etc/rancher/k3s/config.yaml 
token: xxxx
server: https://1.1.1.1:6443
disable-etcd: true
node-taint:
- node-role.kubernetes.io/control-plane:NoSchedule
write-kubeconfig-mode: "0644"
node-external-ip: 2.2.2.2
node-label:
- k3s-upgrade=server
debug: true
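
The agent node's config.yaml is not shown in the report; a minimal sketch, assuming the agent joins against the same first etcd server with the shared token (all values below are placeholders, and the node-label value is an assumption, not from the report):

$ cat /etc/rancher/k3s/config.yaml
# Hypothetical agent config - mirrors the server configs above.
token: xxxx
server: https://1.1.1.1:6443
node-external-ip: 3.3.3.3
node-label:
- k3s-upgrade=agent
debug: true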

Testing Steps

  1. Copy config.yaml:
$ sudo mkdir -p /etc/rancher/k3s && sudo cp config.yaml /etc/rancher/k3s
  2. Install k3s:
$ curl -sfL https://get.k3s.io | sudo INSTALL_K3S_COMMIT='2d0661e3a534b204280c6e047719b783c99e177f' sh -s - server
  3. Verify cluster status:
$ kubectl get nodes -o wide
$ kubectl get pods -A
  4. Restart the control plane node. Wait for 5 minutes.
  5. Verify cluster status - all nodes should be in Ready state (a scripted check follows this list):
$ kubectl get nodes -o wide
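
The readiness check in step 5 can be scripted; a minimal sketch, assuming kubectl is on the PATH and reusing the 5-minute window from step 4:

# Block until every node reports Ready, failing after 5 minutes.
$ kubectl wait --for=condition=Ready nodes --all --timeout=300s
$ kubectl get nodes -o wide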

Replication Results:

  • k3s version used for replication:
$ k3s -v 
k3s version v1.28.15+k3s1 (869dd4d6)
go version go1.22.8
$ kubectl get nodes
time="2024-11-15T04:46:11Z" level=debug msg="Asset dir /var/lib/rancher/k3s/data/ef4db44d834f98f27bc2d7cf944dcac51055966a52fe848f16ba44caa1202f89"
time="2024-11-15T04:46:11Z" level=debug msg="Running /var/lib/rancher/k3s/data/ef4db44d834f98f27bc2d7cf944dcac51055966a52fe848f16ba44caa1202f89/bin/kubectl [kubectl get nodes]"
NAME               STATUS     ROLES                  AGE   VERSION
ip-172-31-0-37     NotReady   <none>                 13m   v1.28.15+k3s1
ip-172-31-12-163   Ready      etcd                   15m   v1.28.15+k3s1
ip-172-31-4-1      Ready      control-plane,master   15m   v1.28.15+k3s1
ip-172-31-9-130    NotReady   etcd                   15m   v1.28.15+k3s1
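
On the NotReady etcd and agent nodes, the k3s service journal can be checked for the failed apiserver reconnect; a hedged sketch (the grep pattern is an assumption, not a specific log line from the report):

# Inspect the k3s service log on a NotReady node for apiserver
# connection errors following the restart.
$ sudo journalctl -u k3s --since '10 minutes ago' | grep -i apiserver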

Validation Results:

  • k3s version used for validation:
$ k3s -v 
k3s version v1.28.15+k3s-2d0661e3 (2d0661e3)
go version go1.22.8
$ sudo /usr/local/bin/kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes 
time="2024-11-15T05:17:07Z" level=debug msg="Asset dir /var/lib/rancher/k3s/data/94db4d0d3f49b472d61882c8de801d81d8245846286d9919ecfbbfd7bae8dedf"
time="2024-11-15T05:17:07Z" level=debug msg="Running /var/lib/rancher/k3s/data/94db4d0d3f49b472d61882c8de801d81d8245846286d9919ecfbbfd7bae8dedf/bin/kubectl [kubectl get nodes]"
NAME               STATUS   ROLES                  AGE   VERSION
ip-172-31-14-14    Ready    <none>                 13m   v1.28.15+k3s-2d0661e3
ip-172-31-15-153   Ready    control-plane,master   14m   v1.28.15+k3s-2d0661e3
ip-172-31-15-186   Ready    etcd                   14m   v1.28.15+k3s-2d0661e3
ip-172-31-7-89     Ready    etcd                   14m   v1.28.15+k3s-2d0661e3
