-
This is 99% my error but I hope there's a way out of it. I attempted to migrate my cluster from SQLite to etcd in accordance with #3982, and initially this appeared to have worked.

Then I tried adding a second master, and this is where everything went wrong. The second master complained about bad TLS handshakes, which I couldn't progress past. After rebooting nodes and restarting services, it now appears my first master is broken: the k3s service is in a constant restart loop, and kubectl commands fail with an error. Trying to run the whole server command again, even after stopping the k3s service, results in an alarming error.

I didn't make this an issue because this is likely my fault and not something that needs fixing. What can I do from here?
-
Can you provide the k3s service logs, going back to the original startup after adding the --cluster-init flag?
At the very least you should be able to roll back to the old sqlite datastore; the file is still there - you'd just need to delete the etcd db directory, remove the cli flag, and rename the sqlite database back into place.
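The rollback described above can be sketched as a few shell commands. This is a minimal sketch, not something tested against this cluster: the data directory below is the k3s default (`/var/lib/rancher/k3s/server/db`), and the backup filename of the renamed sqlite database is an assumption, so verify both on the node, and copy the db directory somewhere safe before deleting anything.

```shell
# Rollback sketch: revert a k3s server from etcd to the old sqlite
# datastore. Assumes a default k3s install; K3S_DB_DIR is just a
# convenience variable here, not a k3s setting.
# Stop the service first:  systemctl stop k3s
K3S_DB_DIR="${K3S_DB_DIR:-/var/lib/rancher/k3s/server/db}"

# 1. Delete the etcd db directory created by --cluster-init
rm -rf "$K3S_DB_DIR/etcd"

# 2. Rename the sqlite database back into place. The backup name below
#    is an assumption -- list the directory to find the actual file.
# mv "$K3S_DB_DIR/state.db-backup" "$K3S_DB_DIR/state.db"

# 3. Remove --cluster-init from the k3s unit file or config.yaml,
#    then:  systemctl start k3s
```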