
Changing token / agent-token required for security restart log needs update #4890

Closed
ShylajaDevadiga opened this issue Oct 16, 2023 · 2 comments

@ShylajaDevadiga
Contributor

Environmental Info:
RKE2 Version:
Commit: 45c2122

Node(s) CPU architecture, OS, and Version:
Ubuntu 22.04

Cluster Configuration:
Single or multi node

Describe the bug:
The log message printed to the console needs to be updated to say restart rke2 instead of restart k3s.

$ rke2 token rotate --token token1 --new-token=token2
WARNING: Recommended to keep a record of the old token. If restoring from a snapshot, you must use the token associated with that snapshot.
WARN[0000] Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation.
Token rotated, restart k3s with new token

Steps To Reproduce:

  1. Install rke2
  2. Run rke2 token rotate --token token1 --new-token=token2 to change the server token (see the sketch below)
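
For reference, a minimal reproduction sketch, assuming a fresh Ubuntu host and the standard install script from https://get.rke2.io; the token values token1/token2 are just the examples used in this issue:

$ # Install an rke2 server configured with the initial token
$ curl -sfL https://get.rke2.io | sudo sh -
$ echo "token: token1" | sudo tee -a /etc/rancher/rke2/config.yaml
$ sudo systemctl enable --now rke2-server.service

$ # Rotate the token; before the fix, the final message said "restart k3s"
$ rke2 token rotate --token token1 --new-token=token2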

Expected behavior:
Token rotated, restart rke2 with new token

Actual behavior:
Token rotated, restart k3s with new token

Additional context / logs:

@ShylajaDevadiga ShylajaDevadiga added the kind/bug Something isn't working label Oct 16, 2023
@ShylajaDevadiga ShylajaDevadiga added this to the v1.28.3+rke2r1 milestone Oct 16, 2023
@dereknola
Member

/backport v1.25.15+rke2r1 v1.26.10+rke2r1 v1.27.7+rke2r1

@ShylajaDevadiga
Contributor Author

Validated on rke2 version v1.28.3-rc2+rke2r1

Environment Details

Infrastructure
Cloud EC2 instance

Node(s) CPU architecture, OS, and Version:
Ubuntu 22.04

Cluster Configuration:
3 server 1 agent

Config.yaml:

$ cat /etc/rancher/rke2/config.yaml 
write-kubeconfig-mode: "0644"
token: token1
node-external-ip: <IP>

Steps to reproduce the issue and validate the fix

  1. Copy config.yaml
  2. Install rke2
  3. As a non-root user, run rke2 token rotate --token token1 --new-token=token2
  4. Update config.yaml with the new token
  5. Restart the rke2 service on all nodes
  6. Reboot all nodes
  7. Verify the token is updated on every node, the cluster is up, and pods are in a Running state (see the sketch below)
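
A sketch of steps 4–7 on a single server node, assuming the config.yaml shown above; agent nodes would restart rke2-agent.service instead of rke2-server.service, and the sed pattern is illustrative only:

$ # Point the node at the new token and restart the service
$ sudo sed -i 's/^token: token1$/token: token2/' /etc/rancher/rke2/config.yaml
$ sudo systemctl restart rke2-server.service

$ # After all nodes are restarted and rebooted, confirm cluster health
$ # (write-kubeconfig-mode: "0644" lets a non-root user read the kubeconfig)
$ export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
$ /var/lib/rancher/rke2/bin/kubectl get nodes
$ /var/lib/rancher/rke2/bin/kubectl get pods -A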

Validated that the log message on the console output displays the correct distro:

$ rke2 token rotate --token token1 --new-token=token2
WARNING: Recommended to keep a record of the old token. If restoring from a snapshot, you must use the token associated with that snapshot.
Token rotated, restart rke2 nodes with new token
