
Multiple Rancher Pods on the same worker node #1

Open
dmlabs opened this issue Jan 29, 2021 · 2 comments
Assignees
Labels
bug Something isn't working

Comments

@dmlabs
Contributor

dmlabs commented Jan 29, 2021

Since Rancher v2.5 it is no longer possible to run more than one Rancher Pod on the same worker node.
I will provide more information as soon as I have time to investigate.

@dmlabs dmlabs added the bug Something isn't working label Jan 29, 2021
@dmlabs
Contributor Author

dmlabs commented Mar 11, 2021

For the most efficient resource utilization, we plan to run multiple Rancher Server instances on the same Kubernetes cluster.
So it is neither Rancher HA nor Rancher single node, but a combination of these two official Rancher installation modes:
we want one Kubernetes cluster to host multiple independent Rancher instances, each with its own embedded etcd store, as if each were running as a Rancher single node.
To achieve that we start the Rancher Pods with the --k8s-mode argument set to "embedded".
This picture explains our idea (diagram: rancher-saas):
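The setup described above can be sketched as one StatefulSet per Rancher instance. This is a minimal illustration of our approach, not a manifest taken from our environment; all names, the image tag, and the storage size are placeholders:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rancher-saas-instance1     # one StatefulSet per independent Rancher instance
spec:
  serviceName: rancher-saas-instance1
  replicas: 1
  selector:
    matchLabels:
      app: rancher-saas-instance1
  template:
    metadata:
      labels:
        app: rancher-saas-instance1
    spec:
      containers:
        - name: rancher
          image: rancher/rancher:v2.5.7        # illustrative tag
          args: ["--k8s-mode", "embedded"]     # run embedded k3s/etcd inside the Pod
          securityContext:
            privileged: true                   # required since Rancher v2.5
          volumeMounts:
            - name: rancher-data
              mountPath: /var/lib/rancher      # per-instance state, including etcd
  volumeClaimTemplates:
    - metadata:
        name: rancher-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi                      # placeholder size
```

Each additional Rancher instance gets its own StatefulSet of this shape, so the state under /var/lib/rancher stays isolated per instance.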

That worked perfectly with Rancher v2.4, but with Rancher v2.5 we have problems running more than one Rancher Pod on the same Kubernetes worker node.
The first Rancher Pod runs without any issues until we start a second Rancher Pod on the same worker node.
Then both Rancher Pods on that node crash and restart repeatedly with this error message: [FATAL] k3s exited with: exit status 255
I assume this has something to do with the new requirement in Rancher v2.5 to run in privileged mode, causing the two Rancher Pods to interfere with each other at the Kubernetes node host network level.
Attached to this issue are two files containing the logs of two Rancher v2.5 Pods on the same Kubernetes node.

Now the questions are:

  • Is there a way/solution now for Rancher v2.5 to start multiple Rancher Pods on the same Kubernetes node with --k8s-mode=embedded?
  • Will there be a way/solution in the future for Rancher v2.5 and later to start multiple Rancher Pods on the same Kubernetes node with --k8s-mode=embedded?

onzack-rancher-saas-statefulset1-rancher-0.log
onzack-rancher-saas-statefulset2-rancher-0.log
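Until the co-location problem is resolved, one possible interim mitigation, assuming the conflict really is node-local, would be to keep the Rancher Pods on separate nodes with pod anti-affinity. This is a sketch of our own, not an official Rancher recommendation; the shared label is a placeholder:

```yaml
# Added to each Rancher Pod template; assumes all Rancher SaaS Pods
# carry a common label such as app.kubernetes.io/name: rancher-saas.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: rancher-saas
        topologyKey: kubernetes.io/hostname   # never co-schedule two Rancher Pods on one node
```

The obvious downside is that this caps the number of Rancher instances at the number of worker nodes, which defeats the resource-utilization goal, so it only papers over the issue.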

@dmlabs dmlabs self-assigned this Mar 11, 2021
@dmlabs
Contributor Author

dmlabs commented May 11, 2021

We noticed a difference in the content of /var/lib/rancher between the official Rancher single node installation and our Rancher SaaS Pods.

Official Rancher single node installation:

ls -ahl /var/lib/rancher
total 28K
drwxr-xr-x 5 rancher root 4.0K May  6 18:34 .
drwxr-xr-x 1 root    root 4.0K May  6 21:58 ..
-rw-r--r-- 1 root    root    1 May  6 18:34 .existing
drwxr-xr-x 3 rancher root 4.0K May 10 00:00 etcd
drwxr-xr-x 5 root    root 4.0K May  6 18:32 k3s
drwx------ 6 root    root 4.0K May  6 18:49 management-state

Rancher SaaS Pod:

ls -ahl /var/lib/rancher
total 28K
drwxr-xr-x 5 rancher root 4.0K May  6 18:34 .
drwxr-xr-x 1 root    root 4.0K May  6 21:58 ..
-rw-r--r-- 1 root    root    1 May  6 18:34 .existing
drwxr-xr-x 5 root    root 4.0K May  6 18:32 k3s
drwx------ 6 root    root 4.0K May  6 18:49 management-state

The etcd directory is missing, which could indicate that etcd is not contained in the Rancher SaaS Pod, as it should be with the --k8s-mode=embedded setting.

1 participant