Kubeadm with unstable deb repo #141
With the 4 RPis joined into the cluster, the load average on the master side is between 5 and 8.
Interesting, which component is taking the most load?
It seems to be slowly going down... From htop, in terms of CPU (in no particular order).
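A rough way to see which component is eating the CPU, assuming the control-plane pieces run as Docker containers under kubeadm (the commands below are generic examples):

```bash
# list processes sorted by CPU usage to spot the heaviest component (kube-apiserver, etcd, kubelet, ...)
ps aux --sort=-%cpu | head -n 15

# per-container view, if the control-plane components run as Docker containers
docker stats --no-stream
```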
I'm now back to a load average of 2.11... it just dropped to 1.18... still going down. So it seems the init phase was quite resource heavy. I'll leave the cluster as is to see where it settles before doing anything new.
... but it goes up and down... the workers have been quite calm so far, and most still are (load average between 0 and 2 for 1 RPi3 and 2 RPi2s), but one of them is quite active, with a load average up to 4.45 (RPi3).
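To compare the boards quickly, a snapshot loop along these lines works (hostnames are placeholders, not from the original thread):

```bash
# print the load average of every board in one pass
for h in rpi-master rpi3-1 rpi3-2 rpi2-1 rpi2-2; do
  printf '%s: ' "$h"
  ssh "$h" uptime
done
```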
Some findings; if you want the output of specific commands, just give them to me :)
It seems there are still some etcd issues:
but:
and:
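To double-check etcd itself, commands along these lines can help (pod and container names are illustrative and depend on the kubeadm version):

```bash
# what the API server reports about etcd and the other control-plane components
kubectl get componentstatuses

# logs of the static etcd pod created by kubeadm (pod name is a placeholder)
kubectl -n kube-system get pods
kubectl -n kube-system logs etcd-<master-node-name>

# health check from inside the etcd container (etcd v2-style command)
docker ps | grep etcd
docker exec <etcd-container-id> etcdctl cluster-health
```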
After 1 day with no activity on it:
Hi, until a later release, I'll use my 5 RPis to build a simple docker-swarm cluster and my 2 Cubietrucks with HDs as a GlusterFS endpoint. Alongside that, I'll test K8s on 64-bit infrastructure first to get more familiar with it, and then come back to the ARM version ;-)
Following #140, my findings:
Context:
On a worker node:
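The join step on a worker would have been roughly the following (token and master IP are placeholders; the exact flags depend on the kubeadm version from the unstable repo):

```bash
# join the worker to the master created by `kubeadm init` (values are placeholders)
kubeadm join --token=<token> <master-ip>

# if the join hangs or fails, the kubelet log on the worker usually says why
journalctl -u kubelet -f
```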
And when I create weave-kube (after having joined a first node, by the way):
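Creating weave-kube is a single kubectl apply of the manifest URL Weave documented at the time (the git.io shortlink is shown for reference and may no longer resolve):

```bash
# deploy the weave-net DaemonSet (the documented weave-kube one-liner at the time)
kubectl apply -f https://git.io/weave-kube

# then watch the pods come up on each node
kubectl -n kube-system get pods -o wide -w
```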
And then:
So maybe I didn't wait long enough for etcd at the first output above?
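One way to rule out the timing question is to wait on the control-plane pods before applying anything else (the pod name in the loop is a placeholder):

```bash
# watch kube-system pods until etcd and the apiserver report Running
kubectl -n kube-system get pods -w

# or poll in a loop
until kubectl -n kube-system get pod etcd-<master-node-name> \
      -o jsonpath='{.status.phase}' 2>/dev/null | grep -q Running; do
  sleep 5
done
```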
Let me know if you need anything else.