This repository has been archived by the owner on Jun 20, 2024. It is now read-only.
Containerd: v1.7.3
Scenario: cluster with 1 master and 2 workers
Weave Net was installed by running the command below on the master, as described here:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
weave version: v2.8.1
$ uname -a
Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-83-generic x86_64) on all 3 machines
$ kubectl version
Client Version: v1.27.4
Kustomize Version: v5.0.1
Server Version: v1.27.4
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Starting node/masternode Starting kubelet.
24m Warning InvalidDiskCapacity node/masternode invalid capacity 0 on image filesystem
24m Normal NodeHasSufficientMemory node/masternode Node masternode status is now: NodeHasSufficientMemory
24m Normal NodeHasNoDiskPressure node/masternode Node masternode status is now: NodeHasNoDiskPressure
24m Normal NodeHasSufficientPID node/masternode Node masternode status is now: NodeHasSufficientPID
24m Normal NodeAllocatableEnforced node/masternode Updated Node Allocatable limit across pods
23m Normal Starting node/masternode
23m Normal RegisteredNode node/masternode Node masternode event: Registered Node masternode in Controller
9m4s Normal SandboxChanged pod/nginx-sbx-deployment-5d8888c5ff-4sk4v Pod sandbox changed, it will be killed and re-created.
18m Normal Killing pod/nginx-sbx-deployment-5d8888c5ff-4sk4v Stopping container nginx
34m Warning BackOff pod/nginx-sbx-deployment-5d8888c5ff-4sk4v Back-off restarting failed container nginx in pod nginx-sbx-deployment-5d8888c5ff-4sk4v_default(ad4902b0-f978-4088-a610-100d78e490bd)
4m Warning FailedKillPod pod/nginx-sbx-deployment-5d8888c5ff-4sk4v error killing pod: failed to "KillPodSandbox" for "ad4902b0-f978-4088-a610-100d78e490bd" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4b3b18e1d2dd8888bbc3b031d9fbb2fbe5f71200c6590dd1bd8b40c549fd3a8\": plugin type=\"weave-net\" name=\"weave\" failed (delete): Delete \"http://127.0.0.1:6784/ip/e4b3b18e1d2dd8888bbc3b031d9fbb2fbe5f71200c6590dd1bd8b40c549fd3a8\": dial tcp 127.0.0.1:6784: connect: connection refused"
58m Normal Pulled pod/nginx-sbx-deployment-5d8888c5ff-fj2jq Container image "nginx:1.24" already present on machine
12m Normal Killing pod/nginx-sbx-deployment-5d8888c5ff-fj2jq Stopping container nginx
8m17s Normal SandboxChanged pod/nginx-sbx-deployment-5d8888c5ff-fj2jq Pod sandbox changed, it will be killed and re-created.
83m Warning BackOff pod/nginx-sbx-deployment-5d8888c5ff-fj2jq Back-off restarting failed container nginx in pod nginx-sbx-deployment-5d8888c5ff-fj2jq_default(fea7e842-cb7e-42a9-a452-9438451710b4)
3m12s Warning FailedKillPod pod/nginx-sbx-deployment-5d8888c5ff-fj2jq error killing pod: failed to "KillPodSandbox" for "fea7e842-cb7e-42a9-a452-9438451710b4" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"248dfef27a109fdbb7ed87dbec75919b6e94da7a4e3a1e1c0db0db24249c54e1\": plugin type=\"weave-net\" name=\"weave\" failed (delete): Delete \"http://127.0.0.1:6784/ip/248dfef27a109fdbb7ed87dbec75919b6e94da7a4e3a1e1c0db0db24249c54e1\": dial tcp 127.0.0.1:6784: connect: connection refused"
3m40s Normal Scheduled pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Successfully assigned default/nginx2-sbx-deployment-5d8888c5ff-5vgdf to nodea
3m40s Warning FailedCreatePodSandBox pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e42305556b393685737551d0001539b1d336a35466caf40e2c703a887e8562a6": plugin type="weave-net" name="weave" failed (add): unable to allocate IP address: Post "http://127.0.0.1:6784/ip/e42305556b393685737551d0001539b1d336a35466caf40e2c703a887e8562a6": dial tcp 127.0.0.1:6784: connect: connection refused
6s Normal SandboxChanged pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Pod sandbox changed, it will be killed and re-created.
2m45s Normal Pulled pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Container image "nginx:1.24" already present on machine
2m45s Normal Created pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Created container nginx
2m45s Normal Started pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Started container nginx
32s Normal Killing pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Stopping container nginx
3m41s Normal Scheduled pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Successfully assigned default/nginx2-sbx-deployment-5d8888c5ff-gqngv to nodeb
3m40s Warning FailedCreatePodSandBox pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "713ed7fc025a5dac1ebd36fb05fe22c1201539d5c734f5373cf512f7d22369ff": plugin type="weave-net" name="weave" failed (add): unable to allocate IP address: Post "http://127.0.0.1:6784/ip/713ed7fc025a5dac1ebd36fb05fe22c1201539d5c734f5373cf512f7d22369ff": dial tcp 127.0.0.1:6784: connect: connection refused
2m21s Normal SandboxChanged pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Pod sandbox changed, it will be killed and re-created.
2m21s Normal Pulled pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Container image "nginx:1.24" already present on machine
2m21s Normal Created pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Created container nginx
2m21s Normal Started pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Started container nginx
3m41s Normal SuccessfulCreate replicaset/nginx2-sbx-deployment-5d8888c5ff Created pod: nginx2-sbx-deployment-5d8888c5ff-gqngv
3m41s Normal SuccessfulCreate replicaset/nginx2-sbx-deployment-5d8888c5ff Created pod: nginx2-sbx-deployment-5d8888c5ff-5vgdf
3m41s Normal ScalingReplicaSet deployment/nginx2-sbx-deployment Scaled up replica set nginx2-sbx-deployment-5d8888c5ff to 2
77m Normal Starting node/nodea
75m Normal Starting node/nodea
73m Normal Starting node/nodea
71m Normal Starting node/nodea
67m Normal Starting node/nodea
60m Normal Starting node/nodea
52m Normal Starting node/nodea
46m Normal Starting node/nodea
40m Normal Starting node/nodea
29m Normal Starting node/nodea
23m Normal RegisteredNode node/nodea Node nodea event: Registered Node nodea in Controller
22m Normal Starting node/nodea
20m Normal Starting node/nodea
19m Normal Starting node/nodea
14m Normal Starting node/nodea
9m49s Normal Starting node/nodea
51s Normal Starting node/nodea
82m Normal Starting node/nodeb
76m Normal Starting node/nodeb
69m Normal Starting node/nodeb
63m Normal Starting node/nodeb
57m Normal Starting node/nodeb
50m Normal Starting node/nodeb
44m Normal Starting node/nodeb
37m Normal Starting node/nodeb
31m Normal Starting node/nodeb
23m Normal RegisteredNode node/nodeb Node nodeb event: Registered Node nodeb in Controller
19m Normal Starting node/nodeb
13m Normal Starting node/nodeb
7m33s Normal Starting node/nodeb
2m20s Normal Starting node/nodeb
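All of the Warning events above fail the same way: the CNI plugin cannot reach the weave router's HTTP API on 127.0.0.1:6784 (connection refused), which suggests the weave-net pod on that node is not running or is crash-looping. A hedged diagnostic sketch (the pod name is taken from the logs further down and is just an example):

```shell
# List the weave-net DaemonSet pods; a non-Running or CrashLoopBackOff pod
# on nodea or nodeb would explain the "connection refused" errors above.
kubectl -n kube-system get pods -l name=weave-net -o wide

# Logs of the weave container on the affected node (pod name is an example).
kubectl -n kube-system logs weave-net-gps28 -c weave --previous

# On the node itself: is anything listening on the weave API port?
ss -ltn 'sport = :6784'
```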
What you expected to happen?
I'm using weave-net, and the kubectl version is:
Containerd v1.7.3
Scenario: 1 master and 2 workers cluster
Weave Net was installed by running the command below on the master, as described here:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
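After applying that manifest, the DaemonSet can be checked with standard kubectl commands; a sketch (the manifest installs into kube-system with the label name=weave-net):

```shell
# Wait for the weave-net DaemonSet to become ready on every node.
kubectl -n kube-system rollout status daemonset/weave-net

# One weave-net pod should be Running on the master and on each worker.
kubectl -n kube-system get pods -l name=weave-net -o wide
```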
What happened?
The pods on the worker nodes fail to start. Below are the pod status and the logs I get from the weave-net pod on the failing node.
Logs from the failing pod weave-net-gps28:
checkpoint-api.weave.works is not resolvable.
There is a similar issue on the other worker node as well.
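As far as I know, checkpoint-api.weave.works is only the endpoint for Weave's optional version-check (checkpoint) client, so failing to resolve it should not itself break pod networking. A quick sketch to confirm resolution from a node (assuming getent is available, as on Ubuntu):

```shell
# Resolve Weave's version-check endpoint; the fallback message fires
# when the host is not resolvable (harmless for pod networking).
getent hosts checkpoint-api.weave.works || echo "not resolvable (version check only)"
```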
How to reproduce it?
On a fresh install of Kubernetes (1 control-plane master and 2 worker nodes), apply the Weave deployment from the official release:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
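The failure can then be reproduced with a minimal deployment; a sketch using names that mirror the ones in the events above (any image should do):

```shell
# Create a two-replica nginx deployment scheduled onto the workers,
# then watch for the FailedCreatePodSandBox / FailedKillPod warnings.
kubectl create deployment nginx2-sbx-deployment --image=nginx:1.24 --replicas=2
kubectl get events --field-selector type=Warning -w
```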
Anything else we need to know?
Physical machines
Versions:
Logs: