-
Can you show more information on what the loop looks like? I see that there are some pods getting recreated, but that is generally expected when the backing service changes, especially since you are using
At this point I'm not seeing anything that's not expected behavior, given how you've configured the service.
-
Here's a video demo, and here's a watch of kubectl get nodes with a LoadBalancer service that has 4 ports across 5 nodes:
As I said earlier, this goes on for a few hours every time there is some change, either a node going down or a change to the LoadBalancer service.
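For reference, a watch along these lines captures the same churn in text form (a sketch, not the exact command from the video; the svclb-* pod prefix and the kube-system namespace are the K3s ServiceLB defaults and are assumptions about this cluster):

```sh
# Watch the ServiceLB (Klipper) pods being created and destroyed.
# svclb-* is the default name prefix K3s uses for these pods.
kubectl get pods -n kube-system -o wide --watch | grep svclb
```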
-
You're not really providing enough information to determine why this is occurring. Is the daemonset getting updated, or is something else terminating the pods and causing the daemonset controller to recreate them? Are the node labels flapping? Are the service endpoints changing? I would probably look at the events for one of the terminated pods to see why it was killed, as well as monitor the endpoints for your service; a sketch of both checks is below.
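Something along these lines, as a sketch (assuming the Traefik service and the svclb pods live in kube-system, as in a default K3s install; `<terminated-svclb-pod>` is a placeholder for one of the pods that got killed):

```sh
# Why was a given svclb pod terminated? Its events usually say.
kubectl -n kube-system describe pod <terminated-svclb-pod>

# Recent events in the namespace, sorted by creation time.
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp

# Watch the endpoints of the Traefik service to see whether they are flapping.
kubectl -n kube-system get endpoints traefik --watch
```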
This discussion was converted from issue #8314 on September 07, 2023 20:55.
-
Environmental Info:
K3s Version: 1.24.16 (this also happened in previous patch versions)
Node(s) CPU architecture, OS, and Version: 5 VPS servers running Ubuntu 22
Cluster Configuration: 1 master, 4 workers
Describe the bug:
When something happens to one of the nodes and the load balancer needs to "rebalance", or a new LoadBalancer service is created, Klipper goes into a loop of creating and destroying containers (see picture).
Can anyone give me an idea of the cause of this situation? Basically I have Traefik set up as a DaemonSet across 5 nodes.
The nodes are hosted in the cloud and all use an external IP address.
This is my Traefik LoadBalancer service (IPs and ports are "censored"):
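The real manifest is only in the screenshot, so as a rough, hypothetical sketch of that kind of service (names, namespace, and ports are placeholders, and externalTrafficPolicy: Local is an assumption based on wanting to preserve client addresses, mentioned in the notes below):

```sh
# Hypothetical approximation of the Traefik LoadBalancer service; the real
# ports and namespace differ (they were censored in the screenshot).
# --dry-run=client only validates the manifest, it does not change the cluster.
cat <<'EOF' | kubectl apply --dry-run=client -f -
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # assumption: needed so backends see client IPs
  selector:
    app.kubernetes.io/name: traefik
  ports:
    - name: web
      port: 80
      targetPort: 8000
    - name: websecure
      port: 443
      targetPort: 8443
EOF
```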
Klipper DaemonSet that gets created:
Pictures:
Extra notes:
All my current nodes have the svccontroller.k3s.cattle.io/enablelb=true label.
All of the nodes are labeled svccontroller.k3s.cattle.io/lbpool=pool1, and one single node is labeled svccontroller.k3s.cattle.io/lbpool=pool2 (see the sketch after these notes).
The load balancer config is based on several attempts and threads found in this repository, so that my services get to see the clients' addresses.
Eventually, after a very long time, the "loop" stops and all the Klipper pods become healthy, but this is taking a very long time...
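A sketch of how those labels would be applied; node-1 through node-5 are placeholder names, not the actual nodes:

```sh
# Enable the K3s ServiceLB on every node and put them all in pool1.
kubectl label nodes node-1 node-2 node-3 node-4 node-5 \
  svccontroller.k3s.cattle.io/enablelb=true \
  svccontroller.k3s.cattle.io/lbpool=pool1

# lbpool is a single label key, so assigning pool2 to node-5 replaces
# its pool1 value rather than adding a second pool.
kubectl label node node-5 svccontroller.k3s.cattle.io/lbpool=pool2 --overwrite
```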