I installed Harvester 1.2.1, connected Rancher 2.8.0, and then created an RKE2 cluster with the Cilium CNI.
I created an IP pool with a few available IP addresses, and the virtual machine load balancer works fine. Then I deployed a workload with a LoadBalancer-type Service, and it got stuck with the message "Service is ready: Load balancer is being provisioned".
On the Harvester side I can see that the LB for this Service was created and is active, but it does not show which IP address is in use (although in the IP pool I can see that one more IP address was occupied).
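For anyone debugging the same symptom, the stuck state is also visible from inside the guest cluster; `my-service` below is an illustrative name, not one from this report:

```sh
# EXTERNAL-IP stays <pending> because no address is ever assigned
kubectl get svc my-service -o wide
# The events repeat the "Load balancer is being provisioned" message
kubectl describe svc my-service
```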
I don't believe this is the correct place for your issue. Please reopen this issue against harvester, as you are using the harvester load-balancer controller.
I had a similar problem, and switching from Cilium to Calico CNI didn't help.
Rancher provisioned RKE2 with 3 × "etcd + control plane" + 2 × worker nodes, and the kube-system/kube-vip DaemonSet was not running anywhere (I think because of a NoExecute taint on the nodes with etcd). Reconfiguring the cluster as 3 × etcd + 2 × control plane + 2 × worker solved my issue.
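To confirm this scheduling gap on an affected cluster, you can compare the node taints with where kube-vip actually landed (a sketch; output will vary by cluster):

```sh
# RKE2 places a NoExecute taint on dedicated etcd nodes
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINT-KEYS:.spec.taints[*].key'
# With no tolerated nodes, the DaemonSet reports 0 desired/ready pods
kubectl -n kube-system get daemonset kube-vip
```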
The issue is that the kube-vip DaemonSet only had a toleration for control-plane nodes, not for etcd nodes. This comment solves the problem: harvester/harvester#4891 (comment)
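In essence, the fix boils down to giving the kube-vip DaemonSet a toleration for RKE2's etcd taint. A minimal sketch of that kind of patch is below; the durable place to set this is the chart values described in the linked comment, since a direct patch may be reverted by Rancher, and it assumes the DaemonSet already has a `tolerations` list:

```sh
# Illustrative only: tolerate the node-role.kubernetes.io/etcd:NoExecute
# taint so kube-vip can also schedule onto etcd-only nodes
kubectl -n kube-system patch daemonset kube-vip --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/tolerations/-",
   "value": {"key": "node-role.kubernetes.io/etcd",
             "operator": "Exists",
             "effect": "NoExecute"}}
]'
```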