RKE2 with Cilium and Hetzner CNI - Can't access container logs via Rancher UI #5163
rke2-agent requests a certificate for the kubelet and, in that process, passes the external and internal IPs. It then receives a certificate which is used by the kubelet. It seems that in your case, the InternalIP was not passed at all. Could you check your config? And could you get me the output of kubectl get node -o yaml for that node?
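For anyone verifying this on their own cluster, the SAN list of the kubelet serving certificate can be inspected directly on the node. A minimal sketch, assuming RKE2's default data directory (the path is an assumption; adjust it if you run a custom data-dir):

```sh
# Print the Subject Alternative Names of the kubelet serving cert.
# Path assumes RKE2's default agent data dir.
openssl x509 -in /var/lib/rancher/rke2/agent/serving-kubelet.crt \
  -noout -text | grep -A1 'Subject Alternative Name'
```

If the InternalIP is missing from this list, connections the apiserver makes to the kubelet over that address (such as log streaming) will fail TLS verification.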
I've seen this issue in the past, where the hetzner cloud provider sets node IPs that the kubelet was not aware of, so the kubelet cert's IPs don't match the node IPs. As @manuelbuil noted, you need to tell RKE2 what IPs to use so that the cert has the correct attributes.
I think @brandond has a point. Any document explaining how to deploy Kubernetes on Hetzner cloud adds an extra flag to include the internal IP in the certificate SAN, e.g. https://community.hetzner.com/tutorials/install-kubernetes-cluster#step-33---setup-control-plane. The hetzner cloud provider must be acting a bit differently compared to other cloud providers. Could you deploy rke2 using the --tls-san flag?
I'm already using --tls-san.
The --tls-san flag only controls the certs used by the supervisor and apiserver. What you're looking at here is the kubelet cert.
```yaml
metadata:
  annotations:
    alpha.kubernetes.io/provided-node-ip: 49.12.216.49
status:
  addresses:
  - address: 49.12.216.49
    type: ExternalIP
  - address: ec1-fsn1-cax-21-worker-51522148-4d5fl
    type: Hostname
  - address: 10.0.0.5
    type: InternalIP
```

The node info shows that the kubelet has detected only the node's public IP, but the hetzner cloud provider has set the private IP as the InternalIP and the public IP as the ExternalIP. You should set the node's --node-ip and --node-external-ip to match what hetzner will use, so that the kubelet serving cert has the correct IPs in its SAN list.
Oh okay, and how can I do that? Shouldn't the Hetzner Cloud Controller do that for me? Or is this caused by Rancher provisioning the nodes?
You haven't configured the node IPs at all. RKE2 and the kubelet are detecting the public IP on their own.
And how can I change that? Can this be configured to be set automatically, so that after provisioning by Rancher it's ready to go?
You could try configuring the apiserver kubelet-preferred-address-types arg to prefer the external IP over the internal IP. Anything else around changing the address order set on the node would need to be done on the hetzner cloud provider side.
The default for kubelet-preferred-address-types in kubeadm is Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP, right?
RKE2 uses a different default order than kubeadm. I would probably recommend ExternalIP,Hostname, since the others may not work well with what hetzner is setting up for you.
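On RKE2 servers, that suggestion would translate to something like the following sketch, using RKE2's kube-apiserver-arg passthrough (kubelet-preferred-address-types is a standard kube-apiserver flag; the order shown is just the recommendation above):

```yaml
# /etc/rancher/rke2/config.yaml on the server node(s)
kube-apiserver-arg:
  - kubelet-preferred-address-types=ExternalIP,Hostname
```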
I've set it to ExternalIP,Hostname and it's working now.
Closing :)
I just noticed that some metrics are not scraped; the error message in the rke2-metrics-server is:
Can I set kubelet-preferred-address-types for the metrics-server too?
Yes, you should be able to do that via helm chart config.
Where exactly can I change the helm chart values of the metrics server in the Rancher UI, and which value would I need to set? I can't find it when editing the cluster.
Via the Rancher UI, you can use the User Addon section of the cluster management UI to deploy a HelmChartConfig for the rke2-metrics-server chart.
Added this to "Additional Manifest" in the Add-On Config, and it's working now:
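For reference, a HelmChartConfig along these lines applies the same address-type ordering to the metrics server. This is a sketch, assuming the rke2-metrics-server chart forwards an args list from its values to the container; check your chart's values before relying on it:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-metrics-server
  namespace: kube-system
spec:
  valuesContent: |-
    # Assumes the chart exposes an args list for extra container flags.
    args:
      - --kubelet-preferred-address-types=ExternalIP,Hostname
```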
Environmental Info:
RKE2 Version:
v1.27.8+rke2r1
Rancher Version:
v2.8.0
Node(s) CPU architecture, OS, and Version:
ARM, Debian 12
Cluster Configuration:
2 workers and 1 master
Describe the bug:
Can't view logs of pods via the UI.
socket.js:106 WebSocket connection to 'wss://****.com/k8s/clusters/c-m-tfdwn25h/api/v1/namespaces/kube-system/pods/rke2-ingress-nginx-controller-dzsgb/log?previous=false&follow=true&timestamps=true&pretty=true&container=rke2-ingress-nginx-controller&sockId=8' failed
Response:
Steps To Reproduce:
Create a new RKE2 cluster in the Rancher UI with Cilium and the external cloud provider for Hetzner Cloud (https://github.com/hetznercloud/hcloud-cloud-controller-manager), run any container image in a pod or deployment, and try to view the container logs via the UI.