Reflector not watching secrets after period of time #341
Comments
This is a known issue currently. For some reason the k8s master nodes stop sending updates for secrets and don't close the session, so it keeps running. Please use a smaller watcher timeout for now (~15 minutes) so it forces the watcher to close.
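For reference, a minimal sketch of what that workaround could look like in a Helm-based install. The `configuration.watcherTimeout` key below is an assumption used only for illustration; the actual setting name depends on your chart and Reflector version, so check the chart's values.yaml before applying it:

```yaml
# values.yaml override for the kubernetes-reflector Helm chart (sketch only).
# NOTE: "watcherTimeout" is a hypothetical key used for illustration;
# verify the real configuration option for your chart/Reflector version.
configuration:
  # Force the secret/configmap watchers to close and reopen every 15 minutes
  watcherTimeout: "00:15:00"
```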
Hello @winromulus. We have the same issue on our GKE cluster. Reflector works perfectly, and then after 2 days it stops syncing the secrets.
Version: v7.0.193
@rayanebel same issue as above. For some reason the Secrets watcher stops receiving notifications from k8s.
@winromulus Ok, I will try the timeout settings. Concerning our context, we are using Reflector in several Kubernetes clusters hosted with different providers (AWS, GCP, and Scaleway), and currently Reflector stops working only on our GKE clusters.
This issue seems to have been fixed with the latest version of the Kubernetes client. Please upgrade and reopen this issue if there's still a problem.
@winromulus - Had this one bite us today: the process silently hung without any notice and stopped replicating data. This caused Traefik to serve expired SSL certificates, which clients then rejected. Are there any diagnostics we can run? The secrets change fortnightly at best, so I'm not sure hiking the timeouts that high is the right play.
We're using this instead: https://github.com/mittwald/kubernetes-replicator
@steve-gray can you provide more details about your environment (kube version, how it is hosted, etc.)? Also, which version of Reflector are you using?
We are experiencing this too. We use EKS, with Reflector 7.1.262 and Kubernetes 1.26. For the last month or two, we've noticed that once every week or two Reflector stops reflecting ConfigMaps. There are no error logs in the Reflector pod -- it just stops noticing that it needs to reflect ConfigMaps into new environments. Restarting the pod fixes the issue (a scheduled-restart stopgap is sketched below).
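Since a pod restart clears the condition, one stopgap until the root cause is fixed is a scheduled rollout restart. A minimal sketch, assuming Reflector runs as a Deployment named `reflector` in the `reflector` namespace; the namespace, Deployment, and ServiceAccount names are placeholders, and the ServiceAccount must be bound to a Role allowing `patch` on deployments:

```yaml
# CronJob that restarts the reflector Deployment nightly as a workaround.
# Namespace, Deployment, and ServiceAccount names are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reflector-nightly-restart
  namespace: reflector
spec:
  schedule: "0 3 * * *"          # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: reflector-restarter
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command:
                - kubectl
                - rollout
                - restart
                - deployment/reflector
                - -n
                - reflector
```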
@ivababukova can you provide more information about the flavor of Kubernetes you're using (k8s or k3s or something else)? Also, are you self-hosting or using a cloud provider?
Hey there! Nothing in particular in the logs except the missing SecretWatcher entries:
The periodic update of all the watched secrets still seems to work, and the last run was 5 days ago (successful). I'll try the watcher timeout trick and will dig into the code to check if I see anything weird. Versions:
Hello. We had the same problem on the 7th of October 2024: Reflector stopped replicating secrets. Here is the end of the log:
After this point there were no logs at all anymore. Concerning the metrics, the Reflector pod's CPU was almost zero (which seems normal, because it wasn't doing anything anymore). Nothing specific about memory usage just before the incident. Here is the version information:
Is it possible to solve this problem, please? It makes the Reflector solution unstable, unfortunately :(.
Sorry if this has been raised before.
Running multiple small clusters, AKS 1.25.x
In 2 environments so far, the SecretWatcher seems to just stop watching. As a result, we find our Let's Encrypt certs starting to expire in namespaces. The ConfigMapWatcher seems to keep going.
We're running kubernetes-reflector:6.1.9. We've been running this (awesome) microservice for months with no problems to be seen, then both environments stopped working within 4 days of each other.
Logs are as follows:
No faults in the logs, but the Core.SecretWatcher never comes back up. One was discovered today, so we're quite confident it's not going to autoheal.
We'll be updating to 7.x shortly, but we're raising this now in the hopes of some enlightenment :)