Duplicate endpoints prevents any endpoint from being used #5577
brandond changed the title from "Duplicate endpoints prevents any enpoint from being used" to "Duplicate endpoints prevents any endpoint from being used" on Mar 7, 2024.
Tracked in k3s as k3s-io/k3s#9693.
Validated on master branch with commit 109f70b
Environment Details
Infrastructure
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Config.yaml:
Additional files: registries.yaml
Testing Steps
Check containerd logs for errors:
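One way to do this, assuming the default RKE2 containerd log location (adjust the path if your install differs):

```sh
# Search containerd's log on an affected node for the hosts.toml decode error
grep "failed to decode hosts.toml" /var/lib/rancher/rke2/agent/containerd/containerd.log
```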
Replication Results:
Validation Results:
Environmental Info:
RKE2 Version:
rke2 version v1.27.10+rke2r1 (915672b)
go version go1.20.13 X:boringcrypto
Node(s) CPU architecture, OS, and Version:
Linux node01 4.18.0-477.10.1.el8_8.x86_64 #1 SMP Wed Apr 5 13:35:01 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 8.8 (Ootpa)
Cluster Configuration:
9 cluster members: 3 servers, 6 workers
Describe the bug:
Including a duplicate private registry endpoint in registries.yaml causes an error, and none of the private mirrors are used; only the default fallback endpoint is tried.
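A minimal sketch of a registries.yaml that could produce this, assuming the duplication is simply the same mirror endpoint listed twice (the registry names, endpoint, and credentials below are illustrative, not the reporter's actual configuration):

```yaml
# /etc/rancher/rke2/registries.yaml (illustrative sketch)
mirrors:
  registry.example.com:
    endpoint:
      - "https://mirror.example.com:5000"
      - "https://mirror.example.com:5000"   # same endpoint appears a second time
configs:
  "mirror.example.com:5000":
    auth:
      username: pull-user      # hypothetical credentials
      password: pull-password
```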
Steps To Reproduce:
1. Configure a private registry mirror with a duplicate endpoint in /etc/rancher/rke2/registries.yaml (see the sketch above).
2. Deploy a workload that pulls through the mirror; the pods go into ImagePullBackOff.
3. Check the containerd logs; there is an error about duplicate endpoints:
level=error msg="failed to decode hosts.toml" error="failed to parse TOML: (24, 2): duplicated tables"
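The error comes from containerd rejecting the hosts.toml that RKE2 generates from registries.yaml in its certs.d directory: with a duplicate endpoint, the generated file defines the same [host] table twice, which the TOML parser refuses, so the whole file is ignored. A minimal sketch of such a broken file, with the path and names assumed for illustration:

```toml
# e.g. /var/lib/rancher/rke2/agent/etc/containerd/certs.d/registry.example.com/hosts.toml (path assumed)
server = "https://registry.example.com"

[host."https://mirror.example.com:5000"]
  capabilities = ["pull", "resolve"]

# The same table defined a second time triggers
# "failed to parse TOML: ... duplicated tables", and containerd then
# falls back to the default endpoint only.
[host."https://mirror.example.com:5000"]
  capabilities = ["pull", "resolve"]
```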
Expected behavior:
Either try the duplicate endpoint or skip over it; the other configured endpoints should still be used.
Actual behavior:
Only the default fallback endpoint is processed; none of the private registries are tried, resulting in ImagePullBackOff for all workloads. We left a cluster in this state for three hours and deleted the pods many times, with exactly the same result.
Additional context / logs:
.....