At present, Lagoon will happily build a 5 GiB Docker image, push it to Harbor, and then pull the image down to the nodes when the pods are scheduled. This is problematic for:

- Pod scheduling - k8s operations are held up waiting on image pulls
- Harbor - pushing and pulling huge images is slow and ties up resources
- Developers - they have to wait longer for all of the above to happen
It would be good if we could use the new deployment warning system to provide:

- Warnings - when the size of any single image is > X GiB, where X is defined on a per-cluster basis.
- Failures - when the size of any single image is > Y GiB, where Y is defined on a per-cluster basis.
These could default to unlimited (i.e. the current status quo) or to extremely relaxed values. Best practices for Docker images could be developed as well, or you could link to something like https://www.pixelite.co.nz/article/dockerfile-best-practices/ ;)
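For illustration only, here is a minimal sketch of the kind of check a build step could run after `docker build`, assuming the per-cluster thresholds arrive as environment variables. The variable names `IMAGE_SIZE_WARN_GIB` and `IMAGE_SIZE_FAIL_GIB` are hypothetical, not existing Lagoon settings:

```python
import os
import subprocess
import sys

# Hypothetical per-cluster thresholds in GiB; unset means unlimited (today's behaviour).
WARN_GIB = float(os.environ.get("IMAGE_SIZE_WARN_GIB", "inf"))
FAIL_GIB = float(os.environ.get("IMAGE_SIZE_FAIL_GIB", "inf"))

GIB = 1024 ** 3


def image_size_gib(image: str) -> float:
    """Return the local size of a built image in GiB via `docker image inspect`."""
    out = subprocess.check_output(
        ["docker", "image", "inspect", "--format", "{{.Size}}", image]
    )
    return int(out.strip()) / GIB


def check_image(image: str) -> None:
    size = image_size_gib(image)
    if size > FAIL_GIB:
        print(f"ERROR: {image} is {size:.2f} GiB, over the {FAIL_GIB} GiB failure limit")
        sys.exit(1)
    if size > WARN_GIB:
        print(f"WARNING: {image} is {size:.2f} GiB, over the {WARN_GIB} GiB warning threshold")
    else:
        print(f"OK: {image} is {size:.2f} GiB")


if __name__ == "__main__":
    for image in sys.argv[1:]:
        check_image(image)
```

In practice the failure path would feed into the deployment warning system rather than just exiting, but the size check itself is this simple.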