
Improved build logs when errors occur during container startup #398

Open
rocketeerbkw opened this issue Nov 25, 2024 · 1 comment

@rocketeerbkw
Member

One of the steps in a deployment that can have failures is Applying Deployments. This is the part where k8s is attempting to roll out the new container images.

Currently, if a rollout fails, the build logs might not be totally clear. We just output the "raw" k8s conditions:

nginx-476fb88445-f9cw7  Running Ready   containers with unready status: [nginx]
nginx-476fb88445-f9cw7  Running ContainersReady containers with unready status: [nginx]

But there are multiple reasons a container can be "unready": application failures (pods crashing), application delays (pods failing to start in time), or misconfiguration (pods listening on the wrong port).

I think we can do a better job of looking at these error cases and providing more useful errors to users in the build logs. We can translate the raw k8s messages into steps users can take in Lagoon to fix deployment issues.
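As a rough illustration of the translation step, here is a minimal sketch of mapping raw k8s condition messages to user-facing hints. The condition substrings and hint texts are hypothetical placeholders, not Lagoon's actual messages:

```python
# Hypothetical sketch: translate raw Kubernetes pod condition messages
# into actionable build-log hints. Patterns and hints are illustrative.
HINTS = {
    "containers with unready status": (
        "One or more containers never became ready. Check the readiness probe "
        "configuration and confirm the application listens on the expected port."
    ),
    "back-off restarting failed container": (
        "The container is crash-looping. Inspect the container logs for "
        "startup errors."
    ),
}


def translate_condition(message: str) -> str:
    """Return a user-facing hint for a raw k8s condition message."""
    lowered = message.lower()
    for pattern, hint in HINTS.items():
        if pattern in lowered:
            return hint
    # Fall back to the raw message when no pattern matches.
    return message


print(translate_condition("containers with unready status: [nginx]"))
```

A real implementation would need a richer pattern set, but the shape is the same: match known failure signatures, emit guidance the user can act on inside Lagoon.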

@shreddedbacon
Member

shreddedbacon commented Nov 25, 2024

Yeah, this would be fairly simple to do. Extend the failure handling to run a kubectl events --for pod/${POD} -o json and scan the events for failure messages like readiness or liveness probe failures, non-zero exit codes, etc. Then we could have some messages defined that would be shown depending on which events are returned.
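The approach above could be sketched as follows, assuming the JSON from kubectl events is an event list whose items carry reason and message fields. The reason names follow standard kubelet events, but the explanations attached to them here are illustrative, not an agreed wording:

```python
import json

# Hypothetical mapping of well-known k8s event reasons to friendlier
# explanations. The reason names are standard kubelet events; the
# explanation strings are placeholders for discussion.
KNOWN_FAILURES = {
    "Unhealthy": (
        "a readiness or liveness probe is failing; check the probe "
        "configuration and the port the application listens on"
    ),
    "BackOff": (
        "the container keeps exiting; check the container logs for "
        "startup errors"
    ),
    "FailedScheduling": (
        "the pod could not be scheduled; the cluster may be short on resources"
    ),
}


def scan_events(events_json: str) -> list[str]:
    """Scan a kubectl events JSON document for known failure reasons."""
    events = json.loads(events_json)
    findings = []
    for item in events.get("items", []):
        reason = item.get("reason", "")
        if reason in KNOWN_FAILURES:
            raw = item.get("message", "")
            findings.append(f"{reason}: {KNOWN_FAILURES[reason]} (raw: {raw})")
    return findings


sample = json.dumps({
    "items": [
        {"reason": "BackOff", "message": "Back-off restarting failed container"},
        {"reason": "Pulled", "message": "Container image already present"},
    ]
})
for finding in scan_events(sample):
    print(finding)
```

Events with reasons we don't recognise would simply be passed through or logged raw, so nothing is hidden from the user.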
