Increase timeouts on CI #3726

Merged: 1 commit into containerd:main on Dec 4, 2024

Conversation

@apostasie (Contributor) commented Dec 4, 2024

Either GH is busier, or they downgraded the base instances we are using.

One way or the other (see the sketch after this list):

  • Windows unit tests now hit the 5-minute mark
  • building dependencies on non-arm machines now takes over 10 minutes
  • IPv6 tests are hitting the 10-minute mark
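
For context, a minimal sketch of the kind of GitHub Actions change involved, assuming the affected jobs use job- and step-level timeout-minutes; the job name, paths, and values below are illustrative, not the exact ones touched by this commit:

```yaml
# .github/workflows/test.yml — illustrative excerpt, not the actual diff
jobs:
  test-windows:
    runs-on: windows-2022
    timeout-minutes: 30            # overall job cap
    steps:
      - uses: actions/checkout@v4
      - name: "Run unit tests"
        # step cap relaxed from ~5 to ~10 minutes to absorb slower runners
        timeout-minutes: 10
        run: go test ./pkg/...     # illustrative test invocation
```

Since timeout-minutes is valid both at the job level and on individual steps, the bump can target only the steps that are actually slow.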

@AkihiroSuda @djdongjin at your convenience.

Signed-off-by: apostasie <[email protected]>
@djdongjin (Member) left a comment

LGTM, thanks

@djdongjin added this to the v2.0.2 milestone on Dec 4, 2024
@apostasie (Contributor, Author) commented

Thanks @djdongjin

Just increased the IPv6 test timeout as well.

Not sure whether the GH slowness is transient, but either way we need a bit of leeway, and I did set these timeouts too aggressively in the first place.

@apostasie (Contributor, Author) commented

Ok... something is very wrong with GH right now...

https://github.com/containerd/nerdctl/actions/runs/12150865305

@apostasie (Contributor, Author) commented

@djdongjin there definitely was some transient issue with GitHub causing a bunch of timeouts.

I still think relaxing these timeouts a bit is warranted and will not hurt (Windows normally clocks in at 4+ minutes, quite close to the current 5-minute limit), so let's merge if you are OK with that.

@djdongjin merged commit a5ab79c into containerd:main on Dec 4, 2024
30 checks passed
@fahedouch (Member) commented

Is the execution engine (e.g. Docker) that runs these jobs using the maximum CPU/memory available on the machine? If not, we will reach a limit with these timeouts :/

@apostasie (Contributor, Author) commented

> Is the execution engine (e.g. Docker) that runs these jobs using the maximum CPU/memory available on the machine? If not, we will reach a limit with these timeouts :/

Not sure. I will have a look around.

On that note, I have mixed feelings about the fact that we run the tests inside a Docker container at all:

  • it introduces a layer of complexity (and possibly additional bugs, especially around networking, but maybe also the snapshotter?)
  • it also changes the kind of situations we would normally see on a host, by restricting us to this "simplified" Docker environment

I know we do that for a reason (environment consistency and reproducibility), but it comes at a cost.
Maybe we could have the canary job run on the host instead of inside the container (that would obviously require a lot of work to set up the environment, so...).

@apostasie (Contributor, Author) commented

@fahedouch

https://github.com/containerd/nerdctl/pull/3726/checks#step:3:87

docker info reports 4 CPUs and 16GB of memory.

Is there somewhere else dockerd could constrain container resources in a way that would not show up in docker info?
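
For anyone who wants to check, a minimal sketch of a debug step that could be dropped into a workflow, assuming an Ubuntu runner with Docker preinstalled; the step name and the cgroup paths are assumptions for illustration, not something from this PR:

```yaml
# Illustrative debug step: compare what the daemon advertises with what a
# container actually sees (a cgroup limit would show up in the latter).
- name: "Dump Docker resource limits"
  run: |
    docker info --format 'daemon sees: {{.NCPU}} CPUs, {{.MemTotal}} bytes of memory'
    docker run --rm busybox sh -c 'echo "container sees:"; nproc; free -m'
    # cgroup v2 limits, if any ("max" means unconstrained); ignore errors on cgroup v1
    docker run --rm busybox sh -c 'cat /sys/fs/cgroup/cpu.max /sys/fs/cgroup/memory.max' || true
```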
