Speed up tests #293

Merged: 26 commits merged into main from test-user-installation on Jan 21, 2025
Conversation

@jarlsondre (Collaborator) commented Jan 16, 2025

Summary

Since the tests are quite slow, it would be helpful to have them run in parallel. This PR does this by using pytest-xdist. Additionally, it removes the Horovod build, removes the CI tests for TensorFlow, and installs the dependencies with uv instead of pip, which speeds up the setup time for the tests significantly.

Old time for tests to complete: 39 minutes
New time for tests to complete: 6 minutes
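
For illustration, a minimal sketch of the kind of workflow steps this describes, assuming GitHub Actions; the workflow name, Python version, and the ".[dev]" extras are assumptions rather than the PR's actual configuration:

```yaml
# Hypothetical excerpt; not the actual .github/workflows/pytest.yml.
name: pytest
on: [pull_request]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"  # assumed version
      # uv resolves and installs dependencies much faster than pip
      - name: Install dependencies with uv
        run: |
          pip install uv
          uv pip install --system ".[dev]"  # the ".[dev]" extras are assumed
      # pytest-xdist spreads the tests across all available CPUs
      - name: Run tests in parallel
        run: pytest -n auto tests/
```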

Related issue(s): #294

@jarlsondre jarlsondre self-assigned this Jan 16, 2025
@jarlsondre jarlsondre changed the title Test user installation [DRAFT] Test user installation Jan 16, 2025
@jarlsondre jarlsondre changed the title [DRAFT] Test user installation [DRAFT] Run tests in parallel Jan 17, 2025
@jarlsondre jarlsondre changed the title [DRAFT] Run tests in parallel Run tests in parallel Jan 17, 2025
@jarlsondre jarlsondre marked this pull request as ready for review January 17, 2025 15:54
@jarlsondre jarlsondre added the enhancement New feature or request label Jan 17, 2025
@jarlsondre jarlsondre linked an issue Jan 17, 2025 that may be closed by this pull request
@annaelisalappe (Collaborator) left a comment


To me it looks good :) Very cool idea

.github/workflows/pytest-loggers.yml (review thread, outdated; resolved)
@matbun (Collaborator) commented Jan 17, 2025

Isn't there a better way to parallelize the tests? It seems like a bit of an overhead to maintain a bunch of similar files.

@jarlsondre (Collaborator, Author) commented Jan 20, 2025

> Isn't there a better way to parallelize the tests? It seems like a bit of an overhead to maintain a bunch of similar files.

@matbun

Currently the tests are being parallelized in two ways:

  1. By splitting them up so that each folder is run separately
  2. By using pytest-xdist, which distributes the tests across multiple CPUs.

The latter way is the simplest, but for some reason one of the logger tests fails when run in parallel but not when run sequentially. This suggests that there could be dependencies between the tests, which, if true, would be a sign that they should probably be changed a bit.

Right now, the only tests that are parallelized are the use-case tests. The other ones are so fast that there is no need.

In terms of the overhead, the only thing I can think of is that if you add a new test folder, you also have to add a workflow script for it. The scripts are identical apart from the folder name.
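
For illustration, a sketch of what one such per-folder workflow could look like, assuming GitHub Actions; the workflow name, trigger, and install step are hypothetical placeholders, not the repo's actual files:

```yaml
# Hypothetical per-folder workflow: to cover a new test folder,
# copy this file and change only the folder name.
name: pytest-use-cases
on: [pull_request]
jobs:
  use-cases:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install uv && uv pip install --system ".[dev]"
      # -n auto lets pytest-xdist use every available CPU core
      - run: pytest -n auto tests/use-cases
```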

@jarlsondre jarlsondre changed the title Run tests in parallel Speed up tests Jan 20, 2025
tests/loggers/conftest.py (review thread, outdated; resolved)
.github/workflows/pytest.yml (review thread, outdated; resolved)
@jarlsondre jarlsondre requested a review from matbun January 20, 2025 16:09
@jarlsondre jarlsondre requested a review from matbun January 20, 2025 16:57
@jarlsondre jarlsondre requested a review from matbun January 21, 2025 09:05
@matbun (Collaborator) commented Jan 21, 2025

As the branch name suggests, I would also include in this PR the tests of the user installation of itwinai.

Also, it would be nice to add that as a prerequisite to the action that publishes new releases to PyPI. In other words, test the user installation upon the creation of a new release, and block the release if the user installation fails.

A similar concept could be applied to the unit and integration tests plus the container CI/CD. In other words, do not release until the tests and containers are green.
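
For illustration, a sketch of how such gating could look in GitHub Actions, where the publish job declares the test job as a prerequisite via needs; all job names and the CLI smoke test here are hypothetical:

```yaml
# Hypothetical release workflow: publishing is gated on a user-install test.
name: release
on:
  release:
    types: [published]
jobs:
  user-install-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install exactly as a user would: no -e flag, no dev extras
      - run: pip install .
      # Smoke test; assumes the itwinai CLI entry point is installed
      - run: itwinai --help
  publish:
    needs: user-install-test  # refuses to run unless the test job is green
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install build && python -m build
      # Actual upload step omitted (e.g. pypa/gh-action-pypi-publish)
```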

@jarlsondre jarlsondre closed this Jan 21, 2025
@jarlsondre jarlsondre reopened this Jan 21, 2025
@jarlsondre (Collaborator, Author) commented:

> As the branch name suggests, I would also include in this PR the tests of the user installation of itwinai.
>
> Also, it would be nice to add that as a prerequisite to the action that publishes new releases to PyPI. In other words, test the user installation upon the creation of a new release, and block the release if the user installation fails.
>
> A similar concept could be applied to the unit and integration tests plus the container CI/CD. In other words, do not release until the tests and containers are green.

@matbun
The name of the branch is misleading; I meant to change it but forgot.

Regarding testing the user installation: when I looked into this earlier, I concluded that what we are looking for is essentially already implemented. The generic_torch.sh script already does everything the user installation does, the only difference being that it uses the -e flag. Additionally, we have an MNIST use case. I started creating this test but saw that everything I had planned was already covered by the use-case tests, so I don't see what more there is to add. This would have been relevant with the old method of installing itwinai, but I don't think it is with the new one.

Regarding the release: since we install itwinai with pip install ., we are already checking the result of the newest release.
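
For illustration, the two install modes being compared here, written as hypothetical CI steps; only the -e flag separates the developer install used by generic_torch.sh from a plain user install:

```yaml
# Hypothetical steps contrasting the two install modes discussed above.
steps:
  # Developer install (what generic_torch.sh effectively does):
  # -e links the source tree in place, so edits take effect immediately.
  - name: Editable install
    run: pip install -e .
  # User-style install: builds and installs the package the same way
  # "pip install itwinai" would from PyPI.
  - name: User install
    run: pip install .
```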

@jarlsondre jarlsondre merged commit 9192008 into main Jan 21, 2025
12 checks passed
@jarlsondre jarlsondre deleted the test-user-installation branch January 21, 2025 15:24
Labels: enhancement (New feature or request)
Projects: none yet
Development: successfully merging this pull request may close the issue "Slow tests—Make them run in parallel"
3 participants