
Added dependency upgrades for python3.12 support #1951

Open · wants to merge 28 commits into master

Conversation

shivakrishnaah

No description provided.

@CLAassistant commented Nov 9, 2024

CLA assistant check
All committers have signed the CLA.

@shivakrishnaah (Author) commented Nov 9, 2024

Pull Request

fixes #1926

Description

Upgraded most of the dependencies to support Python 3.12.

Changes Made

Upgraded the dependencies and the Python Docker images.
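The kind of change involved can be sketched as a pyproject.toml fragment. The version ranges below are hypothetical illustrations, not the PR's actual diff:

```toml
# Hypothetical sketch: widen the interpreter range so Python 3.12 resolves,
# and advertise it in the trove classifiers.
[tool.poetry]
classifiers = [
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
]

[tool.poetry.dependencies]
python = ">=3.9,<3.13"
```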

Checklist

  • Code follows the project's style guidelines
  • All tests related to the changes pass successfully
  • Documentation is updated (if necessary)
  • Code is reviewed by at least one other team member
  • Any breaking changes are communicated and documented

@shivakrishnaah shivakrishnaah changed the title #1926 added dependency upgrades for python3.12 support Added dependency upgrades for python3.12 support Nov 9, 2024
@sakoush (Member) left a comment

@shivakrishnaah many thanks for your great contribution adding support for Python 3.12.

To keep the changes in this PR as minimal as possible, let's just add support for Python 3.12 and keep mlserver's default version at Python 3.10 for now. We cannot upgrade everything to Python 3.12 at the moment, as that would break other workflows we have.

So let's revert the changes to the CI and the Dockerfile.

I have triggered the CI anyway so that we can look at the test status.

Review comments were left on the following files (all resolved):

  • .github/workflows/tests.yml (outdated)
  • .github/workflows/benchmark.yml (outdated)
  • .github/workflows/benchmark.yml (outdated)
  • .github/workflows/licenses.yml (outdated)
  • .github/workflows/licenses.yml (outdated)
  • runtimes/alibi-explain/pyproject.toml (outdated)
  • tests/test_env.py (outdated)
  • tests/testdata/environment.yml
  • tox.ini (outdated)
  • tox.ini (outdated)
@sakoush (Member) commented Nov 18, 2024

@shivakrishnaah Many thanks for your recent changes. There are some issues with the CI (check here as an example):

pyproject.toml changed significantly since poetry.lock was last generated. Run poetry lock [--no-update] to fix the lock file.

The first thing to fix on your PR is probably to run make lock at the top level to refresh the poetry lock files for mlserver.
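A minimal sketch of that fix, assuming Poetry 1.x is on PATH (the guards make the snippet a no-op when poetry or pyproject.toml is missing):

```shell
# Sketch: refresh the lock file after editing pyproject.toml.
# `make lock` wraps this for every package in the repo; the raw
# command for a single package is shown here.
if command -v poetry >/dev/null 2>&1 && [ -f pyproject.toml ]; then
    # --no-update re-locks against pyproject.toml without bumping
    # already-resolved versions (Poetry 1.x flag)
    poetry lock --no-update
else
    echo "poetry or pyproject.toml not found; skipping lock refresh"
fi
```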

There are a few things to note:

  • The minimum Torch version required for Python 3.12 support is 2.4 (check here). This needs to be updated.
  • Some tests are failing in my local env. They will need to be addressed.
  • Some internal libraries (e.g. alibi and alibi-detect) pin their TF dependencies to <2.15, so we will also have to upgrade these internal libraries before accepting this change (we will upgrade them ourselves). For now, please do not look at the changes / tests for these runtimes: mlserver-alibi-detect and mlserver-alibi-explain.
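The Torch point can be expressed with Poetry's multiple-constraints dependency syntax, giving Python 3.12 its own lower bound. The exact ranges below are illustrative, not taken from the PR:

```toml
# Illustrative: require torch >= 2.4 only on Python 3.12 (the minimum
# version with 3.12 support, per the review), while older interpreters
# keep a wider range.
[tool.poetry.dependencies]
torch = [
    { version = ">=2.4.0,<3.0.0", python = ">=3.12" },
    { version = ">=2.0.0,<3.0.0", python = "<3.12" },
]
```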

Note: I pushed a few commits for reference; I hope this is fine with you.

@shivakrishnaah (Author)

@sakoush Thank you for taking the time to review my PR. I have pulled the changes from your branch and merged them with mine. Please let me know if there's anything else I need to resolve or review.

@sakoush (Member) commented Nov 18, 2024

@shivakrishnaah please have a look at the CI, as numerous tests are failing and will need to be resolved as part of your change (excluding, as discussed, the alibi-explain and alibi-detect tests for now). Please let me know if you have any questions.

@shivakrishnaah (Author)

@sakoush Fixed the test case failures. The remaining failures are caused by alibi_detect and alibi, which pin their TensorFlow dependency to <2.15.
