Cuda compatibility #20
Hello. Thank you for your interest in Liberate.FHE. The CUDA version of Liberate.FHE is tied to the PyTorch version. Currently, our package build system installs the latest version of PyTorch:

```python
import torch
print(torch.__version__)
# '2.2.1+cu121'
```

So the easiest way is to switch your CUDA toolkit (nvcc) to the CUDA version of the installed torch. However, if you are reluctant to change the CUDA version because of your other projects, there is another method I can suggest. When you clone our repository, there is a pyproject.toml file. You can change it in the following way:

```toml
[tool.poetry.dependencies]
python = ">=3.10,<3.13"
numpy = "^1.23.5"
mpmath = "^1.3.0"
scipy = "^1.10.1"
matplotlib = "^3.7.1"
joblib = "^1.2.0"
torch = "==2.2.1"
tqdm = "^4.66.1"
ninja = "^1.11.1.1" to
The link I changed as an example is the CUDA 11.5 build of PyTorch 1.11 for Python 3.10. However, we have confirmed that it works with other CUDA versions as well.

There is one more way:

```
$ export CUDA_HOME=/usr/local/cuda-12.1
$ export PATH=$CUDA_HOME/bin:$PATH
$ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
```

And check:

```
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0
```

And then build our project. We build it manually for now, but we have already registered the package on PyPI to make installation even simpler. Please wait a little longer. For now, that's all I can tell you. Thank you so much.
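Before rebuilding, it may help to confirm that the nvcc on your PATH reports the same CUDA version that the installed torch wheel was built for. This is just a quick sanity check, not part of the original instructions:

```bash
# Compare the toolkit compiler version with the CUDA version torch was built for.
nvcc --version | grep "release"
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```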
---

Hi again,

Thanks for your help, I can build it with the snippet:

```
$ export CUDA_HOME=/usr/local/cuda-12.1
$ export PATH=$CUDA_HOME/bin:$PATH
$ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
```

and afterwards I can then …

But when importing liberate:

```python
import liberate
```

I got the following error message, which seems to be a CUDA-related issue. Do you have a workaround for it? Thanks
---

I have those "undefined symbol" issues as well. Did you find a solution to this issue @tguerand?

Edit: I tried downgrading PyTorch to version 2.1.1, and that removed all of the "undefined symbol" errors. Hope this works for others too.
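If you want to reproduce that downgrade, one possible way (an assumption, not the project's documented workflow) is to install the matching PyTorch build explicitly before recompiling the CUDA extensions:

```bash
# Pin torch 2.1.1 from PyTorch's CUDA 12.1 wheel index; pick the index
# (cu118, cu121, ...) that matches the toolkit you compile with.
pip install "torch==2.1.1" --index-url https://download.pytorch.org/whl/cu121
```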
---

I am trying to install liberate-fhe and I am facing quite a few CUDA-related issues, following https://docs.desilo.ai/liberate-fhe/getting-started/installation

Everything is fine until the step "Run CUDA compile script".

Is there a specific CUDA version that is needed? My machines are either on CUDA 10.1 or 11.5, and I am a bit reluctant to upgrade the CUDA version as it could interfere with some of my other projects.

Furthermore, if the CUDA compiler version is different from the runtime one (nvcc vs. nvidia-smi), the installation also fails to set up. Is there a fix for this, or is it intended? I thought that if the runtime version is more recent than the compiler version it should work (as with torch, for example: I have nvcc version 10.1, but torch uses cu118 and works totally fine).
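For reference, the two versions being compared come from different tools; the snippet below is only an illustration of that distinction, not part of the original report:

```bash
nvcc --version   # CUDA toolkit/compiler version used to build the extensions
nvidia-smi       # driver version and the highest CUDA runtime the driver supports
```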
I tried to:
Thanks in advance