Unable to pin pytorch with cuda. #2194
I also tried a more specific version for torch: $ micromamba create -f env.yml -vvvv |& grep pytorch
info libmamba Parsing MatchSpec pytorch::pytorch=1.13.0=py3.8_cuda11.6*
info libmamba Searching index cache file for repo 'https://conda.anaconda.org/pytorch/linux-64/repodata.json'
pytorch/linux-64 Using cache
info libmamba Searching index cache file for repo 'https://conda.anaconda.org/pytorch/noarch/repodata.json'
pytorch/noarch Using cache
info libmamba Reading cache files '/home/username/micromamba/pkgs/cache/ee0ed9e9.*' for repo index 'https://conda.anaconda.org/pytorch/linux-64'
info libmamba Reading cache files '/home/username/micromamba/pkgs/cache/edb1952f.*' for repo index 'https://conda.anaconda.org/pytorch/noarch'
info libmamba Parsing MatchSpec pytorch::pytorch=1.13.0=py3.8_cuda11.6*
info libmamba Parsing MatchSpec pytorch::pytorch=1.13.0=py3.8_cuda11.6*
info libsolv job: install pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0
info libsolv pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] (w1)
info libsolv conflicting pytorch-cuda-11.6-h867d48c_1 (assertion)
info libsolv conflicting pytorch-cuda-11.6-h867d48c_0 (assertion)
info libsolv installing pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 (assertion)
info libsolv propagate decision -2325: !pytorch-cuda-11.6-h867d48c_1 [2325] Conflict.level1
info libsolv propagate decision -2324: !pytorch-cuda-11.6-h867d48c_0 [2324] Conflict.level1
info libsolv !pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] (w1) Install.level1
info libsolv pytorch-cuda-11.6-h867d48c_0 [2324] (w2) Conflict.level1
info libsolv pytorch-cuda-11.6-h867d48c_1 [2325] Conflict.level1
info libsolv pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] (w1) Install.level1
info libsolv !pytorch-cuda-11.6-h867d48c_0 [2324] (w1) Conflict.level1
info libsolv !pytorch-cuda-11.6-h867d48c_1 [2325] (w1) Conflict.level1
info libsolv conflicting pytorch-cuda-11.6-h867d48c_1 (assertion)
info libsolv conflicting pytorch-cuda-11.6-h867d48c_0 (assertion)
info libsolv propagate decision -2325: !pytorch-cuda-11.6-h867d48c_1 [2325] Conflict.level1
info libsolv propagate decision -2324: !pytorch-cuda-11.6-h867d48c_0 [2324] Conflict.level1
info libsolv !pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] (w1)
info libsolv pytorch-cuda-11.6-h867d48c_0 [2324] (w2) Conflict.level1
info libsolv pytorch-cuda-11.6-h867d48c_1 [2325] Conflict.level1
info libsolv -> decided to conflict pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0
info libsolv propagate decision -1078: !pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] Conflict.level1
- nothing provides cuda 11.6.* needed by pytorch-cuda-11.6-h867d48c_0
Great report! Want to try the new error messages? See #2078
The relevant bit that is new seems to be: =================================== Experimental messages (new) ====================================
critical libmamba Invalid dependency info: <NULL>
Is this saying that …
does installing …
note that the …
I think you need the CONDA_OVERRIDE_CUDA environment variable, since the __cuda virtual package is only detected automatically when a GPU driver is present.
Interesting. That does solve the issue: CONDA_OVERRIDE_CUDA=11.6 micromamba create -f env.yml
__
__ ______ ___ ____ _____ ___ / /_ ____ _
/ / / / __ `__ \/ __ `/ __ `__ \/ __ \/ __ `/
/ /_/ / / / / / / /_/ / / / / / / /_/ / /_/ /
/ .___/_/ /_/ /_/\__,_/_/ /_/ /_/_.___/\__,_/
/_/
pytorch/noarch No change
nvidia/noarch 2.6kB @ 7.5kB/s 0.3s
pytorch/linux-64 No change
nvidia/linux-64 96.3kB @ 239.4kB/s 0.4s
conda-forge/noarch 10.8MB @ 3.7MB/s 2.6s
conda-forge/linux-64 28.8MB @ 4.3MB/s 6.9s
Transaction
Prefix: /home/username/micromamba/envs/testenv
Updating specs:
- python[version='>=3.8,<3.9']
- pytorch::pytorch==1.13.0[build=py3.8_cuda11.6*]
- cudatoolkit=11.6
Package Version Build Channel Size
────────────────────────────────────────────────────────────────────────────────────────────────────────
Install:
────────────────────────────────────────────────────────────────────────────────────────────────────────
+ _libgcc_mutex 0.1 conda_forge conda-forge/linux-64 Cached
+ _openmp_mutex 4.5 2_kmp_llvm conda-forge/linux-64 6kB
+ blas 2.116 mkl conda-forge/linux-64 13kB
+ blas-devel 3.9.0 16_linux64_mkl conda-forge/linux-64 13kB
+ bzip2 1.0.8 h7f98852_4 conda-forge/linux-64 Cached
+ ca-certificates 2022.12.7 ha878542_0 conda-forge/linux-64 Cached
+ cuda 11.6.1 0 nvidia/linux-64 1kB
+ cuda-cccl 11.6.55 hf6102b2_0 nvidia/linux-64 1MB
+ cuda-command-line-tools 11.6.2 0 nvidia/linux-64 1kB
+ cuda-compiler 11.6.2 0 nvidia/linux-64 1kB
+ cuda-cudart 11.6.55 he381448_0 nvidia/linux-64 198kB
+ cuda-cudart-dev 11.6.55 h42ad0f4_0 nvidia/linux-64 1MB
+ cuda-cuobjdump 11.6.124 h2eeebcb_0 nvidia/linux-64 138kB
+ cuda-cupti 11.6.124 h86345e5_0 nvidia/linux-64 23MB
+ cuda-cuxxfilt 11.6.124 hecbf4f6_0 nvidia/linux-64 290kB
+ cuda-driver-dev 11.6.55 0 nvidia/linux-64 17kB
+ cuda-gdb 12.0.90 0 nvidia/linux-64 6MB
+ cuda-libraries 11.6.1 0 nvidia/linux-64 2kB
+ cuda-libraries-dev 11.6.1 0 nvidia/linux-64 2kB
+ cuda-memcheck 11.8.86 0 nvidia/linux-64 172kB
+ cuda-nsight 12.0.78 0 nvidia/linux-64 119MB
+ cuda-nsight-compute 12.0.0 0 nvidia/linux-64 1kB
+ cuda-nvcc 11.6.124 hbba6d2d_0 nvidia/linux-64 44MB
+ cuda-nvdisasm 12.0.76 0 nvidia/linux-64 50MB
+ cuda-nvml-dev 11.6.55 haa9ef22_0 nvidia/linux-64 67kB
+ cuda-nvprof 12.0.90 0 nvidia/linux-64 5MB
+ cuda-nvprune 11.6.124 he22ec0a_0 nvidia/linux-64 66kB
+ cuda-nvrtc 11.6.124 h020bade_0 nvidia/linux-64 18MB
+ cuda-nvrtc-dev 11.6.124 h249d397_0 nvidia/linux-64 18MB
+ cuda-nvtx 11.6.124 h0630a44_0 nvidia/linux-64 59kB
+ cuda-nvvp 12.0.90 0 nvidia/linux-64 120MB
+ cuda-runtime 11.6.1 0 nvidia/linux-64 1kB
+ cuda-samples 11.6.101 h8efea70_0 nvidia/linux-64 5kB
+ cuda-sanitizer-api 12.0.90 0 nvidia/linux-64 17MB
+ cuda-toolkit 11.6.1 0 nvidia/linux-64 1kB
+ cuda-tools 11.6.1 0 nvidia/linux-64 1kB
+ cuda-visual-tools 11.6.1 0 nvidia/linux-64 1kB
+ cudatoolkit 11.6.0 habf752d_9 nvidia/linux-64 861MB
+ gds-tools 1.5.0.59 0 nvidia/linux-64 43MB
+ icu 70.1 h27087fc_0 conda-forge/linux-64 14MB
+ ld_impl_linux-64 2.39 hcc3a1bd_1 conda-forge/linux-64 Cached
+ libblas 3.9.0 16_linux64_mkl conda-forge/linux-64 13kB
+ libcblas 3.9.0 16_linux64_mkl conda-forge/linux-64 13kB
+ libcublas 11.9.2.110 h5e84587_0 nvidia/linux-64 315MB
+ libcublas-dev 11.9.2.110 h5c901ab_0 nvidia/linux-64 326MB
+ libcufft 10.7.1.112 hf425ae0_0 nvidia/linux-64 98MB
+ libcufft-dev 10.7.1.112 ha5ce4c0_0 nvidia/linux-64 207MB
+ libcufile 1.5.0.59 0 nvidia/linux-64 772kB
+ libcufile-dev 1.5.0.59 0 nvidia/linux-64 13kB
+ libcurand 10.3.1.50 0 nvidia/linux-64 54MB
+ libcurand-dev 10.3.1.50 0 nvidia/linux-64 460kB
+ libcusolver 11.3.4.124 h33c3c4e_0 nvidia/linux-64 91MB
+ libcusparse 11.7.2.124 h7538f96_0 nvidia/linux-64 169MB
+ libcusparse-dev 11.7.2.124 hbbe9722_0 nvidia/linux-64 345MB
+ libffi 3.4.2 h7f98852_5 conda-forge/linux-64 Cached
+ libgcc-ng 12.2.0 h65d4601_19 conda-forge/linux-64 Cached
+ libgfortran-ng 12.2.0 h69a702a_19 conda-forge/linux-64 23kB
+ libgfortran5 12.2.0 h337968e_19 conda-forge/linux-64 2MB
+ libhwloc 2.8.0 h32351e8_1 conda-forge/linux-64 3MB
+ libiconv 1.17 h166bdaf_0 conda-forge/linux-64 1MB
+ liblapack 3.9.0 16_linux64_mkl conda-forge/linux-64 13kB
+ liblapacke 3.9.0 16_linux64_mkl conda-forge/linux-64 13kB
+ libnpp 11.6.3.124 hd2722f0_0 nvidia/linux-64 124MB
+ libnpp-dev 11.6.3.124 h3c42840_0 nvidia/linux-64 121MB
+ libnsl 2.0.0 h7f98852_0 conda-forge/linux-64 Cached
+ libnvjpeg 11.6.2.124 hd473ad6_0 nvidia/linux-64 2MB
+ libnvjpeg-dev 11.6.2.124 hb5906b9_0 nvidia/linux-64 2MB
+ libsqlite 3.40.0 h753d276_0 conda-forge/linux-64 Cached
+ libstdcxx-ng 12.2.0 h46fd767_19 conda-forge/linux-64 Cached
+ libuuid 2.32.1 h7f98852_1000 conda-forge/linux-64 Cached
+ libxml2 2.10.3 h7463322_0 conda-forge/linux-64 773kB
+ libzlib 1.2.13 h166bdaf_4 conda-forge/linux-64 Cached
+ llvm-openmp 15.0.6 he0ac6c6_0 conda-forge/linux-64 3MB
+ mkl 2022.1.0 h84fe81f_915 conda-forge/linux-64 209MB
+ mkl-devel 2022.1.0 ha770c72_916 conda-forge/linux-64 26kB
+ mkl-include 2022.1.0 h84fe81f_915 conda-forge/linux-64 763kB
+ ncurses 6.3 h27087fc_1 conda-forge/linux-64 Cached
+ nsight-compute 2022.4.0.15 0 nvidia/linux-64 801MB
+ openssl 3.0.7 h0b41bf4_1 conda-forge/linux-64 Cached
+ pip 22.3.1 pyhd8ed1ab_0 conda-forge/noarch Cached
+ python 3.8.15 h4a9ceb5_0_cpython conda-forge/linux-64 Cached
+ pytorch 1.13.0 py3.8_cuda11.6_cudnn8.3.2_0 pytorch/linux-64 1GB
+ pytorch-cuda 11.6 h867d48c_1 pytorch/noarch 3kB
+ pytorch-mutex 1.0 cuda pytorch/noarch 3kB
+ readline 8.1.2 h0f457ee_0 conda-forge/linux-64 Cached
+ setuptools 65.6.3 pyhd8ed1ab_0 conda-forge/noarch Cached
+ tbb 2021.7.0 h924138e_1 conda-forge/linux-64 2MB
+ tk 8.6.12 h27826a3_0 conda-forge/linux-64 Cached
+ typing_extensions 4.4.0 pyha770c72_0 conda-forge/noarch 30kB
+ wheel 0.38.4 pyhd8ed1ab_0 conda-forge/noarch Cached
+ xz 5.2.6 h166bdaf_0 conda-forge/linux-64 Cached
Summary:
Install: 91 packages
Total download: 6GB
────────────────────────────────────────────────────────────────────────────────────────────────────────
Confirm changes: [Y/n]
However, note that when I install with … So it seems like there's a discrepancy here that still needs an explanation. Any thoughts?
Not really, if the …
I can say confidently that my previous environments did not have the cuda package explicitly installed. So maybe it's just a requirements change.
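A quick way to see what CUDA level the solver assumes is to compare the virtual packages reported with and without the override (a minimal sketch, assuming a micromamba recent enough to list virtual packages in its info output):
$ micromamba info | grep __cuda                           # on a driver-less build host this prints nothing
$ CONDA_OVERRIDE_CUDA=11.6 micromamba info | grep __cuda  # should now report __cuda at version 11.6
Note that CONDA_OVERRIDE_CUDA only changes what the solver believes about the host; it does not install a driver, so the resulting environment still needs a real CUDA-capable driver at runtime.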
Troubleshooting docs
Search tried in issue tracker: pytorch cuda
Latest version of Mamba
Tried in Conda? Reproducible with Conda using the experimental solver.
Describe your issue
I am having issues specifying a version of pytorch (1.13) while also ensuring that I get the cuda version. See the pasted info in the other forms. The essential error is "nothing provides cuda 11.6.* needed by pytorch-cuda-11.6-h867d48c_0" when specifying pytorch::pytorch=1.13.*=*cuda*. How can I pin pytorch to 1.13 and force cuda?
The packages look to be available:
When I do try a similar install using conda, e.g. CONDA_OVERRIDE_CUDA=11.6 conda env create -f env.yml --experimental-solver=libmamba, then I get a similar error. Note that I am building on a host that does not have a GPU; this is for later installation from a lock file on a host that does have a GPU.
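Given that the fix above was CONDA_OVERRIDE_CUDA, one way to validate the pinned spec on the GPU-less build host without downloading the multi-GB transaction is a dry-run solve (a sketch, assuming the env name testenv and that your micromamba version supports --dry-run):
$ export CONDA_OVERRIDE_CUDA=11.6                     # make the solver assume a CUDA 11.6 driver on this host
$ micromamba create -n testenv -f env.yml --dry-run   # solve only and print the transaction without installing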
When I do not pin the version and use pytorch::pytorch=*=*cuda* instead, then there is a successful install, but it obviously isn't guaranteed to give the package version that I want. In this case, it gives 1.12.1.
mamba info / micromamba info
Logs
environment.yml
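The env.yml contents were not captured in this extract; a minimal sketch consistent with the MatchSpec in the -vvvv log and the "Updating specs" list above, with the name field and channel order being assumptions, would be:
name: testenv
channels:
  - pytorch
  - nvidia
  - conda-forge
dependencies:
  - python>=3.8,<3.9
  - pytorch::pytorch=1.13.0=py3.8_cuda11.6*   # assumed from the MatchSpec shown in the -vvvv log
  - cudatoolkit=11.6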
~/.condarc
None