I recently bought a Jetson Orin NX 16GB running JetPack 5 (Ubuntu 20.04) with CUDA 11.4 installed.
Because of a ROS project and its requirements, I need to use the Humble version of ROS, which requires Ubuntu 22.04, and I need access to the GPU from a container with PyTorch.
I decided to use the image dustynv/ros:humble-desktop-l4t-r36.2.0 (Ubuntu 22.04), with its CUDA 12.2, and install PyTorch in it.
=> The actual problem is that with this setup I don't have access to the GPU inside the container (in the Python interpreter, torch.cuda.is_available() returns False). My question is: why? Is there a simple explanation for this?
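For reference, this is the kind of check I run inside the container (a minimal sketch; the cuda_report helper is just a name I made up for this post, and it assumes PyTorch is installed). It is also worth confirming the container was started with --runtime nvidia, which the dustynv images rely on to expose the GPU:

```python
# Quick diagnostic to run inside the container's Python interpreter.
# Assumption: PyTorch may or may not be installed, so we probe first.
import importlib.util

def cuda_report() -> str:
    """Return a one-line summary of PyTorch's view of the GPU."""
    if importlib.util.find_spec("torch") is None:
        return "torch: not installed"
    import torch
    if torch.cuda.is_available():
        return f"torch {torch.__version__}: CUDA ok, {torch.cuda.get_device_name(0)}"
    return f"torch {torch.__version__}: CUDA NOT available"

print(cuda_report())
```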
The problem does not seem to come from the different CUDA versions: using the image dustynv/ros:humble-desktop-pytorch-l4t-r35.4.1, with release version 35 (Ubuntu 20.04) and CUDA 12.2, I do have access to the GPU of my host machine. So the problem seems to come from the L4T release version.
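To confirm the release mismatch, I compare the host's L4T release with the container tag; on the host, /etc/nv_tegra_release starts with a line like "# R35 (release), REVISION: 4.1". A small sketch of that comparison (the parsing helper is my own, assuming that standard format):

```python
import re

def l4t_release(line: str) -> tuple[int, int]:
    """Parse an /etc/nv_tegra_release header line into (major, minor)."""
    m = re.search(r"R(\d+).*REVISION:\s*(\d+)", line)
    if not m:
        raise ValueError("unrecognized nv_tegra_release format")
    return int(m.group(1)), int(m.group(2))

# Example: a JetPack 5 host reports R35, while the r36.2.0 container
# expects the R36 driver stack that gets mounted in from the host.
host = l4t_release("# R35 (release), REVISION: 4.1")
container = (36, 2)
print(host[0] == container[0])  # prints: False (R35 host vs R36 container)
```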
I have thought of several solutions, some more appealing to me than others:
Flash my Jetson to JetPack 6 with L4T release 36 and CUDA 12; this should then work with my container running L4T release 36 (might be difficult, and CUDA could end up installed incorrectly...)
Downgrade CUDA to 11.4 in my container (but this will almost surely not work because of the container's initial runtime configuration)
Build my own container and try to figure the whole thing out (quite hard, I suppose)
If someone has a solution to my problem, I would be very grateful. We actually do have a workaround using the Foxy version of ROS in a release 35 container, but it creates problems elsewhere and is not practical for me.
Thanks!
PS: I already looked at issue #258, but it didn't help because it's not exactly the same problem.