PyTorch 2.0.1 ROCm 5.5 support #31
Comments
Should it work for gfx900?
Hi guys, the good news is that if the PCIe atomics feature is supported, PyTorch 2.x can run properly on gfx9. So gfx900 is fine; you can use the officially released PyTorch 2.x.
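For anyone who wants to verify this quickly, here is a minimal sketch that checks whether an installed ROCm build of PyTorch 2.x actually enumerates the gfx900 card (it assumes an official ROCm wheel of torch 2.x is already installed):

```python
# Quick check that the ROCm runtime enumerates the gfx900 GPU.
import torch

print(torch.cuda.is_available())           # True if the HIP runtime sees a usable GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # should report the Vega 10 (gfx900) card
```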
Now I wonder: if I use the GPU as passthrough on my Xen server, should I expect PCIe atomics issues because of the virtualization layer?
@brsh1
The reason I am asking is that I get the error "can't initialize nvml" when trying the version you suggested.
Just to make sure, which ROCm version should I be using? 5.6?
The latest ROCm 5.6 is just fine. If you want to work around the PCIe atomics problem, my suggestion is to roll back to pytorch-1.13.1; it can run SD properly. https://download.pytorch.org/whl/rocm5.2/torch-1.13.1%2Brocm5.2-cp310-cp310-linux_x86_64.whl
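If you take that route, a quick sanity check after installing the linked wheel with pip is to confirm that the ROCm build (and not a CPU or CUDA build) is the one being imported; a minimal sketch, assuming the wheel above plus a matching torchvision (0.14.x pairs with torch 1.13.x) are installed:

```python
# Confirm the rolled-back install is the ROCm 5.2 build of torch 1.13.1.
import torch

print(torch.__version__)   # expected to contain "+rocm5.2" for the linked wheel
print(torch.version.hip)   # set on ROCm builds, None on CUDA/CPU builds
assert "rocm" in torch.__version__, "this is not a ROCm build of PyTorch"
```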
Should I be using HSA_OVERRIDE_GFX_VERSION=10.3.0 for gfx900? torch.cuda.is_available() reports true, but the MNIST example and Stable Diffusion both fail to run; they get stuck at 100% CPU until I kill the process. Any ideas?
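A side note on debugging this: torch.cuda.is_available() only confirms that the runtime enumerates a device, it does not prove that kernels actually execute, which matches the 100% CPU hang described above. Also, HSA_OVERRIDE_GFX_VERSION=10.3.0 targets gfx1030-class (RDNA 2) parts; gfx900 is normally supported without an override. A minimal sketch to test real kernel execution:

```python
# Smoke test: does a GPU kernel actually launch and return a correct result?
# If this hangs at 100% CPU, the problem is kernel execution (e.g. PCIe atomics),
# not device detection.
import torch

dev = torch.device("cuda")
a = torch.randn(512, 512, device=dev)
b = torch.randn(512, 512, device=dev)
c = a @ b                      # launches a real GPU kernel
torch.cuda.synchronize()       # block until the kernel has finished
err = (c.cpu() - a.cpu() @ b.cpu()).abs().max().item()
print("max abs error vs CPU:", err)
```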
@xuhuisheng Many thanks for your work and description. It helped me a lot in getting the RX 580 working. With gfx803 and ROCm 5.6 I got a segmentation fault in the web UI, which seems to show that torch (v2.0.1-rc2) / vision (v0.15.2-rc2) do not work together with ROCm 5.6. ROCm 5.5.0 worked like a charm.
Hello, I am using the PyTorch 2.1.0a0 build you provided, but running Stable-Diffusion-webui also requires a torchaudio matching the torch version. How should I choose a torchaudio version suitable for this PyTorch? I tested torchaudio-2.1.0 and it reports an incompatibility; after I tried forcing the version number of PyTorch 2.1.0a0 to 2.1.0, the compatibility warning no longer appeared, but
@SLi-Man I also did an update for the newest PyTorch:
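As an aside, the torchaudio mismatch mentioned above can be inspected directly: release torchaudio wheels pin the torch version they were built against (typically an exact ==), so a locally built 2.1.0a0 torch does not satisfy torchaudio 2.1.0's requirement. A minimal sketch to print the installed versions and torchaudio's declared torch pin (assumes the packages are installed in the current environment):

```python
# Print installed torch/torchvision/torchaudio versions and the torch
# requirement declared in torchaudio's metadata, to see why 2.1.0a0 mismatches.
from importlib.metadata import PackageNotFoundError, requires, version

for pkg in ("torch", "torchvision", "torchaudio"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")

try:
    torch_pins = [r for r in (requires("torchaudio") or []) if r.startswith("torch")]
    print("torchaudio requires:", torch_pins)
except PackageNotFoundError:
    print("torchaudio: not installed")
```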
Hi
Will you also release this version?