
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution #17

Open
ww-9 opened this issue Sep 11, 2023 · 5 comments
ww-9 commented Sep 11, 2023

I'm running the SDXL-LoRA-PPS.ipynb notebook on Paperspace with an RTX 5000 and get the following error at the Train LoRA step:
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
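For context, this cuDNN error is often an out-of-memory condition in disguise. A minimal PyTorch sketch (generic, not part of the notebook) to check how much VRAM is actually free before running the Train LoRA cell:

```python
import torch

# Quick check of free vs. total VRAM on the current GPU; if "free" is only
# a few GiB, the cuDNN error is most likely a memory problem rather than a
# genuinely missing algorithm.
free, total = torch.cuda.mem_get_info()
print(torch.cuda.get_device_name(0))
print(f"free: {free / 2**30:.1f} GiB / total: {total / 2**30:.1f} GiB")

# Disabling the cuDNN autotuner keeps it from benchmarking memory-hungry
# convolution algorithms, at a small speed cost.
torch.backends.cudnn.benchmark = False
```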

TheLastBen (Owner)

Try a different GPU.

ww-9 (Author) commented Sep 11, 2023

Thanks for the quick response! It worked on a P6000 with 24 GiB of VRAM. The Save_VRAM option is enabled, and its comment says that 10 GB of VRAM should be enough for LoRA_Dim = 64, yet on a 16 GB P5000 I also get "OutOfMemoryError: CUDA out of memory". Are there any other settings I can tweak to fit into 16 GB of VRAM?
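Some generic PyTorch/diffusers memory-saving levers that are commonly tried in this situation. This is only a sketch: the bitsandbytes optimizer, the SDXL UNet checkpoint used here, and the placeholder variables are assumptions, and none of it is guaranteed to match what the notebook's Save_VRAM option actually toggles.

```python
import os

# Reduce CUDA allocator fragmentation; set before the first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
import bitsandbytes as bnb
from diffusers import UNet2DConditionModel

# Placeholder model load; the notebook builds its own UNet, this just
# stands in for it so the snippet is self-contained.
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
).to("cuda")

# Gradient checkpointing trades extra compute for a large drop in
# activation memory during the backward pass.
unet.enable_gradient_checkpointing()

# An 8-bit optimizer roughly quarters the optimizer-state memory
# compared with standard AdamW.
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=1e-4)

# Other levers: batch size 1 with gradient accumulation, a lower training
# resolution, mixed precision (fp16/bf16), or a smaller LoRA_Dim
# (e.g. 64 -> 32), which shrinks the trainable parameters and their
# optimizer state.
train_batch_size = 1
gradient_accumulation_steps = 4
```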

TheLastBen (Owner)

Enabling Save_VRAM should fit it in 10 GB; I'll check it out.

kruttik-lab49

I am running into the same issue. Any help is much appreciated.

TheLastBen (Owner)

It should be working now.
