I'm running the SDXL-LoRA-PPS.ipynb notebook on Paperspace with an RTX 5000 and get the following error on the Train LoRA step:
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
Thanks for the quick response! It worked on a P6000 with 24 GiB of VRAM. The Save_VRAM option is enabled, and its comment says 10 GB of VRAM should be enough for LoRA_Dim = 64, yet on the 16 GB P5000 I still get "OutOfMemoryError: CUDA out of memory". Are there any other settings I can tweak to fit into 16 GB of VRAM?
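One knob worth understanding is LoRA_Dim itself: a LoRA adapter on a d x k weight adds an A (d x r) and a B (r x k) matrix, so its trainable-parameter count, and the optimizer state that goes with it, scales linearly with the rank r. The sketch below is a rough back-of-the-envelope estimate only; the function names and the 1280 x 1280 layer shape are illustrative assumptions, not taken from the notebook, and activations usually dominate training memory, so lowering the rank alone may not close a multi-GB gap.

```python
# Back-of-the-envelope estimate of LoRA adapter size vs. rank.
# Assumes one A (d x r) and one B (r x k) matrix per adapted weight;
# the notebook's trainer may adapt different layers with different shapes.

def lora_params(d: int, k: int, r: int) -> int:
    """Parameters added by one rank-r LoRA pair on a d x k weight."""
    return d * r + r * k

def adapter_bytes(d: int, k: int, r: int, bytes_per_param: int = 2) -> int:
    """Adapter weight memory in bytes (fp16 -> 2 bytes per parameter)."""
    return lora_params(d, k, r) * bytes_per_param

# Halving the rank halves the adapter's parameter count (hypothetical
# 1280 x 1280 layer, chosen only for illustration):
d, k = 1280, 1280
assert lora_params(d, k, 32) * 2 == lora_params(d, k, 64)
```

Since optimizer state (e.g. Adam's two moments per trainable parameter) scales the same way, dropping LoRA_Dim from 64 to 32 roughly halves that slice of the budget, which may or may not be enough on a 16 GB card.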