CUDA out of memory - CUDA 12.3 #222
Are you trying the sample image or your own image?
I am trying it first on the provided sample pictures; I checked their dimensions.
I'm working on Linux, so I'm not sure how much shared memory I have; I just know I have about 8 GB of VRAM (if I can provide something more useful, please tell me how). I also installed CUDA 11.8, but I still get the CUDA out of memory error. I tried setting max_split_size_mb as in my original post, but it didn't help. Everything works fine when I run it in Google Colab. Here, only the mask gets saved; the actual output obviously doesn't. I already set the sample size to 1 instead of 4. Is there anything else I can do, or do I simply have too little GPU memory? I'm using an NVIDIA GeForce RTX 3050.
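For reference, a minimal sketch of the memory-saving settings being discussed here, assuming a plain PyTorch pipeline; `run_inference`, `model`, and `batch` are placeholder names, not identifiers from this project:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read at the first CUDA allocation,
# so set it before importing torch to be safe.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

import torch

def run_inference(model, batch):
    """Hypothetical inference helper; stands in for whatever this repo runs."""
    model = model.half().to("cuda")   # fp16 roughly halves weight/activation memory
    batch = batch.half().to("cuda")
    with torch.no_grad():             # no autograd graph -> much less memory at inference
        out = model(batch)
    torch.cuda.empty_cache()          # release cached blocks back to the driver
    return out
```

Reducing the sample/batch size to 1, as mentioned above, is usually the single biggest saving; the rest shaves off what remains.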
It started working when I killed process 869572, but only sometimes. I really don't know why, but sometimes it fails even with the exact same pictures it just worked with. I keep `watch nvidia-smi` running the whole time, and sometimes it uses even less memory when it fails than when it works.
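One possible reason nvidia-smi can look fine while the allocation still fails is fragmentation inside PyTorch's caching allocator: the pool it has reserved may have no contiguous block large enough, even though total usage looks low. A small diagnostic sketch (assuming a PyTorch setup, not code from this repo) that compares the driver's view with the allocator's:

```python
import torch

def report_gpu_memory(tag=""):
    """Print what the driver and the PyTorch caching allocator each see."""
    free, total = torch.cuda.mem_get_info()    # driver-level view (what nvidia-smi reflects)
    allocated = torch.cuda.memory_allocated()  # memory held by live tensors
    reserved = torch.cuda.memory_reserved()    # blocks PyTorch has cached, used or not
    gib = 1024 ** 3
    print(f"[{tag}] free {free / gib:.2f} / {total / gib:.2f} GiB | "
          f"allocated {allocated / gib:.2f} GiB, reserved {reserved / gib:.2f} GiB")
```

Calling this right before the failing step shows whether the free driver memory or the reserved-but-fragmented pool is the limiting factor.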
Interestingly enough, I have the same issue but with 12 GB of VRAM, and unlike yours it doesn't work no matter what I kill. What was the process you killed, and what CLI options did you use?
Unfortunately I can't help with issues on Linux. :(
Do you know roughly how much VRAM this takes up?
Hey, is it possible to run this code with CUDA 12.3? I installed the matching torch version for it, but I keep getting an error about CUDA being out of memory. I tried setting max_split_size_mb to 64, but it didn't help. Do I need CUDA 11.8 to run this code? I have 8 GB of VRAM, so I guess that's not the issue...?
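On the CUDA 12.3 vs 11.8 question: as far as I understand (this is general PyTorch behavior, not something specific to this project), the wheels ship their own CUDA runtime, so what matters is the version torch was built against and whether the driver is new enough, not the system toolkit. A quick sanity check:

```python
import torch

# The CUDA version that matters is the one torch was built against,
# not the toolkit installed on the machine (12.3 vs 11.8 here).
print("torch:", torch.__version__)
print("built against CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"device: {props.name}, {props.total_memory / 1024**3:.1f} GiB VRAM")
```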