Cannot decensor large video files #12
While testing the decensoring of a large video file (906MB, length: 22:16), I ran into an issue where the program stops working and throws an error saying it has reached a memory limit of about 450MB. Previous testing with a much lighter file (7.25MB, length: 0:27) showed that the program does work as intended (provided you use a workaround for issue #11).

This is far below what my machine should be capable of handling; I would have expected it to manage at least a few gigabytes, given that I have 32GB of RAM and 11GB of VRAM.

PC specs:

CPU-Z full PC specs:
specs.txt

Here's the relevant command-line output:

Full output:
output.txt

Comments
Hello, try closing out as many VRAM-consuming processes as possible.
Hi, splitting the video into segments works, it's just a pain in the ass to work with. Even then, a limit of only 450MB seems unjustified; there should be an option to increase it. Also, this program was the only one consuming a noticeable amount of VRAM at the time of testing.
Can you open the Task Manager's Performance tab and see what the dedicated GPU memory usage is? My 2060 Super uses 6.7 to 6.8GB of VRAM. Keep in mind that loading and propagating any frame or image through a convolutional neural net takes a massive amount of memory; that is not something I can change. Typical research GPUs have 12-16GB of VRAM available, which is why it is essential that you close all possible background apps. Otherwise, if it's not the VRAM allocation, it could be some general memory allocation issue with the long video. Because every frame of the video must be processed, I highly recommend trimming the video so the AI doesn't waste processing on uncensored clips.
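A minimal sketch of that trimming step, assuming ffmpeg is installed and on PATH (file names and timestamps are placeholders, not from this project):

```python
# Hedged sketch of the trimming step suggested above, not part of the
# project itself. Assumes ffmpeg is installed and on PATH; the file
# names and timestamps are placeholders.
import subprocess

def trim(src: str, dst: str, start: str, duration: str) -> None:
    # Seek to `start` in the input and copy `duration` worth of video.
    # "-c copy" avoids re-encoding, so this is fast and lossless, but
    # cut points will snap to the nearest keyframes.
    subprocess.run(
        ["ffmpeg", "-ss", start, "-i", src, "-t", duration, "-c", "copy", dst],
        check=True,
    )

trim("input.mp4", "clip_01.mp4", "00:03:10", "00:01:30")
```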
Here's what copying the Performance tab outputs:
The frustrating thing is that this error occurs after many hours, somewhere in the middle of the process. There isn't even a hint about what those 450MB refer to, since neither the GPU nor the RAM is particularly busy with this task.
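One way to pin down what that ~450MB ceiling actually refers to would be to log the process's own RAM alongside GPU memory while the job runs. A rough diagnostic sketch, assuming psutil is installed and nvidia-smi is on PATH (the PID is a placeholder):

```python
# Diagnostic sketch (not part of the project): poll the decensoring
# process's RAM alongside GPU memory to see which one actually hits a
# ceiling near 450MB. Assumes psutil is installed and an NVIDIA GPU
# with nvidia-smi on PATH; the PID below is a placeholder.
import subprocess
import time

import psutil

def watch(pid: int, interval: float = 5.0) -> None:
    proc = psutil.Process(pid)
    while proc.is_running():
        rss_mb = proc.memory_info().rss / 1024**2  # resident RAM in MB
        gpu_mb = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True,
        ).stdout.strip()
        print(f"RAM: {rss_mb:.0f} MB / GPU: {gpu_mb} MB")
        time.sleep(interval)

watch(12345)  # replace with the PID shown in Task Manager
```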
This is why you should edit the clips (Windows has a default video trimmer), because many of those hours are spent trying to decensor scenes that have nothing to censor. I don't know why your utilization is so low, but there are two phases to decensoring: the first phase uses ESRGAN and performs resizing to calculate a decensored approximation, then the second phase determines what area needs to be decensored. The first phase might be using your CPU, as there are some issues between this old ESRGAN architecture, CUDA, and the RTX Turing architecture. The second phase uses MaskRCNN, which I have optimized better for GPU usage. In either case, try out the Google Colab notebook for videos; you may get assigned a Tesla P100, which is quite powerful. If the ESRGAN workaround issue is still present, you can still edit the code from the Colab.
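For what it's worth, a quick way to check whether the framework sees the GPU at all — a sketch assuming the TensorFlow backend, where the exact call depends on the installed version:

```python
# Quick check of whether TensorFlow can see the GPU at all. This is a
# sketch assuming a TF 2.x install; on older TF 1.x builds,
# tf.test.is_gpu_available() is the rough equivalent.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
# An empty list here would mean the ESRGAN phase is running on the CPU,
# which would explain the low GPU utilization described above.
```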
Just to be clear: I have already done the decensoring by splitting the video, with both ESRGAN and DCP; DCP gives better results but is more time-consuming to work with. It's just annoying not to be able to put the whole thing in and simply wait.
In case you're interested in the results:
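Separately, for anyone hitting the same wall, a sketch of the split-process-rejoin workaround discussed above, assuming ffmpeg is on PATH (segment length, part count, and file names are placeholders):

```python
# Hedged sketch of the split-process-rejoin workaround, assuming ffmpeg
# is on PATH. Segment length, part count, and file names are
# placeholders; the decensoring tool itself runs in between.
import subprocess

# 1) Split into ~2-minute segments at keyframes, without re-encoding.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-c", "copy", "-f", "segment",
     "-segment_time", "120", "-reset_timestamps", "1", "part_%03d.mp4"],
    check=True,
)

# 2) ...decensor each part_*.mp4 with the tool of choice here...

# 3) Rejoin the processed parts with ffmpeg's concat demuxer, which
#    reads the part list from a plain text file.
with open("parts.txt", "w") as f:
    for i in range(12):  # adjust to the actual number of segments
        f.write(f"file 'part_{i:03d}.mp4'\n")
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "parts.txt",
     "-c", "copy", "rejoined.mp4"],
    check=True,
)
```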