Memory Leak on inference #1418
Comments
Hi @felixdittrich92, unfortunately this problem still occurs.
Then when I do ...
Hm, yeah, I see. Does this leak only exist for the CRNN models? Could you also test it with ...
@TomekPro Have you tried passing the paths as a list to ...? After seeing your plots I agree that this is still a bug (maybe in PyTorch), so this is only an idea you could try in the meanwhile.
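For reference, a minimal sketch of that idea, assuming the usual `DocumentFile.from_images` / `ocr_predictor` API; the image paths below are placeholders:

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

model = ocr_predictor(pretrained=True)

# One call with all paths lets the predictor batch the pages internally,
# instead of building a new document object on every loop iteration.
pages = DocumentFile.from_images(["page_0.jpg", "page_1.jpg", "page_2.jpg"])
result = model(pages)
```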
And another thing you can try: #1356 (comment)
You can also disable multiprocessing, which should lower the RAM usage a bit.
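A short sketch of what disabling multiprocessing could look like; the `DOCTR_MULTIPROCESSING_DISABLE` environment variable is an assumption about doctr's configuration and may differ between versions:

```python
import os

# Assumption: doctr honours DOCTR_MULTIPROCESSING_DISABLE.
# It has to be set before the predictor is built so that
# pre-processing stays single-threaded.
os.environ["DOCTR_MULTIPROCESSING_DISABLE"] = "TRUE"

from doctr.models import ocr_predictor

model = ocr_predictor(pretrained=True)
```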
But yeah, I think we need to profile it in more detail again to find the real bottleneck.
@felixdittrich92 finally, three things are needed to fix this memory leak:
Thanks for your help :)
Nice 👍 Btw., using a smaller detection model will further reduce the memory usage, for example ...
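A sketch of picking lighter architectures; the `db_mobilenet_v3_large` and `crnn_mobilenet_v3_small` names are assumptions based on the model zoo and may not be available in every release:

```python
from doctr.models import ocr_predictor

# Swap the default ResNet-based detector for MobileNet variants
# to cut the memory footprint of the predictor.
model = ocr_predictor(
    det_arch="db_mobilenet_v3_large",
    reco_arch="crnn_mobilenet_v3_small",
    pretrained=True,
)
```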
This issue was moved to a discussion.
You can continue the conversation there.
Bug description
Running doctr on multiple images in a loop causes a massive memory leak.
Code snippet to reproduce the bug
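A minimal sketch of the kind of loop that triggers the behaviour, assuming the standard `ocr_predictor` / `DocumentFile` API; `sample.jpg` is a placeholder path:

```python
# test.py -- run the same image repeatedly and watch the process RSS grow
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

model = ocr_predictor(pretrained=True)

for _ in range(100):
    # Each iteration reads and runs one image; memory usage keeps increasing.
    doc = DocumentFile.from_images("sample.jpg")
    result = model(doc)
```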
Run in the following way:
mprof run python test.py
mprof plot
When I modified the loop so that the model was also initialized inside the loop, the problem was still present.
Diving into the code, it seems that the problem is caused by the actual PyTorch inference, for example here:
Error traceback
As shown in the plot above.
Environment
Tested in an empty Poetry environment with just two packages installed:
pip install "python-doctr[torch]"
pip install memory_profiler
python 3.8.10
python-doctr 0.7.0
Ubuntu 20.04
Running on cpu
Deep Learning backend
is_tf_available: False
is_torch_available: True