-
Hi @TomekPro 👋, …
-
Hi @felixdittrich92, unfortunately this problem still occurs.
Then when I do …
-
Moving to torch 2.1 CPU-only: …
-
Mh yeah, I see. Does this leak only exist for the CRNN models? Could you also test it with …
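For reference, a minimal sketch of how such a comparison could look with docTR's `ocr_predictor` (the architecture names and the input file are only examples from the model zoo, not something prescribed in this thread):

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

# Build a predictor with a non-CRNN recognition head to compare memory behaviour.
# "vitstr_small" is just one example; any reco_arch from the docTR model zoo works here.
predictor = ocr_predictor(
    det_arch="db_resnet50",
    reco_arch="vitstr_small",
    pretrained=True,
)

doc = DocumentFile.from_images(["sample_page.png"])  # hypothetical input file
result = predictor(doc)
```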
-
@TomekPro Have you tried to pass the paths as a list to …? I agree after seeing your plots that it's still a bug (maybe in PyTorch), so this is only an idea you could try in the meanwhile.
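The comment is cut off before naming the target, but `DocumentFile.from_images` does accept a list of paths, so a sketch of that idea (the paths are hypothetical) could look like:

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

predictor = ocr_predictor(pretrained=True)

# The suggestion, as far as it can be reconstructed, is to hand all paths over at once
# instead of constructing a new document per file inside a loop.
image_paths = ["page_001.png", "page_002.png", "page_003.png"]  # hypothetical paths
doc = DocumentFile.from_images(image_paths)
result = predictor(doc)
```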
-
And another thing you can try: #1356 (comment)
-
You can also disable multiprocessing, which should also lower the RAM usage a bit.
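The environment variable for this is named in the fix posted later in this thread; a minimal sketch of setting it from Python before docTR is imported (exporting it in the shell works just as well):

```python
import os

# Must be set before doctr spins up any workers; the variable name comes from the
# fix posted further down in this thread.
os.environ["DOCTR_MULTIPROCESSING_DISABLE"] = "TRUE"

from doctr.models import ocr_predictor

predictor = ocr_predictor(pretrained=True)
```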
-
But yeah, I think we need to profile it in more detail again to find the real bottleneck.
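As a generic starting point (not a docTR-specific tool, and the input paths below are hypothetical), Python's built-in `tracemalloc` can show which allocations grow between OCR calls:

```python
import tracemalloc

from doctr.io import DocumentFile
from doctr.models import ocr_predictor

predictor = ocr_predictor(pretrained=True)
tracemalloc.start()

baseline = tracemalloc.take_snapshot()
for path in ["page_001.png", "page_002.png"]:  # hypothetical inputs
    predictor(DocumentFile.from_images([path]))

snapshot = tracemalloc.take_snapshot()
# Show where allocations grew between the two snapshots.
for stat in snapshot.compare_to(baseline, "lineno")[:10]:
    print(stat)
```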
-
@felixdittrich92 finally, three things are needed to fix this memory leak:
1. export DOCTR_MULTIPROCESSING_DISABLE=TRUE
2. export ONEDNN_PRIMITIVE_CACHE_CAPACITY=1
3. pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cpu
Thanks for your help :)
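For completeness, a sketch of applying the two environment variables from this fix in-process (the third step, installing the CPU-only torch 2.1.1 wheels, happens at install time; whether setting the oneDNN variable this late is early enough depends on the setup, so the shell exports above are the conservative option):

```python
import os

# Set before doctr/torch are imported, mirroring the two exports from the fix above.
os.environ["DOCTR_MULTIPROCESSING_DISABLE"] = "TRUE"
os.environ["ONEDNN_PRIMITIVE_CACHE_CAPACITY"] = "1"

from doctr.io import DocumentFile
from doctr.models import ocr_predictor

predictor = ocr_predictor(pretrained=True)
```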
-
Nice 👍 Btw. using a smaller detection model will again reduce the memory usage, for example …
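The thread does not name a specific model here; as an illustration only, the MobileNet-based backbones from the docTR model zoo are smaller than the default ResNet ones:

```python
from doctr.models import ocr_predictor

# Smaller MobileNet-based architectures, picked here purely as an example of
# "a smaller detection model"; the comment above does not name one.
predictor = ocr_predictor(
    det_arch="db_mobilenet_v3_large",
    reco_arch="crnn_mobilenet_v3_small",
    pretrained=True,
)
```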
-
Converted to Q&A for other people who are facing the same issue :)
-
The same case happened to me when using docTR OCR for a long period of time with the PyTorch models, wrapping the model in the …
I will try @TomekPro's solution and report back; however, the issue still persists on version 0.8.0.
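The comment is cut off before saying what the model was wrapped in. A common mitigation for growing memory during pure inference, offered here only as an assumption and not necessarily what this user meant, is running the predictor inside `torch.inference_mode()` so no autograd state is retained between calls:

```python
import torch

from doctr.io import DocumentFile
from doctr.models import ocr_predictor

predictor = ocr_predictor(pretrained=True)
doc = DocumentFile.from_images(["page_001.png"])  # hypothetical input

# Assumption: wrap inference so no autograd graph or gradient buffers are kept around.
with torch.inference_mode():
    result = predictor(doc)
```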
-
I've used @TomekPro's method for the past week in a production environment, with around 1000-1500 OCR requests per day; the versions are specified below: torch~=2.2.1. The memory leak has been significantly reduced, but it is still present, and the VRAM leak stays the same.
Not sure if this is to be expected or not, any suggestions? The only other solutions that might work are PyTorch …
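Since the comment also mentions a VRAM leak and trails off here, a generic sketch of releasing cached GPU memory between requests may be what was meant; this is an assumption, not a fix confirmed in this thread, and it only returns cached blocks rather than curing a true leak:

```python
import gc

import torch


def release_gpu_memory() -> None:
    """Best-effort cleanup between OCR requests (assumption: CUDA backend in use)."""
    gc.collect()
    if torch.cuda.is_available():
        # Returns cached blocks to the driver; it does not fix a genuine leak,
        # but keeps reported VRAM usage from growing monotonically.
        torch.cuda.empty_cache()
```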