
Memory issue crnn #1356

Closed
NikitaPomies opened this issue Oct 18, 2023 · 3 comments · Fixed by #1357

Labels
type: bug Something isn't working

Comments


NikitaPomies commented Oct 18, 2023

Bug description

Hello everyone,

I am running a function called ocr_image:

[Screenshot from 2023-10-18 22:58: the ocr_image function]

with the model being loaded by the following code (I use the CPU):

```python
import torch
from doctr.models import crnn_vgg16_bn, db_resnet50, ocr_predictor

self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Load the detection model from a local checkpoint
det_model = db_resnet50(pretrained=False, pretrained_backbone=False)
det_params = torch.load(path_model_detection, map_location=self.device)
det_model.load_state_dict(det_params)

# Load the recognition model from a local checkpoint
reco_model = crnn_vgg16_bn(pretrained=False, pretrained_backbone=False)
reco_params = torch.load(path_model_recognition, map_location=self.device)
reco_model.load_state_dict(reco_params)

self.model = ocr_predictor(
    det_arch=det_model,
    reco_arch=reco_model,
    pretrained=False,
    assume_straight_pages=True,
).to(self.device)
```
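For context, ocr_image then presumably just feeds a document through this predictor, along these lines (a minimal sketch, not the actual function from the screenshot; the function body and image path are assumptions):

```python
from doctr.io import DocumentFile

# Hypothetical sketch of ocr_image: load an image and run the predictor on it.
def ocr_image(self, image_path: str):
    doc = DocumentFile.from_images(image_path)  # read the image as a document
    return self.model(doc)  # run detection + recognition
```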

The memory is allocated in the CRNN class, in the forward function, when "features" is created (see line 223).
The memory should be freed once we leave the local scope of forward, right?

[Screenshot from 2023-10-18 22:59: the CRNN forward function, around line 223]
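(Paraphrasing the relevant part from the screenshot; this is a sketch of the shape of the code, not verbatim doctr source:)

```python
# Sketch of the relevant part of CRNN.forward (paraphrased, not verbatim doctr source)
def forward(self, x: torch.Tensor):
    features = self.feat_extractor(x)  # the allocation around line 223
    # ... features are then reshaped and passed to the recurrent decoder ...
```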

The memory is never freed, and running ocr_image multiple times results in a huge increase in RAM usage (a minimal way to observe this is sketched below).

Does anyone have any insights on this?
I am using the latest doctr main version and torch==1.12.0+cpu on Linux-5.15.0-58-generic-x86_64-with-glibc2.35
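For anyone trying to reproduce this, one way to watch the resident set size grow across calls (a sketch under assumptions: the predictor from above is available as self.model, and "page.jpg" is a placeholder path):

```python
import os

import psutil  # third-party package, used only to read the process RSS
from doctr.io import DocumentFile

process = psutil.Process(os.getpid())
doc = DocumentFile.from_images("page.jpg")  # placeholder image path

for i in range(10):
    _ = self.model(doc)
    rss_mib = process.memory_info().rss / 1024**2
    print(f"iteration {i}: RSS = {rss_mib:.1f} MiB")  # keeps climbing on affected versions
```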

Code snippet to reproduce the bug

"""

Error traceback

"""

Environment

"""

Deep Learning backend

is_tf_available: False
is_torch_available: True

NikitaPomies added the type: bug label on Oct 18, 2023
@felixdittrich92
Contributor

Hi @NikitaPomies 👋,

Thanks for the report.
I was able to reproduce the behaviour and will open a PR to fix this soon. 👍

@NikitaPomies
Author

> Hi @NikitaPomies 👋,
>
> Thanks for the report.
> I was able to reproduce the behaviour and will open a PR to fix this soon. 👍

Thanks!
Just for the record, I was able to avoid the high memory usage by setting the environment variable

ONEDNN_PRIMITIVE_CACHE_CAPACITY=1

as suggested in pytorch/pytorch#29893 (one way to apply it is sketched below).
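A minimal sketch of the workaround: since the variable is read by oneDNN, setting it before torch is imported is the safe option.

```python
import os

# Cap the oneDNN primitive cache at a single entry, as suggested in
# pytorch/pytorch#29893; set it before importing torch so oneDNN picks it up.
os.environ["ONEDNN_PRIMITIVE_CACHE_CAPACITY"] = "1"

import torch  # noqa: E402  (deliberately imported after the env var is set)
```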

@felixdittrich92
Contributor

Thanks for the response 👍
