
Use torch.inference_mode vs torch.no_grad for predictions #1316

Closed
thetonus opened this issue Sep 14, 2023 · 3 comments · Fixed by #1323
Labels
framework: pytorch Related to PyTorch backend module: models Related to doctr.models type: enhancement Improvement

Comments

@thetonus

🚀 The feature

I think it would be a good idea to create a decorator (similar to the one in Yolov8) that automatically applies the correct context manager for forward passes for code using the torch backend.

Motivation, pitch

Since torch>=1.9.0, PyTorch provides a dedicated context manager for inference: torch.inference_mode. It makes inference faster and more memory-efficient than torch.no_grad alone.

From the PyTorch docs:

Code run under this mode gets better performance by disabling view tracking and version counter bumps. Note that unlike some other mechanisms that locally enable or disable grad, entering inference_mode also disables forward-mode AD.
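The decorator proposed above could be sketched roughly as follows. This is a hypothetical illustration, not docTR's actual implementation; the names `disable_grad` and `predict` are made up, and the try/except around the torch import is only there so the sketch also runs where torch is not installed.

```python
import functools

try:
    import torch
    # Prefer torch.inference_mode (torch>=1.9), fall back to torch.no_grad
    # on older versions via getattr.
    _no_grad_ctx = getattr(torch, "inference_mode", torch.no_grad)
except ImportError:
    # Illustrative fallback so the sketch runs without torch installed.
    from contextlib import nullcontext as _no_grad_ctx


def disable_grad(fn):
    """Run `fn` with autograd disabled, using inference_mode when available."""

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        with _no_grad_ctx():
            return fn(*args, **kwargs)

    return wrapper


@disable_grad
def predict(x):
    # Stand-in for a model forward pass.
    return x * 2
```

Applied to the predictors' forward methods, this would replace the hard-coded `@torch.no_grad()` decorators with a single version-aware one.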

Alternatives

None. If docTR needed to remain compatible with torch versions older than 1.9, torch.inference_mode could not be used unconditionally; a decorator that falls back to torch.no_grad at runtime sidesteps that constraint.

Additional context

No response

@thetonus thetonus added the type: enhancement Improvement label Sep 14, 2023
@thetonus
Author

I am more than happy to create a PR for this feature too.

@felixdittrich92
Contributor

Hi @thetonus 👋,

Thanks for the request, and sure, feel free to open a PR 😊
We can use the decorator directly because docTR requires torch>=1.12, so this should be fine 👍

@felixdittrich92 felixdittrich92 added module: models Related to doctr.models framework: pytorch Related to PyTorch backend type: new feature New feature and removed type: new feature New feature labels Sep 15, 2023
@felixT2K
Contributor

felixT2K commented Sep 15, 2023

@thetonus Part of this PR should be a short benchmark comparing inference latency and memory usage against the current state.

Code to modify: the two existing `@torch.no_grad()` decorators on the predictors' forward passes.
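The requested latency benchmark could be sketched generically as below. The helper `bench` is hypothetical (not part of docTR or PyTorch); it takes any context-manager factory, so `torch.no_grad` and `torch.inference_mode` can be timed on the same workload.

```python
import time
from contextlib import nullcontext


def bench(ctx_factory, fn, iters=100):
    """Return the total wall time of `iters` calls to `fn`,
    each executed under a fresh `ctx_factory()` context."""
    start = time.perf_counter()
    for _ in range(iters):
        with ctx_factory():
            fn()
    return time.perf_counter() - start


# With torch installed, one would compare something like:
#   bench(torch.no_grad, lambda: model(inp))
#   bench(torch.inference_mode, lambda: model(inp))
# where `model` and `inp` are the predictor and a sample batch (assumptions).
```

Memory usage would need a separate measurement (e.g. peak allocated memory around the call), which this sketch does not cover.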
@thetonus thetonus changed the title USe torch.inference_mode vs torch.no_grad for predictions Use torch.inference_mode vs torch.no_grad for predictions Sep 15, 2023
@felixdittrich92 felixdittrich92 linked a pull request Sep 19, 2023 that will close this issue
3 participants