Use torch.inference_mode vs torch.no_grad for predictions #1316
Comments
I am more than happy to create a PR for this feature too.
Hi @thetonus 👋, thanks for the request, and sure, feel free to open a PR 😊
@thetonus Part of this PR would be to add a short benchmark of inference latency and memory usage compared to the current state.

Code to modify:
- doctr/models/predictor/pytorch.py (line 62 in 7ab0ece)
- scripts/detect_artefacts.py (line 40 in 7ab0ece)
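As a rough illustration of the kind of micro-benchmark that comment asks for, the sketch below times forward passes under each context manager. The model, input size, and iteration count are placeholders, not doctr code; real numbers would come from a doctr predictor, and peak GPU memory could be compared with torch.cuda.reset_peak_memory_stats() / torch.cuda.max_memory_allocated() when running on a GPU.

```python
import time

import torch
import torch.nn as nn

# Dummy stand-in for a doctr predictor; shapes and loop count are arbitrary.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1),
).eval()
x = torch.randn(4, 3, 256, 256)


def bench(ctx_factory, n_iters=20):
    # One warm-up pass, then time n_iters forward passes under the given context.
    with ctx_factory():
        model(x)
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
    return (time.perf_counter() - start) / n_iters


print(f"torch.no_grad:        {bench(torch.no_grad) * 1e3:.1f} ms/forward")
print(f"torch.inference_mode: {bench(torch.inference_mode) * 1e3:.1f} ms/forward")
```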
🚀 The feature
I think it would be a good idea to create a decorator (similar to the one in YOLOv8) that automatically applies the correct context manager for forward passes in code using the torch backend.
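A minimal sketch of what such a decorator could look like; the name smart_inference_mode and the hasattr-based version check are illustrative assumptions, not existing doctr or torch API beyond torch.inference_mode / torch.no_grad themselves:

```python
import functools

import torch


def smart_inference_mode(fn):
    """Run the wrapped prediction method under torch.inference_mode when
    available (torch>=1.9.0), falling back to torch.no_grad otherwise."""

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # hasattr check keeps compatibility with older torch versions
        # without needing to parse torch.__version__.
        ctx = torch.inference_mode() if hasattr(torch, "inference_mode") else torch.no_grad()
        with ctx:
            return fn(*args, **kwargs)

    return wrapper
```

The forward call in doctr/models/predictor/pytorch.py could then, presumably, be decorated with @smart_inference_mode instead of wrapping its body in torch.no_grad() directly.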
Motivation, pitch
Since torch>=1.9.0, there is a new context manager for inference, torch.inference_mode. The PyTorch docs note that code run under this mode gets better performance by disabling view tracking and version counter bumps, so inference would be both faster and more memory-efficient than with torch.no_grad.
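Usage is identical to torch.no_grad; only the context manager changes. The toy model below is just an example, not doctr code:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2).eval()
x = torch.randn(1, 4)

# Runs the forward pass with autograd fully disabled; compared to no_grad,
# inference_mode also skips view tracking and version-counter bumps.
with torch.inference_mode():
    out = model(x)

print(out.requires_grad)  # False
```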
Alternatives
None. Since docTR is backwards compatible with older torch versions, there is currently no way to dynamically choose between torch.inference_mode and torch.no_grad.

Additional context
No response