How to save Detectron model as Vanilla Pytorch model? #4589
Hi Mahesh. I have the same problem. Did you find any solution for that?
@mehi64 Yes, the model we save is already a PyTorch model, but it supports single-image inference only. See the BUILD CONFIG file and the MODEL CLASS. This worked for me. You can do the inference as shown in:
Explainability using GRADCAM
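The wrapper code linked above is not quoted in the thread. As a hedged sketch only: a minimal single-image wrapper of the kind described (the name `TorchModel` comes from a later comment; the input format assumes a standard detectron2 `GeneralizedRCNN`, which consumes a list of per-image dicts) might look roughly like this:

```python
import torch


class TorchModel(torch.nn.Module):
    """Thin single-image wrapper around a detectron2 model (sketch)."""

    def __init__(self, d2_model):
        super().__init__()
        self.model = d2_model  # e.g. a detectron2 GeneralizedRCNN

    def forward(self, image):
        # detectron2 models consume a list of per-image dicts;
        # `image` is a CHW float tensor in the model's expected scale.
        return self.model([{"image": image}])
```

This keeps the detectron2 model underneath, which is why (as noted later in the thread) detectron2 is still needed at inference time.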
@deshwalmahesh @mehi64 I am in the same boat. I have a couple of trained image segmentation models (MaskDINO and Mask2Former) that I trained using the detectron2 framework, and I have the final trained model as a pth file (model_final.pth) obtained from training. Now I want to run this model in a Docker container that only has PyTorch installed, not detectron2. Is there a way to take these models and run them in pure PyTorch without requiring detectron2? Any help on this is highly appreciated. Thanks.
@judahkshitij You can load the Detectron
Thanks, @deshwalmahesh, for sharing the code. I have a question: the class (TorchModel) you've created is also importing functions from detectron2. Does this mean that during inference we will also need detectron2?
Hello everyone, one year later I have the same question as @rameshjes
Sup everyone. I have also looked for a solid solution to this problem. Your best bet is to convert the detectron2 model with the script created exactly for this case: detectron2/tools/deploy/export_model.py. Just convert the detectron2 model to TorchScript. After the conversion, this model will output results in a different format than, for example, GeneralizedRCNN, so you will have to look around in the source code and GitHub issues to format the output the same way as before the conversion. Also, the resulting models can have some underlying issues; mine, for example, takes up twice as much VRAM as before the conversion. Converting a detectron2 model to ONNX is even more difficult.
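A rough sketch of that workflow (the flags follow tools/deploy/export_model.py as I understand it; check the script's --help, and all paths/filenames here are placeholders):

```python
import torch

# One-time export, in an environment that has detectron2 installed:
#
#   python tools/deploy/export_model.py \
#       --config-file config.yml \
#       --export-method tracing --format torchscript \
#       --output ./deploy \
#       MODEL.WEIGHTS model_final.pth MODEL.DEVICE cpu
#
# The exported file (e.g. deploy/model.ts) then loads in a
# torch-only container, with no detectron2 import required:


def load_exported(path: str) -> torch.jit.ScriptModule:
    """Load an exported TorchScript model using plain PyTorch only."""
    model = torch.jit.load(path, map_location="cpu")
    model.eval()
    return model
```

As the comment above notes, the traced model's output format differs from detectron2's `GeneralizedRCNN` outputs, so downstream code usually needs a small adapter.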
I have a Faster-RCNN model trained with Detectron2. Model weights are saved as model.pth. I have my config.yml file, and there are a couple of ways to load this model. You can also get predictions from this model individually, as given in the official documentation:
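For reference, the two usual ways to load such a checkpoint with detectron2 itself look roughly like this (a sketch; the file names come from the question, and `input.jpg` is a placeholder):

```python
import cv2
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file("config.yml")
cfg.MODEL.WEIGHTS = "model.pth"

# Way 1: DefaultPredictor handles preprocessing and postprocessing.
predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # expects a BGR numpy array

# Way 2: build the raw model and load the weights explicitly.
model = build_model(cfg)
DetectionCheckpointer(model).load("model.pth")
model.eval()
```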
Running the following commands gives you:
Problem: I want to use GradCAM for model explainability, and it uses pytorch models as given in this tutorial. How can I turn a detectron2 model into a vanilla pytorch model? I have tried:
but obviously, I'm getting errors due to the different layer names and sizes etc.
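One way to see where the mismatch comes from is to inspect the checkpoint with plain PyTorch. This is a minimal sketch, assuming a standard detectron2 checkpoint layout (weights nested under a "model" key); the function name is illustrative:

```python
import torch


def inspect_checkpoint(path: str, limit: int = 10):
    """List the first few parameter names/shapes in a checkpoint."""
    ckpt = torch.load(path, map_location="cpu")
    # detectron2 nests the state dict under "model"; other keys hold
    # metadata such as the training iteration.
    state = ckpt.get("model", ckpt)
    # Comparing these names against a torchvision model's state_dict()
    # shows why a naive load_state_dict fails: the layer names (and
    # sometimes shapes) simply do not line up.
    return [(name, tuple(t.shape)) for name, t in list(state.items())[:limit]]


# e.g. inspect_checkpoint("model.pth")
```

Loading such weights into a torchvision Faster R-CNN would require an explicit key-by-key mapping between the two naming schemes.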