This repository has been archived by the owner on Nov 21, 2023. It is now read-only.

Save model for inference. How to generate *.prototxt? #373

Closed
IgorKasianenko opened this issue Apr 17, 2018 · 1 comment

Comments

@IgorKasianenko

Hello, my goal is to run Mask R-CNN inference independently, without depending on the Detectron code base.

My first approach was to convert the model to *.pb, but at the moment FPN is not supported, even though the caffe2 team is working on it.

Meanwhile, I'd like to run the model in C++, following the NVIDIA guide. Their sample uses Faster R-CNN from py-faster-rcnn, which has been deprecated and now points to the Detectron repo. Nevertheless, py-faster-rcnn ships a very useful file, test.prototxt, describing all the layers. That file is needed to write C++ plugins for the custom layers that can't be translated automatically.

My question is: how can I generate or find a maskrcnn.prototxt file for Detectron, for the two variants, VGG16 and ResNet101?

I assume that in the ResNet101 case it would be much bigger, as Detectron not only adds masking to Faster R-CNN but also adds Feature Pyramid Networks.

@rbgirshick
Contributor

The only viable route is to use the convert_pkl_to_pb.py script, but as you noted it does not yet support the full set of models that Detectron can train.
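For reference, the conversion step might be invoked roughly as follows. This is a sketch only: the config path and weights path are placeholders, and the flags are assumptions based on the usual Detectron tool conventions (`--cfg` plus `KEY VALUE` config overrides), so check `python tools/convert_pkl_to_pb.py --help` in your checkout for the exact arguments.

```shell
# Hypothetical invocation: converts trained Detectron weights (.pkl)
# into caffe2 protobuf nets (.pb). Paths and flags are illustrative.
python tools/convert_pkl_to_pb.py \
    --cfg configs/12_2017_baselines/e2e_faster_rcnn_R-50-C4_1x.yaml \
    --out_dir ./converted_model \
    TEST.WEIGHTS /path/to/model_final.pkl
```

Note that the resulting `*.pb` files are serialized caffe2 NetDef protobufs, i.e. the binary counterpart of the human-readable prototxt text format, so a text dump of the net definition can in principle be obtained from them with protobuf's text-format tooling.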
