Detectron in C++ #199
Some of Detectron requires Python ops to function. However, there is support for exporting a limited number of models to plain Caffe2 C++ ops at https://github.com/facebookresearch/Detectron/blob/master/tools/convert_pkl_to_pb.py These models are also available in the Caffe2 model zoo at https://github.com/caffe2/models/tree/master/detectron Hope that helps.
I would also like to run a Detectron CNN from C++ code (inference only). I tried to use my classification code to load and run a net downloaded from https://github.com/caffe2/models/tree/master/detectron.
1 - It seems to be a tensor size problem.
2 - In py-faster-rcnn (Caffe 1 + custom Python layers), we needed to create two blobs, "data" and "im_info".
3 - Did I miss any example program written in C++ for inference with Detectron?
Thanks for your great work :)
I solved problems 1 and 2.
@dbrazey Hi, I'm working on a C++ program to run a Detectron model, and I've hit a problem similar to your problem 1. Can you give me some suggestions? Thanks a lot.
Hello, I went through the same process as @dbrazey: first I played with CNNs for classification, but right now I am facing the same problem mentioned above. I tried to load e2e_faster_rcnn_R-50-C4_2x in Caffe2, but this error pops up: `RuntimeError: [enforce fail at generate_proposals_op.cc:205] im_info_tensor.dims() == (vector<TIndex>{num_images, 3}). Error from operator: ...` I tried to look at generate_proposals_op_test but I couldn't work it out. It would be very helpful if you could give me some suggestions about the problem. Thank you!
I think you get the error because you didn't create the "im_info" blob (or created it with the wrong size). In generate_proposals_op_test, the "im_info" blob is a float tensor of shape {num_images, 3}, which means each row holds (height, width, scale) for one image.
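For illustration, one row of "im_info" can be computed like this. This is a minimal sketch, not Detectron code: the helper name is mine, and the 800/1333 defaults mirror Detectron's default SCALES/MAX_SIZE config values, so adjust them to whatever your model was trained with.

```cpp
#include <algorithm>
#include <array>

// Compute the (height, width, scale) triple expected in the "im_info" blob.
// target_size / max_size correspond to Detectron's SCALES / MAX_SIZE config.
std::array<float, 3> make_im_info(int orig_h, int orig_w,
                                  float target_size = 800.f,
                                  float max_size = 1333.f) {
  float min_dim = static_cast<float>(std::min(orig_h, orig_w));
  float max_dim = static_cast<float>(std::max(orig_h, orig_w));
  float scale = target_size / min_dim;
  if (scale * max_dim > max_size)  // cap the longer side at max_size
    scale = max_size / max_dim;
  // The network sees the *resized* dimensions, not the original ones.
  return {orig_h * scale, orig_w * scale, scale};
}
```

For a 480x640 image this gives a scale of 800/480 and a resized height of 800; for a 500x1000 image the scale is capped by the 1333 limit on the longer side.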
Hi there, I wonder if the link https://github.com/facebookresearch/Detectron/blob/master/tools/convert_pkl_to_pb.py
+1
@dbrazey Can you please provide a code snippet showing how to run an object detection net (e.g. e2e_faster_rcnn_R-50-C4_1x or a similar network)? I was able to convert the models to .pb and to provide the "im_info" and "data" blobs to the workspace, but I still cannot run the network. Thanks!
@SerinaWei You have an example here of how to run it https://github.com/daquexian/Detectron/blob/27f5aee53785af99147a634344d58a70bfbd250e/tools/convert_pkl_to_pb.py#L501-L549
@dongmingsun Not yet, but I achieved it; I will open a PR as soon as my code is clean enough. On its own it will not be enough to train/run the model in C++ though, you would need to implement a few more things.
@gadcam I was able to use the .pb in Python and would like to run the model in C++. I have set the "im_info" and "data" blobs. What else do I need to implement to run it? If you can share your insight on this, I'd really appreciate it!
@SerinaWei Sorry, I read it too fast and did not notice that you were speaking about C++.
@gadcam I read leonard's tutorial, and it helps with classification in C++, but it doesn't look like it has any examples using a detection net.
@rbgirshick Can you please provide insight/examples on how to use the detection models (e2e_faster_rcnn_R-50-C4_1x or similar networks) in C++ in Detectron? I've been struggling for a while. Thanks!
@SerinaWei How do you provide the "im_info" and "data" blobs to the workspace? Thanks
Some code snippets. Actually it's not hard to find similar code in the tutorials (though, frankly speaking, Caffe2 tutorials are rare).
Use an all-zero fake input:
And if you want to read an image as input (I use OpenCV to read the image; SCALES, MAX_SIZE and FPN_COARSE_STRIDE have the same meaning as in Detectron's config.py, you can check config.py for documentation):
@daquexian Thanks
@daquexian I tried something like:

```cpp
pcontext = new caffe2::CUDAContext(option);
unique_ptr predict_net;
...
}
```

but it fails after being called with `terminate called after throwing an instance of 'caffe2::EnforceNotMet'`. Thanks
@daquexian Thank you so much for the code snippet! I was able to get it to work with the all-zero fake input! I think the key is the padding; I didn't do padding before.
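The padding mentioned here matters for FPN models: the input height and width must be multiples of the coarsest stride (FPN_COARSE_STRIDE, 32 by default), because the backbone repeatedly halves the feature map. A minimal sketch, with a helper name of my own choosing:

```cpp
#include <cstddef>
#include <vector>

// Zero-pad an NCHW float image buffer on the bottom/right so that height and
// width become multiples of `stride`. Returns the padded buffer; padded_h and
// padded_w receive the new dimensions.
std::vector<float> pad_to_stride(const std::vector<float>& img,
                                 int c, int h, int w, int stride,
                                 int* padded_h, int* padded_w) {
  int ph = (h + stride - 1) / stride * stride;  // round up to multiple
  int pw = (w + stride - 1) / stride * stride;
  std::vector<float> out(static_cast<size_t>(c) * ph * pw, 0.f);
  for (int ch = 0; ch < c; ++ch)
    for (int y = 0; y < h; ++y)
      for (int x = 0; x < w; ++x)
        out[(static_cast<size_t>(ch) * ph + y) * pw + x] =
            img[(static_cast<size_t>(ch) * h + y) * w + x];
  *padded_h = ph;
  *padded_w = pw;
  return out;
}
```

For example, a 3x30x50 input would be padded to 3x32x64 with stride 32; the "im_info" scale should still reflect the unpadded resize.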
@SerinaWei Are you using CPU? Can you please provide examples of how to use it? Thanks
@HappyKerry Yes, I am using CPU. Here is my code snippet (almost identical to daquexian's). I used the all-zero input to test the functionality:

```cpp
caffe2::NetDef _initNet, _predictNet;
...
ws.RunNetOnce(_initNet);
```
@SerinaWei @daquexian @orionr `terminate called after throwing an instance of 'caffe2::EnforceNotMet'`
@HappyKerry It seems that BatchPermutation only has a CPU implementation. You can register a CUDA operator using GPUFallbackOp yourself, as in https://github.com/pytorch/pytorch/blob/master/caffe2/sgd/lars_op_gpu.cu
@daquexian Is the maximum size you can scale to on Android for detection smaller than 1333? Thanks so much for your help!
What I used is smaller than 1333. You can try whatever you want.
@daquexian Good to know! I had to use a size smaller than 1333 as well; otherwise it just hangs.
I think I am starting to lose my mind over this, but I have tried countless times to run Caffe2's Detectron models from the model zoo in C++ with GPU, and I keep getting the same error:
If I try it on an FPN model, I get a different operator in the error message (like BBoxTransform). I have linked everything in my CMakeLists (caffe2, caffe2_gpu, detectron_ops_gpu and the CUDA libs). Here's even a code snippet:

```cpp
caffe2::DeviceOption option;
option.set_device_type(caffe2::CUDA);
option.set_cuda_gpu_id(0);
new caffe2::CUDAContext(option);
caffe2::NetDef init_model, predict_model;
CAFFE_ENFORCE(ReadProtoFromFile("init_net.pb", &init_model));
CAFFE_ENFORCE(ReadProtoFromFile("predict_net.pb", &predict_model));
init_model.mutable_device_option()->set_device_type(caffe2::CUDA);
predict_model.mutable_device_option()->set_device_type(caffe2::CUDA);
caffe2::Workspace workspace("tmp");
workspace.RunNetOnce(init_model);
caffe2::NetBase* net = workspace.CreateNet(predict_model); // line where it fails
```

Everything works if I run it on CPU, but it takes about 20 seconds of inference time on an i7 quad core @ 3.4 GHz, so I would like to run it on an NVIDIA GTX 1080 Ti. Does anybody have a clue what is going on here?
@ferasboulala How did you get these .pb files?
I exported them with the
@gadcam If anyone could share working example code (and a CMakeLists while we're at it) for GPU in C++, that'd be amazing.
@ferasboulala Same problem here... After much research, I can't find any C++ CUDA examples... Are we the only ones using C++?
@ferasboulala On CPU it is OK, but on GPU it happens after calling CreateNet(predict_net_def);
Hello,
I would like to use Detectron in C++. Is there any way to do it?
Thanks.