Hi,
I have used the Caffe framework to train a text detection model in GPU mode. Four such models run simultaneously for four different use cases. Running more than two models gives me a CUDA out-of-memory error, so I upgraded my AWS EC2 instance to g4dn.12xlarge, which costs much more and is going beyond my budget.
So I am trying to run inference with this GPU-trained Caffe model in CPU mode and get the output, so that my cost is minimal. Please note that training the model in GPU mode is already done; I just want to run this model in CPU mode (or on a CPU EC2 instance) and get the output.
I tried the following:
Step 1:
Made the changes in the file "psroi_pooling_layer.cpp" as described in the link below and saved it.
https://github.com/daijifeng001/caffe-rfcn/pull/10/files
Step 2:
Set "caffe.set_mode_cpu()" and ran the script and getting error like "check failure stack trace"
Changes in code:
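Roughly, the relevant part of my inference script looks like this (the prototxt and caffemodel file names below are placeholders, not my actual files):

```python
import numpy as np
import caffe

# Force CPU mode before the network is constructed
caffe.set_mode_cpu()

# Placeholder file names - in my script these are the actual deploy prototxt and trained weights
net = caffe.Net('deploy.prototxt', 'text_detection.caffemodel', caffe.TEST)

# Dummy input shaped like the network's data blob, just to exercise the forward pass
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape).astype(np.float32)

# Forward pass on CPU; outputs come back as a dict of blob name -> ndarray
outputs = net.forward()
print({name: blob.shape for name, blob in outputs.items()})
```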
Error screenshot:
Please let me know where I am going wrong, or whether I need to perform some additional steps.
Your quick response and help will be much appreciated.
Thank you,
Amar