A yolov4 implementation is also available in this repo.
The PyTorch implementation is ultralytics/yolov3. It provides two trained weights for yolov3-spp: yolov3-spp.pt and yolov3-spp-ultralytics.pt (originally named ultralytics68.pt).
The following tricks are used in this yolov3-spp:
- Yololayer plugin: unlike the plugin used in this repo's yolov3, the three yolo layers are implemented in one plugin here to improve speed; code derived from lewes6369/TensorRT-Yolov3.
- Batchnorm layer: implemented with a TensorRT scale layer (see the sketch after this list).
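As a point of reference, here is a minimal sketch of how a batchnorm layer can be folded into a TensorRT scale layer. The `weightMap`, the key suffixes (`.weight`, `.bias`, `.running_mean`, `.running_var`), and the helper's name and signature are assumptions for illustration; the actual helper in this repo may differ.

```cpp
// Hedged sketch: express a PyTorch BatchNorm2d as a TensorRT scale layer.
// Assumes a weightMap loaded from the generated .wts file; the key names
// (lname + ".weight", ".bias", ".running_mean", ".running_var") are illustrative.
#include <NvInfer.h>
#include <cmath>
#include <map>
#include <string>

using namespace nvinfer1;

IScaleLayer* addBatchNorm2d(INetworkDefinition* network,
                            std::map<std::string, Weights>& weightMap,
                            ITensor& input, const std::string& lname, float eps) {
    const float* gamma = (const float*)weightMap[lname + ".weight"].values;
    const float* beta  = (const float*)weightMap[lname + ".bias"].values;
    const float* mean  = (const float*)weightMap[lname + ".running_mean"].values;
    const float* var   = (const float*)weightMap[lname + ".running_var"].values;
    int len = weightMap[lname + ".running_var"].count;

    // y = gamma * (x - mean) / sqrt(var + eps) + beta
    //   = scale * x + shift, with power = 1
    float* scval = new float[len];
    float* shval = new float[len];
    float* pval  = new float[len];
    for (int i = 0; i < len; i++) {
        scval[i] = gamma[i] / std::sqrt(var[i] + eps);
        shval[i] = beta[i] - mean[i] * scval[i];
        pval[i]  = 1.0f;
    }
    Weights scale{DataType::kFLOAT, scval, len};
    Weights shift{DataType::kFLOAT, shval, len};
    Weights power{DataType::kFLOAT, pval, len};
    return network->addScale(input, ScaleMode::kCHANNEL, shift, scale, power);
}
```

TensorRT keeps pointers to the scale/shift/power buffers rather than copying them, so they must stay alive until the engine has been built.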
1. Generate yolov3-spp_ultralytics68.wts from the PyTorch implementation with yolov3-spp.cfg and yolov3-spp-ultralytics.pt, or download the .wts from the model zoo.
git clone https://github.com/wang-xinyu/tensorrtx.git
git clone https://github.com/ultralytics/yolov3.git
// download its weights 'yolov3-spp-ultralytics.pt'
cd yolov3
cp ../tensorrtx/yolov3-spp/gen_wts.py .
python gen_wts.py yolov3-spp-ultralytics.pt
// a file 'yolov3-spp_ultralytics68.wts' will be generated.
// the master branch of yolov3 should work, if not, you can checkout 4ac60018f6e6c1e24b496485f126a660d9c793d8
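The generated .wts is a plain-text weight dump that the C++ builder parses while constructing the network. As a rough sketch (the exact format and the function name here are assumptions; check gen_wts.py and the C++ sources for the authoritative layout), the file starts with the number of weight blobs, followed by one line per blob with its name, element count, and hex-encoded float values:

```cpp
// Hedged sketch of loading a tensorrtx-style .wts file into TensorRT Weights.
// Assumed layout: first line = number of blobs; then per line:
//   <name> <count> <count hex-encoded 32-bit float values>
#include <NvInfer.h>
#include <cstdint>
#include <cstdlib>
#include <fstream>
#include <map>
#include <string>

using namespace nvinfer1;

std::map<std::string, Weights> loadWeights(const std::string& file) {
    std::map<std::string, Weights> weightMap;
    std::ifstream input(file);
    int32_t count;
    input >> count;                       // number of weight blobs
    while (count--) {
        std::string name;
        uint32_t size;
        input >> name >> std::dec >> size;
        // Each value is stored as the hex bit pattern of a 32-bit float.
        uint32_t* val = reinterpret_cast<uint32_t*>(malloc(sizeof(uint32_t) * size));
        for (uint32_t i = 0; i < size; ++i) {
            input >> std::hex >> val[i];
        }
        weightMap[name] = Weights{DataType::kFLOAT, val, static_cast<int64_t>(size)};
    }
    return weightMap;
}
```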
2. Put yolov3-spp_ultralytics68.wts into yolov3-spp, then build and run.
mv yolov3-spp_ultralytics68.wts ../tensorrtx/yolov3-spp/
cd ../tensorrtx/yolov3-spp
mkdir build
cd build
cmake ..
make
sudo ./yolov3-spp -s // serialize model to plan file i.e. 'yolov3-spp.engine'
sudo ./yolov3-spp -d ../samples // deserialize plan file and run inference, the images in samples will be processed.
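Under the hood, `-s` builds the network from the .wts weights and serializes the engine to a plan file, while `-d` deserializes that plan and runs inference. Below is a minimal sketch of the two paths using TensorRT 7-era API names; the logger, file names, and surrounding structure are illustrative, not the exact code in yolov3-spp.cpp.

```cpp
// Hedged sketch of the -s / -d flow with a TensorRT 7-era API.
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

using namespace nvinfer1;

class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

// -s: after the engine has been built from the .wts weights (network
// construction omitted here), serialize it to 'yolov3-spp.engine'.
void serializeEngine(ICudaEngine* engine) {
    IHostMemory* modelStream = engine->serialize();
    std::ofstream plan("yolov3-spp.engine", std::ios::binary);
    plan.write(reinterpret_cast<const char*>(modelStream->data()), modelStream->size());
    modelStream->destroy();
}

// -d: read the plan file back and create an execution context for inference.
IExecutionContext* deserializeEngine(const std::string& planFile) {
    std::ifstream file(planFile, std::ios::binary);
    std::string plan((std::istreambuf_iterator<char>(file)),
                     std::istreambuf_iterator<char>());
    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine =
        runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
    return engine->createExecutionContext();
}
```

Since the network uses a custom yololayer plugin, that plugin must also be known to TensorRT before the plan can be deserialized.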
3. Check the generated images, _zidane.jpg and _bus.jpg.
- Input shape is defined in yololayer.h (see the snippet after this list for typical values)
- Number of classes is defined in yololayer.h
- FP16/FP32 can be selected by a macro in yolov3-spp.cpp
- GPU id can be selected by a macro in yolov3-spp.cpp
- NMS threshold is set in yolov3-spp.cpp
- BBox confidence threshold is set in yolov3-spp.cpp
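For orientation, these settings typically look like the following. The constant and macro names and the values shown are assumptions for illustration; check yololayer.h and yolov3-spp.cpp for the actual definitions.

```cpp
// Illustrative only: names and values are assumptions, not necessarily
// the exact ones used in this repo.

// yololayer.h
static constexpr int INPUT_H   = 608;  // input shape
static constexpr int INPUT_W   = 608;
static constexpr int CLASS_NUM = 80;   // number of classes

// yolov3-spp.cpp
#define USE_FP16              // comment out to build in FP32
#define DEVICE 0              // GPU id
#define NMS_THRESH 0.4        // NMS threshold
#define BBOX_CONF_THRESH 0.5  // bbox confidence threshold
```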
For more information, see the README on the home page of this repo.