This repository is the original implementation of RAPiD-REPP, RAPiD-FA and RAPiD-FGFA. Details of these algorithms can be found on our project website, and the paper will be published at WACV 2022.
- RAPiD-T is implemented by combining RAPiD with SOTA object tracking algorithms designed for side-view regular cameras, namely REPP and FGFA.
- This repository is forked from the original repository of RAPiD.
- For REPP, we used the official implementation and adapted it to rotated bounding boxes.
- For FA and FGFA, we implemented our own versions based on our best understanding.
- You can download the weights of trained models from Google Drive. Place these weight files in the `weights` folder.
- The above code can be tested using example frames that can be downloaded from Google Drive. You need to unzip this file and put `warehouse_samples` in `examples`.
- If you want to use your own dataset, please make sure that your frames are named as `<video_name>.<6 digit frameid>.png`.
- Follow the steps in `inference/RAPiD-REPP.ipynb`, `inference/RAPiD-FA.ipynb` and `inference/RAPiD-FGFA.ipynb` to compute the detections and produce a video with detections superimposed on the frames.
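If you are preparing your own dataset, a small script like the following (not part of this repository; the directory path and function name are illustrative) can verify that your frame filenames match the expected `<video_name>.<6 digit frameid>.png` convention before running the notebooks:

```python
import re
from pathlib import Path

# Expected pattern: <video_name>.<6 digit frameid>.png
FRAME_PATTERN = re.compile(r"^(?P<video>.+)\.(?P<frame>\d{6})\.png$")

def check_frame_names(frames_dir):
    """Return a sorted list of .png filenames that do NOT match the convention."""
    bad = []
    for path in sorted(Path(frames_dir).glob("*.png")):
        if not FRAME_PATTERN.match(path.name):
            bad.append(path.name)
    return bad

if __name__ == "__main__":
    # Example path; point this at your own frames directory.
    for name in check_frame_names("examples/warehouse_samples"):
        print("Non-conforming filename:", name)
```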
The RAPiD-T source code is available for non-commercial use. If you find our code and the WEPDTOF dataset useful, or publish any work reporting results obtained with this source code, please consider citing our paper:
M.O. Tezcan, Z. Duan, M. Cokbas, P. Ishwar, and J. Konrad, “WEPDTOF: A Dataset and Benchmark
Algorithms for In-the-Wild People Detection and Tracking from Overhead Fisheye Cameras”
in Proc. IEEE/CVF Winter Conf. on Applications of Computer Vision (WACV), 2022.