# CVPRW2023: Enhancing Multi-Camera People Tracking with Anchor-Guided Clustering and Spatio-Temporal Consistency ID Re-Assignment

This is the official repository for the 7th NVIDIA AI City Challenge (2023) Track 1: Multi-Camera People Tracking. [Arxiv]

## Dataset Availability

The official dataset can be downloaded from the AI City Challenge website (https://www.aicitychallenge.org/2023-data-and-evaluation/). You need to fill out the dataset request form to obtain the password to download them.

Per the DATASET LICENSE AGREEMENT from the dataset author(s), we are not allowed to share the dataset:

> 2.c. ... you may not copy, sell, rent, sublicense, transfer or distribute the DATASET, or share with others.

## Ranking

## Overall Pipeline

## Environment Requirements

The implementation of our work is built upon BoT-SORT, OpenMMLab, and torchreid. We also adapt Cal_PnP for camera self-calibration.

Four different environments are required for the reproduction process. Please install these four environments according to the following repos:

1. Installation for mmyolo\*
2. Installation for mmpose
3. Installation for torchreid\*
4. Installation for BoT-SORT

\* optional for fast reproduction

## Training

### Train Synthetic Detector (skip for fast reproduction)

1. Prepare the MTMC dataset and annotations. Download `AIC23_Track1_MTMC_Tracking.zip` from the AI City Challenge organizers, unzip it under the root directory of this repo, and run:

```bash
bash scripts/0_prepare_mtmc_data.sh
```

You should see the data folder organized as follows:

```
data
├── annotations
│   ├── fine_tune
│   │   ├── train_hospital_val_hospital_sr_20_0_img_15197.json
│   │   ├── train_market_val_market_sr_20_0_img_19965.json
│   │   ├── train_office_val_office_sr_20_0_img_20696.json
│   │   └── train_storage_val_storage_sr_20_0_img_15846.json
│   └── train_all_val_all_sr_20_10_img_77154.json
├── train
│   ├── S002
│   │   ├── c008
│   │   │   ├── frame
│   │   │   ├── label.txt
│   │   │   └── video.mp4
│   .   .
│   .   .
├── validation
└── test
```
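After unzipping, a quick sanity check can confirm that every camera folder contains the entries shown in the tree above. This helper is not part of the repo; it is a minimal sketch assuming the layout printed above:

```python
import os

# Expected contents of each camera folder, per the directory tree above.
REQUIRED = {"frame", "label.txt", "video.mp4"}

def check_camera_dirs(root="data/train"):
    """Return camera folders under root that are missing any required entry."""
    incomplete = []
    for scene in sorted(os.listdir(root)):            # e.g. S002
        scene_dir = os.path.join(root, scene)
        if not os.path.isdir(scene_dir):
            continue
        for cam in sorted(os.listdir(scene_dir)):     # e.g. c008
            cam_dir = os.path.join(scene_dir, cam)
            if os.path.isdir(cam_dir) and not REQUIRED <= set(os.listdir(cam_dir)):
                incomplete.append(cam_dir)
    return incomplete
```

An empty return value means every scene/camera folder is complete.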
2. Train YOLOv7 models on synthetic data:

```bash
bash scripts/1_train_detector.sh
```

\* Note that the provided configs are the ones we used in our submission. They may not be optimal for your GPU; please adjust the batch size accordingly.

### Train Synthetic ReID Model (skip for fast reproduction)

1. Prepare the ReID dataset:

```bash
mkdir deep-person-reid/reid-data
```

Download our sampled dataset and unzip it under `deep-person-reid/reid-data`.

\* Note that the file name DukeMTMC is used only for training convenience; the DukeMTMC dataset itself is not used in our training process.

2. Train the ReID model on synthetic data:

```bash
bash scripts/2_train_reid.sh
```

## Inference

### Get Detection (skip for fast reproduction)

1. To fast reproduce: directly use the txt files in the `data/test_det` folder and skip the following steps.

2. Prepare models.

3. Get real (S001) detections:

```bash
bash scripts/3_inference_det_real.sh
```

4. Get synthetic detections:

```bash
bash scripts/4_inference_det_syn.sh
```
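The column layout of the detection txt files is not documented here. A common convention in MOT-style pipelines is one detection per line as `frame,x,y,w,h,score`; the parser below is a sketch under that assumption, so verify one file from `data/test_det` before relying on it:

```python
def load_detections(path):
    """Parse a detection txt file, assuming `frame,x,y,w,h,score` per line.

    The exact column order is an assumption, not taken from this repo --
    inspect an actual file in data/test_det to confirm.
    """
    dets = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            frame, x, y, w, h, score = map(float, line.split(",")[:6])
            # Group detections by (integer) frame index.
            dets.setdefault(int(frame), []).append((x, y, w, h, score))
    return dets
```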

### Get Embedding

1. To fast reproduce: download the embedding npy files, put all of them under `data/test_emb`, and skip steps 2 and 3.

2. Prepare models (optional).

3. Get appearance embeddings (optional):

```bash
bash scripts/5_inference_emb.sh
```
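The appearance embeddings are stored as npy arrays. Clustering-based association of the kind this pipeline uses typically starts from a cosine-similarity matrix over those embeddings; a minimal sketch (NumPy assumed, not the repo's actual code):

```python
import numpy as np

def cosine_similarity_matrix(emb):
    """emb: (N, D) array of appearance embeddings -> (N, N) cosine similarities."""
    emb = np.asarray(emb, dtype=np.float64)
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    unit = emb / np.clip(norms, 1e-12, None)   # guard against zero vectors
    return unit @ unit.T
```

High off-diagonal entries indicate detections likely to belong to the same identity.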

### Run Tracking

`<root_path>` in the following commands should be set to the location of this repo.

1. Navigate to the BoT-SORT folder:

```bash
cd BoT-SORT
```

2. Run tracking:

```bash
conda activate botsort_env
python tools/run_tracking.py <root_path>
```
3. Generate foot keypoints:

```bash
conda activate mmpose
cd ../mmpose
python demo/top_down_video_demo_with_track_file.py <tracking_file.txt> \
       configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py \
       https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
       --video-path <video_file.mp4> \
       --out-file <out_keypoint.json>
python tools/convert.py
```
4. Conduct spatio-temporal consistency re-assignment:

```bash
python STCRA/run_stcra.py <input_tracking_file_folder> <output_tracking_file_folder>
```
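The intuition behind temporal-consistency re-assignment is that two tracklets can only share an identity if they never appear at the same time. The toy check below illustrates that idea; it is an illustration only, not the repo's actual STCRA logic:

```python
def can_merge(track_a, track_b, max_gap=30):
    """Toy temporal-consistency check for ID re-assignment.

    track_a, track_b: sorted lists of frame indices for two tracklets.
    Two tracklets may share an ID only if their frame ranges do not
    overlap and the gap between them is at most max_gap frames.
    (Illustrative sketch -- the real STCRA also uses spatial cues.)
    """
    first, second = sorted((track_a, track_b), key=lambda t: t[0])
    gap = second[0] - first[-1]
    return 0 < gap <= max_gap
```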
5. Generate the final submission:

```bash
cd ../BoT-SORT
python tools/aic_interpolation.py <root_path>
python tools/boundaryrect_removal.py <root_path>
python tools/generate_submission.py <root_path>
```
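Track interpolation of the kind `aic_interpolation.py` presumably performs fills short gaps in a track by linearly interpolating boxes between two observed frames. The helper below is a generic sketch of that operation, not the script's exact logic:

```python
def interpolate_boxes(f0, box0, f1, box1):
    """Linearly interpolate (x, y, w, h) boxes for the missing frames
    strictly between observed frames f0 and f1. Returns {frame: box}."""
    out = {}
    for f in range(f0 + 1, f1):
        t = (f - f0) / (f1 - f0)          # fraction of the way from f0 to f1
        out[f] = tuple(a + t * (b - a) for a, b in zip(box0, box1))
    return out
```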