From a078f5bf6ae9fbeecbc1384479d5f02ab8b9e7f6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=B8=85=E6=99=A8=E7=9A=84=E9=9B=BE?= <45256257+ret-1@users.noreply.github.com>
Date: Mon, 21 Aug 2023 10:36:11 +0800
Subject: [PATCH] add TrackEval & update readme (#5)

---
 .gitmodules |  3 +++
 README.md   | 47 ++++++++++++++++++++++++++++++++++++++++++++++-
 TrackEval   |  1 +
 3 files changed, 50 insertions(+), 1 deletion(-)
 create mode 100644 .gitmodules
 create mode 160000 TrackEval

diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000..0bc7c6b
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "TrackEval"]
+	path = TrackEval
+	url = https://github.com/ret-1/TrackEval
diff --git a/README.md b/README.md
index ae4ef7c..e8efedd 100644
--- a/README.md
+++ b/README.md
@@ -29,7 +29,7 @@ MixSort is the proposed baseline tracker in [**SportMOT**](https://github.com/MC
 ## Installation
 
 ```shell
-git clone https://github.com/MCG-NJU/MixSort
+git clone --recursive https://github.com/MCG-NJU/MixSort
 cd MixSort
 
 conda create -n MixSort python=3.8
@@ -196,6 +196,51 @@ For `iou`, use `track_byte.py` with `--iou_only` option.
 
 Use [TrackEval](https://github.com/JonathonLuiten/TrackEval) for detailed evaluation.
 
+We have integrated TrackEval as a submodule in this repo to help you evaluate the results easily. If you haven't cloned this repo with the `--recursive` option, you can run the following commands to get TrackEval:
+
+```shell
+cd <MixSort_HOME>
+git submodule update --init --recursive
+```
+
+`TrackEval/data` has the following structure:
+
+```
+data
+├── gt
+│   └── mot_challenge
+│       ├── sports-val
+│       ├── ... (other datasets)
+│       └── seqmaps
+│           ├── sports-val.txt
+│           └── ... (other datasets)
+└── trackers
+    └── mot_challenge
+        ├── sports-val
+        │   ├── <exp_name>
+        │   │   └── data
+        │   │       ├── <seq_name>.txt
+        │   │       └── ... (other sequences)
+        │   └── ... (other exps)
+        └── ... (other datasets)
+```
+
+For example, to evaluate on the SportsMOT validation set, you could create a symbolic link as follows:
+
+```shell
+cd <MixSort_HOME>
+ln -s datasets/SportsMOT/val TrackEval/data/gt/mot_challenge/sports-val
+```
+
+Then put the tracking results in `TrackEval/data/trackers/mot_challenge/sports-val/<exp_name>/data`, or create a symbolic link to them. Finally, run the following command to evaluate the results:
+
+```shell
+cd <MixSort_HOME>
+python TrackEval/scripts/run_mot_challenge.py --BENCHMARK sports --SPLIT_TO_EVAL val --TRACKERS_TO_EVAL <exp_name>
+# For the MOT17 validation set, add the following option: --GT_LOC_FORMAT '{gt_folder}/{seq}/gt/gt_val_half.txt'
+```
+
+We have also provided a Python helper in `TrackEval/scripts/eval_mot.py` to evaluate the results more conveniently. You can refer to it for more details.
 
 ## Citation
 
diff --git a/TrackEval b/TrackEval
new file mode 160000
index 0000000..5d466f2
--- /dev/null
+++ b/TrackEval
@@ -0,0 +1 @@
+Subproject commit 5d466f26e5cd60d1b9ce73d53a3e6a0066e898cc