- 9/16/2021: QAConv 2.1: simplify graph sampling, implement Einstein summation for QAConv, use the batch hard triplet loss, design an adaptive epoch and learning rate scheduling method, and apply automatic mixed precision training.
- 4/1/2021: QAConv 2.0 [2]: include a new sampler called the Graph Sampler (GS), and remove the class memory. This version is much more efficient to train. See the updated results below.
- 3/31/2021: QAConv 1.2: include some popular data augmentation methods, and change the ranking.py implementation to the original open-reid version, so that it is more consistent with most other implementations (e.g. open-reid, torch-reid, fast-reid).
- 2/7/2021: QAConv 1.1: an important update that includes a pre-training function for better initialization, so that the results are now more stable.
- 11/26/2020: Include the IBN-Net as backbone, and the RandPerson dataset.
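The 2.1 changelog above mentions implementing Einstein summation for QAConv. As a rough, hedged sketch (not the repository's actual implementation), the core idea of query-adaptive convolution can be written as an `einsum`: each spatial location of one feature map acts as a 1×1 kernel on the other map, and the best local responses are aggregated into a similarity score. The function name and aggregation below are illustrative assumptions:

```python
import numpy as np

def qaconv_similarity(feat_q, feat_g):
    """Illustrative local-matching similarity in the spirit of QAConv.

    feat_q, feat_g: L2-normalized feature maps of shape (C, H, W).
    Every location of one map is correlated with every location of
    the other via a single einsum; the best response per location is
    kept and the responses are averaged into one scalar similarity.
    """
    # All pairwise cosine similarities between locations: shape (H, W, H, W).
    scores = np.einsum('chw,cxy->hwxy', feat_q, feat_g)
    hw = scores.shape[0] * scores.shape[1]
    flat = scores.reshape(hw, hw)
    # Best match of each query location in the gallery map, and vice versa.
    q2g = flat.max(axis=1)
    g2q = flat.max(axis=0)
    return 0.5 * (q2g.mean() + g2q.mean())

# Toy usage: comparing a map with itself yields the maximum similarity of 1.
rng = np.random.default_rng(0)
f = rng.standard_normal((16, 4, 3))
f /= np.linalg.norm(f, axis=0, keepdims=True)  # unit-norm each location
print(round(qaconv_similarity(f, f), 6))  # -> 1.0
```

The einsum replaces an explicit loop over kernel locations, which is presumably where the claimed efficiency gain of version 2.1 comes from.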
Updated performance (%) of QAConv under direct cross-dataset evaluation without transfer learning or domain adaptation:
| Training Data | Version | Training Time (h) | CUHK03-NP Rank-1 | CUHK03-NP mAP | Market-1501 Rank-1 | Market-1501 mAP | MSMT17 Rank-1 | MSMT17 mAP |
|---|---|---|---|---|---|---|---|---|
| Market | QAConv 1.0 | 1.33 | 9.9 | 8.6 | - | - | 22.6 | 7.0 |
| Market | QAConv 1.1 | 1.02 | 12.4 | 11.3 | - | - | 35.6 | 12.2 |
| Market | QAConv 1.2 | 1.07 | 13.3 | 14.2 | - | - | 40.9 | 14.7 |
| Market | QAConv 2.0 | 0.68 | 16.4 | 15.7 | - | - | 41.2 | 15.0 |
| MSMT | QAConv 1.2 | 2.37 | 15.6 | 16.2 | 72.9 | 44.2 | - | - |
| MSMT | QAConv 2.0 | 0.96 | 20.0 | 19.2 | 75.1 | 46.7 | - | - |
| MSMT (all) | QAConv 1.0 | 26.90 | 25.3 | 22.6 | 72.6 | 43.1 | - | - |
| MSMT (all) | QAConv 1.1 | 18.16 | 27.1 | 25.0 | 76.0 | 47.9 | - | - |
| MSMT (all) | QAConv 1.2 | 17.85 | 25.1 | 24.8 | 79.5 | 52.3 | - | - |
| MSMT (all) | QAConv 2.0 | 3.88 | 27.2 | 27.1 | 80.6 | 55.6 | - | - |
| RandPerson | QAConv 1.1 | 12.05 | 12.9 | 10.8 | 68.0 | 36.8 | 36.6 | 12.1 |
| RandPerson | QAConv 1.2 | 12.22 | 12.6 | 12.1 | 73.2 | 42.1 | 41.8 | 13.8 |
| RandPerson | QAConv 2.0 | 1.84 | 14.8 | 13.4 | 74.0 | 43.8 | 42.4 | 14.4 |
Version Difference:
| Version | Backbone | IBN Type | Pre-trials | Loss | Sampler | Data Augmentation |
|---|---|---|---|---|---|---|
| QAConv 1.0 | ResNet-50 | None | x | Class Memory | Random | Old |
| QAConv 1.1 | ResNet-50 | b | √ | Class Memory | Random | Old |
| QAConv 1.2 | ResNet-50 | b | √ | Class Memory | Random | New |
| QAConv 2.0 | ResNet-50 | b | x | Pairwise Matching | GS | New |
Notes:
- Except for QAConv 1.0, all other versions additionally include three IN layers, as in IBN-Net-b.
- QAConv 1.1 and 1.2 additionally include a pre-training function with 10 trials to stabilize the results.
- QAConv 1.2 and 2.0 additionally apply some popular data augmentation methods.
- QAConv 2.0 applies the GS sampler and the pairwise matching loss.
- QAConv 1.0 results are obtained with neck=128, batch_size=32, lr=0.01, epochs=60, and step_size=40, trained on two V100 GPUs.
- QAConv 1.1 and 1.2 results are obtained with neck=64, batch_size=8, lr=0.005, epochs=15, and step_size=10 (except epochs=4 and step_size=2 for RandPerson), trained on a single V100 GPU.
- QAConv 2.0 results are obtained with neck=64, batch_size=64, K=4, lr=0.001, epochs=15, and step_size=10 (except epochs=4 and step_size=2 for RandPerson), trained on a single V100 GPU.
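The 2.1 changelog also mentions switching to the batch hard triplet loss. As a hedged sketch of that general technique (a generic version on a precomputed distance matrix, not this repository's code, which derives distances from QAConv matching scores), the function name and margin value below are illustrative assumptions:

```python
import numpy as np

def batch_hard_triplet_loss(dist, labels, margin=0.3):
    """Batch-hard triplet loss on a precomputed distance matrix.

    For each anchor, take the hardest (farthest) positive and the
    hardest (closest) negative within the batch, then apply a
    margin-based hinge. `dist` is (N, N); `labels` is (N,).
    """
    labels = np.asarray(labels)
    pos_mask = labels[:, None] == labels[None, :]
    neg_mask = ~pos_mask
    # Hardest positive: largest distance among same-identity pairs.
    hardest_pos = np.where(pos_mask, dist, -np.inf).max(axis=1)
    # Hardest negative: smallest distance among different-identity pairs.
    hardest_neg = np.where(neg_mask, dist, np.inf).min(axis=1)
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()

# Toy usage: positives are already closer than negatives by more than
# the margin, so the loss is zero.
dist = np.array([[0.0, 0.1, 0.9, 0.8],
                 [0.1, 0.0, 0.7, 0.9],
                 [0.9, 0.7, 0.0, 0.2],
                 [0.8, 0.9, 0.2, 0.0]])
labels = [0, 0, 1, 1]
print(batch_hard_triplet_loss(dist, labels))  # -> 0.0
```

This pairs naturally with a sampler such as GS that draws K instances per identity (the K=4 setting in the notes above), so every anchor in a batch has both positives and negatives to mine.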