| Model | Params (M) | Top-1 Error (%) |
|---|---|---|
| ResNet-101 + Mix Attention | 50.07 | 6.17 |
| WRN-18 + Mix Attention | 27.11 | 4.77 |

| Model | Params (M) | Top-1 Error (%) |
|---|---|---|
| ResNet-101 + Mix Attention | 50.07 | 23.19 |
| WRN-18 + Mix Attention | 27.11 | 19.11 |
| Model | Params (M) | Top-1 Accuracy (%) |
|---|---|---|
| MANet-B | 69.3 | 81.7 |
| MANet-S | 23.4 | 78.3 |
| MANet-T | 4.3 | 73.1 |

| Model | Params (M) | Top-1 Accuracy (%) |
|---|---|---|
| MANet-B | 69.3 | 97.2 |
| MANet-S | 23.4 | 95.1 |
| MANet-T | 4.3 | 93.4 |

| Model | Params (M) | Top-1 Accuracy (%) |
|---|---|---|
| MANet-B | 69.3 | 88.7 |
| MANet-S | 23.4 | 86.5 |
| MANet-T | 4.3 | 81.6 |
| Model | Backbone | Params (M) | Latency (ms) | AP |
|---|---|---|---|---|
| CAT-YOLO-v1 | CSPDarknet53-Tiny | 6.16 | 9.9 (TITAN RTX) | 24.1 |
| CAT-YOLO-v2 | MANet-T | 9.17 | 12.7 (TITAN RTX) | 25.7 |
| CAT-YOLO-v3 | MANet-T | 12.5 | 16.8 (TITAN RTX) | 33.5 |
Note:

- The modules and backbones are provided separately for CIFAR and ImageNet.
- The sub-folder named "Big Version" in Modules plays the role of one individual layer.
- The sub-folder named "Tiny Version" in Modules plays the role of the enhancement module in the network's bottleneck.
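To illustrate the general idea behind a mix-attention block used as a standalone layer, here is a minimal NumPy sketch. It is an assumption-laden toy, not the paper's implementation: the function name `mix_attention`, the branch designs (a channel gate from global average pooling and a spatial gate from a channel-wise average), and the averaging used to mix the two branches are all hypothetical stand-ins for the actual modules in this repository.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mix_attention(feature_map):
    """Toy mix-attention layer (hypothetical sketch, not the repo's module).

    Two attention branches reweight the same input feature map and their
    outputs are mixed by averaging.

    feature_map: array of shape (C, H, W), assumed non-negative (post-ReLU).
    """
    # Channel branch: global average pooling -> one sigmoid gate per channel.
    channel_gate = sigmoid(feature_map.mean(axis=(1, 2)))       # shape (C,)
    channel_out = feature_map * channel_gate[:, None, None]     # (C, H, W)

    # Spatial branch: channel-wise average -> one sigmoid gate per pixel.
    spatial_gate = sigmoid(feature_map.mean(axis=0))            # shape (H, W)
    spatial_out = feature_map * spatial_gate[None, :, :]        # (C, H, W)

    # Mix the two reweighted maps.
    return 0.5 * (channel_out + spatial_out)

if __name__ == "__main__":
    x = np.random.rand(16, 8, 8).astype(np.float32)
    y = mix_attention(x)
    print(y.shape)  # same shape as the input: (16, 8, 8)
```

Because both gates lie in (0, 1), the block only rescales activations and preserves the feature-map shape, which is what lets such a module drop into a backbone as "one individual layer" or inside a bottleneck without changing the surrounding architecture.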
```bibtex
@article{guan2022man,
  title={MAN and CAT: mix attention to nn and concatenate attention to YOLO},
  author={Guan, Runwei and Man, Ka Lok and Zhao, Haocheng and Zhang, Ruixiao and Yao, Shanliang and Smith, Jeremy and Lim, Eng Gee and Yue, Yutao},
  journal={The Journal of Supercomputing},
  pages={1--29},
  year={2022},
  publisher={Springer}
}
```