
# CWD

> Channel-wise Knowledge Distillation for Dense Prediction

## Abstract

Knowledge distillation (KD) has been proven to be a simple and effective tool for training compact models. Almost all KD variants for dense prediction tasks align the student and teacher networks' feature maps in the spatial domain, typically by minimizing point-wise and/or pair-wise discrepancy. Observing that in semantic segmentation the per-channel feature activations of some layers tend to encode the saliency of scene categories (analogous to class activation mapping), we propose to align features channel-wise between the student and teacher networks. To this end, we first transform the feature map of each channel into a probability map using softmax normalization, and then minimize the Kullback-Leibler (KL) divergence between the corresponding channels of the two networks. By doing so, our method focuses on mimicking the soft distributions of channels between networks. In particular, the KL divergence makes the learning pay more attention to the most salient regions of the channel-wise maps, which presumably correspond to the most useful signals for semantic segmentation. Experiments demonstrate that our channel-wise distillation considerably outperforms almost all existing spatial distillation methods for semantic segmentation, while requiring less computational cost during training. We consistently achieve superior performance on three benchmarks with various network structures.

*Figure: pipeline of channel-wise knowledge distillation.*
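The loss described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not this repository's implementation: the function name `channel_wise_kd_loss` and the temperature argument `tau` are hypothetical, and both feature maps are assumed to share the shape `(N, C, H, W)` with matching channel counts.

```python
import torch
import torch.nn.functional as F


def channel_wise_kd_loss(feat_s: torch.Tensor,
                         feat_t: torch.Tensor,
                         tau: float = 1.0) -> torch.Tensor:
    """Channel-wise KD sketch: softmax-normalize each channel's spatial
    map into a probability distribution, then minimize KL divergence
    between the corresponding teacher and student channels.

    feat_s, feat_t: student / teacher maps of shape (N, C, H, W).
    tau: softmax temperature (hypothetical name for this sketch).
    """
    n, c, h, w = feat_t.shape
    # Flatten spatial dims so every channel is a distribution over H*W locations.
    s = feat_s.view(n, c, -1)
    t = feat_t.view(n, c, -1)
    log_p_s = F.log_softmax(s / tau, dim=-1)   # student log-probabilities
    p_t = F.softmax(t / tau, dim=-1)           # teacher probabilities
    # KL(teacher || student) per (sample, channel), averaged over both;
    # the tau**2 factor keeps gradient magnitudes comparable across temperatures.
    kl = F.kl_div(log_p_s, p_t, reduction='none').sum(-1)
    return kl.mean() * (tau ** 2)
```

In the "logits" setting reported below, `feat_s` and `feat_t` would be the class-score maps of the student and teacher, where the channel count equals the number of classes and therefore matches between the two networks by construction.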

## Results and models

### Segmentation

| Location | Dataset    | Teacher     | Student    | mIoU  | mIoU (T) | mIoU (S) | Config | Download                  |
| :------: | :--------: | :---------: | :--------: | :---: | :------: | :------: | :----: | :-----------------------: |
| logits   | Cityscapes | pspnet_r101 | pspnet_r18 | 75.54 | 79.76    | 74.87    | config | teacher \| model \| log   |

### Detection

| Location | Dataset | Teacher     | Student    | mAP  | mAP (T) | mAP (S) | Config | Download                |
| :------: | :-----: | :---------: | :--------: | :--: | :-----: | :-----: | :----: | :---------------------: |
| cls head | COCO    | gfl_r101_2x | gfl_r50_1x | 41.9 | 44.7    | 40.2    | config | teacher \| model \| log |

## Citation

```latex
@inproceedings{shu2021channel,
  title={Channel-Wise Knowledge Distillation for Dense Prediction},
  author={Shu, Changyong and Liu, Yifan and Gao, Jianfei and Yan, Zheng and Shen, Chunhua},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={5311--5320},
  year={2021}
}
```