Paper Title
MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition
Paper Authors
Paper Abstract
Unlike the conventional Knowledge Distillation (KD), Self-KD allows a network to learn knowledge from itself without any guidance from extra networks. This paper proposes to perform Self-KD from image Mixture (MixSKD), which integrates these two techniques into a unified framework. MixSKD mutually distills feature maps and probability distributions between the random pair of original images and their mixup images in a meaningful way. Therefore, it guides the network to learn cross-image knowledge by modelling supervisory signals from mixup images. Moreover, we construct a self-teacher network by aggregating multi-stage feature maps for providing soft labels to supervise the backbone classifier, further improving the efficacy of self-boosting. Experiments on image classification and transfer learning to object detection and semantic segmentation demonstrate that MixSKD outperforms other state-of-the-art Self-KD and data augmentation methods. The code is available at https://github.com/winycg/Self-KD-Lib.
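To make the mechanism concrete, the snippet below sketches one plausible form of the mixup-based self-distillation objective in PyTorch: the prediction on the mixup image is encouraged to match the linear interpolation of the two original images' predictions. This is a minimal sketch, not the authors' implementation; the function name `mixskd_loss` and the hyper-parameters `alpha` and `temperature` are illustrative assumptions, and the paper's self-teacher branch that aggregates multi-stage feature maps is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def mixskd_loss(model, x, y, alpha=0.2, temperature=3.0):
    """Illustrative sketch of the MixSKD idea: mutually relate predictions
    on a random pair of original images and on their mixup image.
    (Hypothetical simplification; see the official repo for the real code.)
    """
    # Sample a mixup coefficient and build a random in-batch pairing.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]

    logits = model(x)          # predictions on the original images
    logits_mix = model(x_mix)  # predictions on the mixup image

    # Supervised losses: hard labels for the originals,
    # linearly interpolated labels for the mixup image.
    ce = F.cross_entropy(logits, y)
    ce_mix = lam * F.cross_entropy(logits_mix, y) \
        + (1.0 - lam) * F.cross_entropy(logits_mix, y[perm])

    # Self-distillation: the interpolation of the two original softened
    # distributions serves as a soft target for the mixup prediction.
    p = F.softmax(logits / temperature, dim=1)
    p_interp = lam * p + (1.0 - lam) * p[perm]
    log_q_mix = F.log_softmax(logits_mix / temperature, dim=1)
    kd = F.kl_div(log_q_mix, p_interp.detach(),
                  reduction="batchmean") * temperature ** 2

    return ce + ce_mix + kd
```

In a training loop, one would compute `loss = mixskd_loss(model, images, labels)` and back-propagate as usual; detaching the interpolated target makes it act as a soft label rather than a second gradient path.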