Paper Title

Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation

Paper Authors

Li Liu, Qingle Huang, Sihao Lin, Hongwei Xie, Bing Wang, Xiaojun Chang, Xiaodan Liang

Paper Abstract

Knowledge Distillation has shown very promising ability in transferring learned representations from a larger model (teacher) to a smaller one (student). Despite many efforts, prior methods ignore the important role of retaining the inter-channel correlation of features, leading to a failure to capture the intrinsic distribution of the feature space and the rich diversity properties of features in the teacher network. To solve this issue, we propose the novel Inter-Channel Correlation for Knowledge Distillation (ICKD), with which the diversity and homology of the student network's feature space can align with those of the teacher network. The correlation between two channels is interpreted as diversity if they are irrelevant to each other, and as homology otherwise. The student is then required to mimic this correlation within its own embedding space. In addition, we introduce grid-level inter-channel correlation, making the method capable of dense prediction tasks. Extensive experiments on two vision tasks, ImageNet classification and Pascal VOC segmentation, demonstrate the superiority of our ICKD, which consistently outperforms many existing methods and advances the state of the art in the field of Knowledge Distillation. To our knowledge, ours is the first knowledge-distillation-based method to boost ResNet18 beyond 72% Top-1 accuracy on ImageNet classification. Code is available at: https://github.com/ADLab-AutoDrive/ICKD.
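The method's core operation, matching a channel-wise Gram (inter-channel correlation) matrix between teacher and student features, is easy to sketch. Below is a minimal PyTorch illustration, not the authors' released implementation: the function names, the MSE objective, and the assumption that teacher and student feature maps already share the same channel count (e.g. via a 1x1 projection on the student) are all ours.

    import torch
    import torch.nn.functional as F

    def inter_channel_correlation(feat):
        # feat: (B, C, H, W) -> per-sample C x C correlation (Gram) matrix
        b, c, h, w = feat.shape
        flat = feat.reshape(b, c, h * w)              # flatten spatial dims
        return torch.bmm(flat, flat.transpose(1, 2)) / (h * w)

    def ickd_loss(f_student, f_teacher):
        # Align the student's inter-channel correlation with the teacher's.
        return F.mse_loss(inter_channel_correlation(f_student),
                          inter_channel_correlation(f_teacher))

    def grid_ickd_loss(f_student, f_teacher, grid=(2, 2)):
        # Grid-level variant for dense prediction: split each feature map
        # into patches and match per-patch correlations. Assumes H and W
        # are divisible by the grid size.
        gh, gw = grid
        _, _, h, w = f_student.shape
        ph, pw = h // gh, w // gw
        loss = f_student.new_zeros(())
        for i in range(gh):
            for j in range(gw):
                ps = f_student[:, :, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
                pt = f_teacher[:, :, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
                loss = loss + ickd_loss(ps, pt)
        return loss / (gh * gw)

For example, ickd_loss(torch.randn(4, 256, 8, 8), torch.randn(4, 256, 8, 8)) returns a scalar that can be weighted and added to the task loss; the exact normalization and loss weighting used in the paper may differ from this sketch.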
