Paper Title
CEKD: Cross Ensemble Knowledge Distillation for Augmented Fine-grained Data
Paper Authors
Abstract
Data augmentation has proven effective in training deep models. Existing data augmentation methods tackle the fine-grained problem by blending image pairs and fusing the corresponding labels according to the statistics of the mixed pixels, which introduces additional noise that harms network performance. Motivated by this, we present a simple yet effective Cross Ensemble Knowledge Distillation (CEKD) model for fine-grained feature learning. We propose a novel cross distillation module that provides additional supervision to alleviate the noise problem, and a collaborative ensemble module to overcome the target conflict problem. The proposed model can be trained end-to-end and requires only image-level label supervision. Extensive experiments on widely used fine-grained benchmarks demonstrate the effectiveness of the proposed model. Specifically, with a ResNet-101 backbone, CEKD achieves accuracies of 89.59%, 95.96% and 94.56% on three datasets respectively, outperforming the state-of-the-art API-Net by 0.99%, 1.06% and 1.16%.
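To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of one training step combining mixup-style augmentation with peer-to-peer ("cross") distillation and an averaged ensemble target. This is an illustrative reconstruction under stated assumptions, not the paper's actual implementation: the exact CEKD losses, temperature, and module wiring are not given in the abstract, and all names here (`net_a`, `net_b`, `mixup_data`, `TEMPERATURE`) are hypothetical.

```python
# Illustrative sketch only: two peer networks trained on mixed images,
# each distilled from the other's soft output and from their ensemble.
import torch
import torch.nn.functional as F

TEMPERATURE = 4.0  # softening temperature for distillation (assumed value)

def mixup_data(x, y, alpha=1.0):
    """Blend image pairs and return both label sets with the mixing weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], y, y[idx], lam

def kd_loss(student_logits, teacher_logits, t=TEMPERATURE):
    """Standard KD loss: KL divergence between temperature-softened outputs."""
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)

def train_step(net_a, net_b, x, y, optimizer):
    """One joint update of both peer networks on a mixed batch."""
    mixed_x, y_a, y_b, lam = mixup_data(x, y)
    logits_a, logits_b = net_a(mixed_x), net_b(mixed_x)

    # Classification loss against the (noisy) mixed labels.
    def mix_ce(logits):
        return (lam * F.cross_entropy(logits, y_a)
                + (1 - lam) * F.cross_entropy(logits, y_b))

    # Cross distillation: each network gets extra supervision from its
    # peer's soft predictions; the averaged logits act as a shared
    # collaborative ensemble target.
    ensemble = (logits_a.detach() + logits_b.detach()) / 2
    loss = (
        mix_ce(logits_a) + mix_ce(logits_b)
        + kd_loss(logits_a, logits_b.detach())
        + kd_loss(logits_b, logits_a.detach())
        + kd_loss(logits_a, ensemble)
        + kd_loss(logits_b, ensemble)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the soft targets supply supervision that is unaffected by the label-mixing noise, while the shared ensemble target keeps the two peers from pulling toward conflicting objectives; the optimizer is assumed to hold the parameters of both networks.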