Paper Title
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Paper Authors
Paper Abstract
Knowledge distillation, which involves extracting the "dark knowledge" from a teacher network to guide the learning of a student network, has emerged as an essential technique for model compression and transfer learning. Unlike previous works that focus on the accuracy of the student network, here we study a little-explored but important question, i.e., knowledge distillation efficiency. Our goal is to achieve a performance comparable to conventional knowledge distillation with a lower computation cost during training. We show that UNcertainty-aware mIXup (UNIX) can serve as a clean yet effective solution. The uncertainty sampling strategy is used to evaluate the informativeness of each training sample. Adaptive mixup is applied to uncertain samples to compact knowledge. We further show that the redundancy of conventional knowledge distillation lies in the excessive learning of easy samples. By combining uncertainty and mixup, our approach reduces the redundancy and makes better use of each query to the teacher network. We validate our approach on CIFAR100 and ImageNet. Notably, with only 79% of the computation cost, we outperform conventional knowledge distillation on CIFAR100 and achieve a comparable result on ImageNet.
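To make the two steps in the abstract (uncertainty sampling, then mixup on the retained samples) concrete, below is a minimal sketch of the idea, not the authors' implementation. All names and choices here are assumptions: uncertainty is taken as the entropy of the student's softmax output, a fixed `keep_ratio` decides how many uncertain samples are kept, and the mixup coefficient is drawn from a Beta distribution; the paper may weight, schedule, or adapt these differently.

```python
# Hypothetical sketch of uncertainty-aware mixup for efficient distillation.
import torch
import torch.nn.functional as F

def uncertainty_aware_mixup(student, x, keep_ratio=0.5, alpha=0.2):
    """Keep only the most uncertain samples in the batch and mix them pairwise,
    so fewer (but more informative) samples are forwarded through the teacher."""
    with torch.no_grad():
        probs = F.softmax(student(x), dim=1)
        # Per-sample uncertainty: entropy of the student's predictive distribution.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    k = max(1, int(keep_ratio * x.size(0)))
    idx = entropy.topk(k).indices                   # k most uncertain samples
    x_sel = x[idx]
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(k, device=x.device)
    x_mix = lam * x_sel + (1 - lam) * x_sel[perm]   # mix uncertain samples with each other
    return x_mix, idx, perm, lam

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard KL-based distillation loss, applied only to the smaller mixed batch."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
```

In this reading, the saving in training computation comes from querying the teacher only on the reduced mixed batch (`keep_ratio` of the original batch size) rather than on every sample.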