Paper Title

Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning

Authors

Zeyuan Allen-Zhu, Yuanzhi Li

Abstract

We formally study how ensemble of deep learning models can improve test accuracy, and how the superior performance of ensemble can be distilled into a single model using knowledge distillation. We consider the challenging case where the ensemble is simply an average of the outputs of a few independently trained neural networks with the SAME architecture, trained using the SAME algorithm on the SAME data set, and they only differ by the random seeds used in the initialization. We show that ensemble/knowledge distillation in Deep Learning works very differently from traditional learning theory (such as boosting or NTKs, neural tangent kernels). To properly understand them, we develop a theory showing that when data has a structure we refer to as ``multi-view'', then ensemble of independently trained neural networks can provably improve test accuracy, and such superior test accuracy can also be provably distilled into a single model by training a single model to match the output of the ensemble instead of the true label. Our result sheds light on how ensemble works in deep learning in a way that is completely different from traditional theorems, and how the ``dark knowledge'' is hidden in the outputs of the ensemble and can be used in distillation. In the end, we prove that self-distillation can also be viewed as implicitly combining ensemble and knowledge distillation to improve test accuracy.
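As a concrete illustration of the setting described in the abstract, below is a minimal sketch (not taken from the paper) of seed-only ensembling and of knowledge distillation against the ensemble's averaged output. The architecture, hyperparameters, and helper names (`make_model`, `train_one_model`, `distill`) are placeholders chosen for illustration; the paper's contribution is the theory of why this works, not any particular implementation.

```python
# Minimal sketch (assumptions, not the paper's code): K networks with the SAME
# architecture, trained by the SAME procedure on the SAME data, differing only
# in the random seed; the ensemble averages their outputs, and a single student
# is distilled by matching the ensemble's soft output instead of the true label.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model() -> nn.Module:
    # Placeholder architecture; only sameness across members matters here.
    return nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU(),
                         nn.Linear(256, 10))

def train_one_model(model: nn.Module, loader, epochs: int = 10) -> nn.Module:
    # Placeholder for the identical training procedure used for every member.
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model

def train_ensemble(loader, num_members: int = 3) -> list[nn.Module]:
    # Members differ ONLY in the random seed used at initialization.
    members = []
    for seed in range(num_members):
        torch.manual_seed(seed)
        members.append(train_one_model(make_model(), loader))
    return members

@torch.no_grad()
def ensemble_logits(members: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    # The ensemble is simply the average of the members' outputs.
    return torch.stack([m(x) for m in members]).mean(dim=0)

def distill(members: list[nn.Module], loader, epochs: int = 10, T: float = 4.0) -> nn.Module:
    # Knowledge distillation: a single model (same architecture) is trained to
    # match the ensemble's soft output rather than the one-hot true label.
    student = make_model()
    opt = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)
    for _ in range(epochs):
        for x, _ in loader:  # true labels are not used for the distillation loss
            teacher_p = F.softmax(ensemble_logits(members, x) / T, dim=-1)
            student_logp = F.log_softmax(student(x) / T, dim=-1)
            loss = F.kl_div(student_logp, teacher_p, reduction="batchmean") * T * T
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```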
