Paper Title

Rényi Divergence Deep Mutual Learning

Authors

Huang, Weipeng, Tao, Junjie, Deng, Changbo, Fan, Ming, Wan, Wenqiang, Xiong, Qi, Piao, Guangyuan

Abstract

This paper revisits Deep Mutual Learning (DML), a simple yet effective computing paradigm. We propose using the more flexible and tunable Rényi divergence in place of the KL divergence to improve vanilla DML. This modification consistently improves performance over vanilla DML with limited additional complexity. We analyze the convergence properties of the proposed paradigm theoretically and show that Stochastic Gradient Descent with a constant learning rate converges with an $\mathcal{O}(1)$ bias in the worst case for nonconvex optimization tasks. That is, learning reaches nearby local optima but continues searching within a bounded scope, which may help mitigate overfitting. Finally, our extensive empirical results demonstrate the advantage of combining DML and the Rényi divergence, leading to further improvement in model generalization.
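To make the key quantity concrete, below is a minimal sketch (not the authors' code) of the Rényi divergence of order $\alpha$ between two discrete distributions, $D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log \sum_i p_i^\alpha q_i^{1-\alpha}$, which recovers the KL divergence used in vanilla DML as $\alpha \to 1$. The function name and the `eps` smoothing are illustrative choices, not part of the paper.

```python
import numpy as np

def renyi_divergence(p, q, alpha=0.5, eps=1e-12):
    """Rényi divergence D_alpha(P || Q) between discrete distributions.

    D_alpha = log(sum_i p_i^alpha * q_i^(1 - alpha)) / (alpha - 1).
    As alpha -> 1 this recovers the KL divergence, so alpha acts as the
    extra tunable knob compared with vanilla (KL-based) DML.
    """
    # Smooth and renormalize to avoid log(0) and division by zero.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    if np.isclose(alpha, 1.0):
        # KL limit of the Rényi family.
        return float(np.sum(p * np.log(p / q)))
    return float(np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0))
```

In a mutual-learning setup, `p` and `q` would be the softmax outputs of the two peer networks on the same input, and this term would replace the KL term in each peer's loss.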
