Paper Title

On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity

Authors

Vincent Szolnoky, Viktor Andersson, Balazs Kulcsar, Rebecka Jörnsten

Abstract

Most complex machine learning and modelling techniques are prone to over-fitting and may subsequently generalise poorly to future data. Artificial neural networks are no different in this regard and, despite having a level of implicit regularisation when trained with gradient descent, often require the aid of explicit regularisers. We introduce a new framework, Model Gradient Similarity (MGS), that (1) serves as a metric of regularisation, which can be used to monitor neural network training, (2) adds insight into how explicit regularisers, while derived from widely different principles, operate via the same mechanism underneath by increasing MGS, and (3) provides the basis for a new regularisation scheme which exhibits excellent performance, especially in challenging settings such as high levels of label noise or limited sample sizes.
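The abstract does not spell out how model gradient similarity is computed, but the name suggests comparing per-sample loss gradients with respect to the model parameters. As a purely illustrative sketch (not the paper's definition), one simple proxy is the mean pairwise cosine similarity between per-sample gradients; the toy linear model, data, and the name `mgs_proxy` below are all assumptions for demonstration:

```python
import numpy as np

# Hypothetical MGS proxy: mean pairwise cosine similarity between
# per-sample loss gradients. This is NOT the paper's exact definition,
# only an illustration of the "gradient similarity" idea.

rng = np.random.default_rng(0)

# Toy linear model y_hat = w @ x with squared loss; the per-sample
# gradient is dL_i/dw = (w @ x_i - y_i) * x_i.
X = rng.normal(size=(8, 3))   # 8 samples, 3 parameters
y = rng.normal(size=8)
w = rng.normal(size=3)

grads = (X @ w - y)[:, None] * X                 # (n_samples, n_params)
unit = grads / np.linalg.norm(grads, axis=1, keepdims=True)
cos = unit @ unit.T                              # pairwise cosine similarities
n = len(X)
mgs_proxy = (cos.sum() - n) / (n * (n - 1))      # mean over off-diagonal pairs

print(f"gradient-similarity proxy: {mgs_proxy:.4f}")
```

Under this reading, the abstract's claim is that diverse explicit regularisers all push such a similarity score upward, i.e. they align the per-sample gradients during training.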
