Paper Title
Singular Value Perturbation and Deep Network Optimization
Paper Authors
Paper Abstract
We develop new theoretical results on matrix perturbation to shed light on the impact of architecture on the performance of a deep network. In particular, we explain analytically what deep learning practitioners have long observed empirically: the parameters of some deep architectures (e.g., residual networks, ResNets, and Dense networks, DenseNets) are easier to optimize than others (e.g., convolutional networks, ConvNets). Building on our earlier work connecting deep networks with continuous piecewise-affine splines, we develop an exact local linear representation of a deep network layer for a family of modern deep networks that includes ConvNets at one end of a spectrum and ResNets, DenseNets, and other networks with skip connections at the other. For regression and classification tasks that optimize the squared-error loss, we show that the optimization loss surface of a modern deep network is piecewise quadratic in the parameters, with local shape governed by the singular values of a matrix that is a function of the local linear representation. We develop new perturbation results for how the singular values of matrices of this sort behave as we add a fraction of the identity and multiply by certain diagonal matrices. A direct application of our perturbation results explains analytically why a network with skip connections (such as a ResNet or DenseNet) is easier to optimize than a ConvNet: thanks to its more stable singular values and smaller condition number, the local loss surface of such a network is less erratic, less eccentric, and features local minima that are more accommodating to gradient-based optimization. Our results also shed new light on the impact of different nonlinear activation functions on a deep network's singular values, regardless of its architecture.
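The abstract's central claim, that adding a fraction of the identity (the local linear effect of a skip connection) stabilizes a matrix's singular values and shrinks its condition number, can be checked numerically. The sketch below is illustrative only and not from the paper: the matrix A is a stand-in for a layer's local linear representation, and alpha is a hypothetical skip-connection weight.

```python
# Minimal numerical sketch (assumption: a random Gaussian matrix as a proxy for
# a layer's local linear representation). Adding alpha * I, as a skip connection
# does locally, typically lifts the smallest singular value and reduces the
# condition number, consistent with the abstract's claim.
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in "layer" matrix

for alpha in [0.0, 0.5, 1.0]:                  # alpha = 1.0 mimics a ResNet-style skip
    M = alpha * np.eye(n) + A
    s = np.linalg.svd(M, compute_uv=False)     # singular values, largest first
    print(f"alpha={alpha:.1f}  sigma_max={s[0]:.3f}  "
          f"sigma_min={s[-1]:.3f}  cond={s[0] / s[-1]:.1f}")
```

Running this typically shows the condition number dropping by orders of magnitude as alpha grows from 0 to 1, which is the qualitative behavior the paper's perturbation results characterize analytically.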