Paper Title
Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Kernel Renormalization
Paper Authors
Paper Abstract
The success of deep learning in many real-world tasks has triggered an intense effort to understand the power and limitations of deep learning in the training and generalization of complex tasks, so far with limited progress. In this work, we study the statistical mechanics of learning in Deep Linear Neural Networks (DLNNs) in which the input-output function of an individual unit is linear. Despite the linearity of the units, learning in DLNNs is nonlinear, hence studying its properties reveals some of the features of nonlinear Deep Neural Networks (DNNs). Importantly, we solve exactly the network properties following supervised learning using an equilibrium Gibbs distribution in the weight space. To do this, we introduce the Back-Propagating Kernel Renormalization (BPKR), which allows for the incremental integration of the network weights starting from the network output layer and progressing backward until the first layer's weights are integrated out. This procedure allows us to evaluate important network properties, such as its generalization error, the role of network width and depth, the impact of the size of the training set, and the effects of weight regularization and learning stochasticity. BPKR does not assume specific statistics of the input or the task's output. Furthermore, by performing partial integration of the layers, the BPKR allows us to compute the properties of the neural representations across the different hidden layers. We have proposed an extension of the BPKR to nonlinear DNNs with ReLU activations. Surprisingly, our numerical simulations reveal that despite the nonlinearity, the predictions of our theory are largely shared by ReLU networks in a wide regime of parameters. Our work is the first exact statistical mechanical study of learning in a family of DNNs, and the first successful theory of learning through successive integration of degrees of freedom (DoFs) in the learned weight space.
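For concreteness, here is a minimal sketch of the kind of setup the abstract describes; the notation below is illustrative and chosen for exposition, not necessarily the paper's. A depth-$L$ linear network with weight matrices $W_1, \dots, W_L$ computes $f(\mathbf{x}) = W_L \cdots W_1 \mathbf{x}$, which is linear in the input $\mathbf{x}$ but a degree-$L$ product of the weights, so the training energy is not quadratic in the weights and learning is nonlinear. Assuming a quadratic training error and an L2 (Frobenius-norm) weight regularizer, the equilibrium Gibbs distribution over weights referred to above takes the form

$$
P(\{W_l\}) \;\propto\; \exp\!\left[-\beta\left(\frac{1}{2}\sum_{\mu=1}^{P}\big(f(\mathbf{x}^{\mu})-y^{\mu}\big)^{2} \;+\; \frac{\lambda}{2}\sum_{l=1}^{L}\lVert W_l\rVert_F^{2}\right)\right],
$$

where $\{(\mathbf{x}^{\mu}, y^{\mu})\}_{\mu=1}^{P}$ is the training set, the inverse temperature $\beta$ controls the stochasticity of learning, and $\lambda$ sets the strength of the weight regularization. In this picture, BPKR integrates out $W_L$ first, then $W_{L-1}$, and so on, renormalizing an effective kernel on the training inputs at each backward step.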