Paper Title

The role of optimization geometry in single neuron learning

Authors

Boffi, Nicholas M., Tu, Stephen, Slotine, Jean-Jacques E.

Abstract

Recent numerical experiments have demonstrated that the choice of optimization geometry used during training can impact generalization performance when learning expressive nonlinear model classes such as deep neural networks. These observations have important implications for modern deep learning but remain poorly understood due to the difficulty of the associated nonconvex optimization problem. Towards an understanding of this phenomenon, we analyze a family of pseudogradient methods for learning generalized linear models under the square loss: a simplified problem containing both nonlinearity in the model parameters and nonconvexity of the optimization, which admits a single neuron as a special case. We prove non-asymptotic bounds on the generalization error that sharply characterize how the interplay between the optimization geometry and the feature space geometry sets the out-of-sample performance of the learned model. Experimentally, selecting the optimization geometry as suggested by our theory leads to improved performance in generalized linear model estimation problems such as nonlinear and nonconvex variants of sparse vector recovery and low-rank matrix sensing.
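
The abstract does not spell out the update rule, but the family it describes can be illustrated with a minimal, hypothetical Python sketch: a GLMtron-style pseudogradient for a single neuron y ≈ σ(⟨w, x⟩) under the square loss, combined with a p-norm mirror-descent step whose potential ψ(w) = ½‖w‖_p² sets the optimization geometry. The function name, defaults, and the specific p-norm potential are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

def pseudograd_mirror_descent(X, y, sigma, eta=0.1, p=2.0, T=1000):
    """Hypothetical sketch: pseudogradient learning of a GLM y ~ sigma(<w, x>)
    under the square loss, with a p-norm mirror-descent geometry.

    The pseudogradient drops the sigma' factor from the true gradient
    (GLMtron-style), sidestepping the nonconvexity of the square loss in w.
    The potential psi(w) = 0.5 * ||w||_p^2 sets the geometry; p = 2 recovers
    plain pseudogradient descent. Names and defaults are illustrative.
    """
    n, d = X.shape
    q = p / (p - 1.0)                 # dual exponent, 1/p + 1/q = 1
    theta = np.zeros(d)               # dual (mirror) variable, nabla psi(w)
    w = np.zeros(d)
    for _ in range(T):
        preds = sigma(X @ w)
        g = X.T @ (preds - y) / n     # pseudogradient: sigma' factor dropped
        theta -= eta * g              # gradient step taken in the dual space
        # map back to the primal: w = nabla psi*(theta), psi* = 0.5 ||.||_q^2
        nrm = np.linalg.norm(theta, q)
        if nrm > 0:
            w = np.sign(theta) * np.abs(theta) ** (q - 1) * nrm ** (2 - q)
    return w
```

Setting p = 2 recovers ordinary pseudogradient descent, while taking p close to 1 (e.g., p = 1 + 1/ln d, a standard heuristic from the mirror-descent literature rather than a value taken from this paper) biases the geometry toward sparse targets, the regime relevant to the sparse vector recovery experiments mentioned in the abstract.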
