Paper Title
Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions
Paper Authors
Paper Abstract
We show that the representation cost of fully connected neural networks with homogeneous nonlinearities - which describes the implicit bias in function space of networks with $L_2$-regularization or with losses such as the cross-entropy - converges as the depth of the network goes to infinity to a notion of rank over nonlinear functions. We then inquire under which conditions the global minima of the loss recover the `true' rank of the data: we show that for too large depths the global minimum will be approximately rank 1 (underestimating the rank); we then argue that there is a range of depths which grows with the number of datapoints where the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries and show that autoencoders with optimal nonlinear rank are naturally denoising.
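For concreteness, here is a minimal LaTeX sketch of the central quantity in the abstract: the representation cost of a function $f$ and its large-depth limit. The notation $R(f;\Omega,L)$, the domain $\Omega$, the normalization by the depth $L$, and the symbol $\operatorname{Rank}(f;\Omega)$ are illustrative assumptions inferred from the abstract, not necessarily the paper's own definitions.

% Illustrative sketch; notation and the 1/L normalization are assumptions,
% not taken verbatim from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The representation cost of $f$ over a domain $\Omega$ with depth-$L$ networks is
\[
  R(f;\Omega,L) \;=\; \min_{\theta \,:\, f_\theta = f \text{ on } \Omega} \|\theta\|_2^2 ,
\]
the smallest squared $L_2$ parameter norm needed to represent $f$. The abstract's convergence statement then takes the schematic form
\[
  \lim_{L \to \infty} \frac{R(f;\Omega,L)}{L} \;=\; \operatorname{Rank}(f;\Omega)
\]
for a notion of rank over nonlinear functions, so that $L_2$-regularized training of very deep networks is implicitly biased toward functions of low nonlinear rank.
\end{document}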