Paper Title
Hardness of Learning Neural Networks with Natural Weights
Paper Authors
Paper Abstract
Neural networks are nowadays highly successful despite strong hardness results. The existing hardness results focus on the network architecture, and assume that the network's weights are arbitrary. A natural approach to resolving this discrepancy is to assume that the network's weights are "well-behaved" and possess some generic properties that may allow efficient learning. This approach is supported by the intuition that the weights in real-world networks are not arbitrary, but exhibit some "random-like" properties with respect to some "natural" distribution. We prove negative results in this regard, and show that for depth-$2$ networks and many "natural" weight distributions, such as the normal and the uniform distribution, most networks are hard to learn. Namely, there is no efficient learning algorithm that is provably successful for most weights and every input distribution. This implies that there is no generic property that holds with high probability in such random networks and allows efficient learning.
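To make the object of study concrete, here is a minimal sketch of a depth-$2$ network whose weights are drawn from a "natural" distribution, as described in the abstract. The ReLU activation, the layer widths, and the standard-normal weight distribution are illustrative assumptions; the abstract names the normal and uniform distributions but does not fix an activation or architecture size.

```python
# A sketch (not the paper's construction) of a random depth-2 network:
# weights drawn i.i.d. from a standard normal, a "natural" distribution.
import numpy as np

rng = np.random.default_rng(0)

d, k = 100, 50                 # input dimension and hidden width (hypothetical)
W = rng.normal(size=(k, d))    # hidden-layer weights ~ N(0, 1), i.i.d.
v = rng.normal(size=k)         # output-layer weights ~ N(0, 1), i.i.d.

def network(x: np.ndarray) -> float:
    """Depth-2 network x -> v^T ReLU(W x); ReLU is an assumed activation."""
    return float(v @ np.maximum(W @ x, 0.0))

# The hardness claim concerns learning such a function from examples:
# for most draws of (W, v), no efficient learner succeeds for every
# input distribution over x.
x = rng.normal(size=d)
print(network(x))
```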