Paper Title
Information bottleneck theory of high-dimensional regression: relevancy, efficiency and optimality
Paper Authors
Paper Abstract
Avoiding overfitting is a central challenge in machine learning, yet many large neural networks readily achieve zero training loss. This puzzling contradiction necessitates new approaches to the study of overfitting. Here we quantify overfitting via residual information, defined as the bits in fitted models that encode noise in training data. Information-efficient learning algorithms minimize residual information while maximizing the relevant bits, which are predictive of the unknown generative models. We solve this optimization to obtain the information content of optimal algorithms for a linear regression problem and compare it to that of randomized ridge regression. Our results demonstrate the fundamental trade-off between residual and relevant information and characterize the relative information efficiency of randomized regression with respect to optimal algorithms. Finally, using results from random matrix theory, we reveal the information complexity of learning a linear map in high dimensions and unveil information-theoretic analogs of double and multiple descent phenomena.
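For readers wanting to see how "relevant" and "residual" bits are typically formalized, here is a minimal sketch; the notation below is our assumption, not taken verbatim from the paper. Write Θ for the generative model's parameters, D for the training data, and W for the fitted model, and assume the standard Markov structure Θ → D → W (the learning algorithm sees only the data). Expanding I(W; D, Θ) two ways with the chain rule and using I(W; Θ | D) = 0 gives

$$
I(W; D) \;=\; \underbrace{I(W; \Theta)}_{\text{relevant bits}} \;+\; \underbrace{I(W; D \mid \Theta)}_{\text{residual bits}},
$$

so the optimization described in the abstract trades off maximizing the first term against minimizing the second, in the spirit of the information bottleneck.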
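The paper's treatment of randomized ridge regression and double descent is analytic; the following is only an illustrative numpy sketch, not the paper's method. It assumes "randomized ridge regression" means the ridge estimator with additive isotropic Gaussian noise on the weights (the injected noise makes the algorithm stochastic, so its information content is finite), and it illustrates the test-error peak of min-norm (ridgeless) regression near the interpolation threshold n = d. All dimensions and noise levels are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_ridge(X, y, lam, noise_std, rng):
    """Ridge estimator plus isotropic Gaussian weight noise (a stochastic algorithm).

    Assumed form of 'randomized ridge regression' for illustration only.
    """
    d = X.shape[1]
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return w_ridge + noise_std * rng.standard_normal(d)

# Synthetic linear-regression problem: y = X w_true + noise.
d = 50                                    # input dimension
w_true = rng.standard_normal(d) / np.sqrt(d)
sigma = 0.5                               # label-noise standard deviation

def test_mse(w):
    # For isotropic Gaussian inputs, E[(x.w - x.w_true)^2] = ||w - w_true||^2;
    # adding sigma^2 accounts for irreducible label noise.
    return float(np.sum((w - w_true) ** 2) + sigma ** 2)

# Sweep the sample size through the interpolation threshold n = d.
for n in [10, 25, 45, 50, 55, 100, 200]:
    X = rng.standard_normal((n, d))
    y = X @ w_true + sigma * rng.standard_normal(n)
    w_minnorm = np.linalg.pinv(X) @ y     # min-norm least squares (ridgeless limit)
    print(f"n={n:4d}  test MSE={test_mse(w_minnorm):.3f}")  # peaks near n = d
```

Running the sweep shows the classic double-descent shape: test error first falls, blows up near n = d where the min-norm interpolator is most sensitive to label noise, and falls again for n > d; a small ridge penalty or the weight noise in `randomized_ridge` tames the peak.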