Paper Title

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting

Paper Authors

Neil Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, Mikhail Belkin, Preetum Nakkiran

Paper Abstract

The practical success of overparameterized neural networks has motivated the recent scientific study of interpolating methods, which perfectly fit their training data. Certain interpolating methods, including neural networks, can fit noisy training data without catastrophically bad test performance, in defiance of standard intuitions from statistical learning theory. Aiming to explain this, a body of recent work has studied benign overfitting, a phenomenon where some interpolating methods approach Bayes optimality, even in the presence of noise. In this work we argue that while benign overfitting has been instructive and fruitful to study, many real interpolating methods like neural networks do not fit benignly: modest noise in the training set causes nonzero (but non-infinite) excess risk at test time, implying these models are neither benign nor catastrophic but rather fall in an intermediate regime. We call this intermediate regime tempered overfitting, and we initiate its systematic study. We first explore this phenomenon in the context of kernel (ridge) regression (KR) by obtaining conditions on the ridge parameter and kernel eigenspectrum under which KR exhibits each of the three behaviors. We find that kernels with power-law spectra, including Laplace kernels and ReLU neural tangent kernels, exhibit tempered overfitting. We then empirically study deep neural networks through the lens of our taxonomy, and find that those trained to interpolation are tempered, while those stopped early are benign. We hope our work leads to a more refined understanding of overfitting in modern learning.
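As an illustration of the kernel setting the abstract refers to, below is a minimal sketch (not the authors' code) of near-ridgeless kernel regression with a Laplace kernel fit to noisy labels, comparing test error against the Bayes error. The target function, bandwidth, ridge value, and sample sizes are all assumptions chosen for the demo; the point is simply how one would measure excess risk and observe tempered overfitting (error above the noise floor, but far from diverging).

```python
# Illustrative sketch only: Laplace-kernel "ridgeless" regression on noisy 1-D data.
# All names and parameters here are assumptions for the demo, not the paper's setup.
import numpy as np

def laplace_kernel(X, Z, bandwidth=1.0):
    """Laplace kernel k(x, z) = exp(-|x - z| / bandwidth) for 1-D inputs."""
    return np.exp(-np.abs(X[:, None] - Z[None, :]) / bandwidth)

def kernel_ridge_fit(X_train, y_train, ridge=1e-8, bandwidth=1.0):
    """Solve (K + ridge * I) alpha = y; ridge -> 0 gives a (near-)interpolating fit."""
    K = laplace_kernel(X_train, X_train, bandwidth)
    return np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)

def kernel_ridge_predict(X_test, X_train, alpha, bandwidth=1.0):
    return laplace_kernel(X_test, X_train, bandwidth) @ alpha

rng = np.random.default_rng(0)
n_train, noise_std = 500, 0.5
target = np.sin  # hypothetical ground truth; Bayes risk equals noise_std**2

X_train = rng.uniform(-3, 3, n_train)
y_train = target(X_train) + noise_std * rng.normal(size=n_train)
X_test = rng.uniform(-3, 3, 2000)
y_test = target(X_test) + noise_std * rng.normal(size=2000)

alpha = kernel_ridge_fit(X_train, y_train, ridge=1e-8)  # near-interpolating fit
pred = kernel_ridge_predict(X_test, X_train, alpha)

test_mse = np.mean((pred - y_test) ** 2)
excess_risk = test_mse - noise_std ** 2
print(f"test MSE = {test_mse:.3f}, excess risk over Bayes = {excess_risk:.3f}")
# Tempered overfitting would show up as an excess risk that is clearly positive
# but bounded, rather than ~0 (benign) or exploding with noise (catastrophic).
```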
