Paper Title

On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes

Authors

Elvis Dohmatob, Alberto Bietti

Abstract

Neural networks are known to be highly sensitive to adversarial examples. These may arise due to different factors, such as random initialization, or spurious correlations in the learning problem. To better understand these factors, we provide a precise study of the adversarial robustness in different scenarios, from initialization to the end of training in different regimes, as well as intermediate scenarios, where initialization still plays a role due to "lazy" training. We consider over-parameterized networks in high dimensions with quadratic targets and infinite samples. Our analysis allows us to identify new tradeoffs between approximation (as measured via test error) and robustness, whereby robustness can only get worse when test error improves, and vice versa. We also show how linearized lazy training regimes can worsen robustness, due to improperly scaled random initialization. Our theoretical results are illustrated with numerical experiments.
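To make the "lazy"/linearized regime mentioned in the abstract concrete, here is a minimal sketch (not from the paper itself) of a two-layer ReLU network and its first-order Taylor expansion around the random initialization, which is the linearization that lazy training effectively optimizes. The width, dimension, scaling, and all variable names below are illustrative assumptions.

```python
# Illustrative sketch of the lazy/linearized regime: a two-layer network
# is approximated by its linearization around the random init W0.
# All sizes and scalings here are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
d, m = 50, 2000  # input dimension, hidden width (over-parameterized: m >> d)

W0 = rng.standard_normal((m, d)) / np.sqrt(d)      # random first-layer init
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)   # fixed second layer, 1/sqrt(m) scaling

def f(x, W):
    """Two-layer ReLU network: f(x; W) = sum_j a_j * relu(w_j . x)."""
    return a @ np.maximum(W @ x, 0.0)

def f_lazy(x, W):
    """Linearization around W0: f(x; W0) + <grad_W f(x; W0), W - W0>.
    For ReLU, the gradient w.r.t. w_j is a_j * 1{w_j . x > 0} * x,
    so the activation pattern is frozen at initialization."""
    act = (W0 @ x > 0).astype(float)           # shape (m,)
    grad = (a * act)[:, None] * x[None, :]     # shape (m, d)
    return f(x, W0) + np.sum(grad * (W - W0))

x = rng.standard_normal(d) / np.sqrt(d)
W = W0 + 1e-3 * rng.standard_normal((m, d))    # small displacement, as in lazy training
print(f(x, W), f_lazy(x, W))                   # nearly identical in the lazy regime
```

In this regime the learned function inherits the random features induced by W0, which is how an improperly scaled initialization can degrade robustness even when parameters barely move.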
