Paper Title

Understanding Catastrophic Overfitting in Single-step Adversarial Training

Authors

Hoki Kim, Woojin Lee, Jaewook Lee

Abstract

Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed. This is a phenomenon in which, during single-step adversarial training, the robust accuracy against projected gradient descent (PGD) suddenly decreases to 0% after a few epochs, whereas the robust accuracy against fast gradient sign method (FGSM) increases to 100%. In this paper, we demonstrate that catastrophic overfitting is very closely related to the characteristic of single-step adversarial training which uses only adversarial examples with the maximum perturbation, and not all adversarial examples in the adversarial direction, which leads to decision boundary distortion and a highly curved loss surface. Based on this observation, we propose a simple method that not only prevents catastrophic overfitting, but also overrides the belief that it is difficult to prevent multi-step adversarial attacks with single-step adversarial training.
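
For context, the contrast the abstract draws between single-step FGSM and multi-step PGD can be made concrete with a minimal sketch. The PyTorch-style functions below are illustrative assumptions (the function names, parameters, and implementation are not the authors' code, and this is not the paper's proposed remedy): FGSM takes one sign-gradient step using the full perturbation budget eps, whereas PGD takes several smaller steps of size alpha and projects each iterate back into the eps-ball. These are the attacks against which the robust accuracies mentioned in the abstract are measured.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: one sign-gradient step of the maximum size eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    # Only the gradient sign is used, scaled by the full budget eps.
    return (x + eps * delta.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps, alpha, steps):
    """Multi-step PGD: repeated small steps projected back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball around x
        x_adv = x_adv.clamp(0, 1)                 # keep pixels in valid range
    return x_adv.detach()
```

A model that reaches near-100% accuracy against fgsm_attack while dropping to 0% against pgd_attack exhibits the catastrophic overfitting the abstract describes.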
