Paper Title

Asymptotic Behavior of Adversarial Training in Binary Classification

Paper Authors

Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis

Paper Abstract

It has been consistently reported that many machine learning models are susceptible to adversarial attacks, i.e., small additive adversarial perturbations applied to data points can cause misclassification. Adversarial training using empirical risk minimization is considered to be the state-of-the-art method for defense against adversarial attacks. Despite being successful in practice, several problems in understanding the generalization performance of adversarial training remain open. In this paper, we derive precise theoretical predictions for the performance of adversarial training in binary classification. We consider the high-dimensional regime where the dimension of the data grows with the size of the training dataset at a constant ratio. Our results provide exact asymptotics for the standard and adversarial test errors of the estimators obtained by adversarial training with $\ell_q$-norm bounded perturbations ($q \ge 1$) for both discriminative binary models and generative Gaussian-mixture models with correlated features. Furthermore, we use these sharp predictions to uncover several intriguing observations on the role of various parameters, including the over-parameterization ratio, the data model, and the attack budget, on the adversarial and standard errors.
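For context, the min-max objective that defines adversarial training with $\ell_q$-norm bounded perturbations can be written as below. This is a standard formulation consistent with the abstract's setup, sketched in illustrative notation rather than the paper's own (here $n$ is the number of training pairs $(x_i, y_i)$ with $y_i \in \{\pm 1\}$, $\varepsilon$ is the attack budget, and $\mathcal{L}$ is a non-increasing margin loss such as hinge or logistic); for linear classifiers $x \mapsto \theta^\top x$, the inner maximization admits a closed form:

$$\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \max_{\|\delta_i\|_q \le \varepsilon} \mathcal{L}\big(y_i\,\theta^\top (x_i + \delta_i)\big) \;=\; \min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \mathcal{L}\big(y_i\,\theta^\top x_i - \varepsilon\,\|\theta\|_p\big), \qquad \frac{1}{p} + \frac{1}{q} = 1,$$

where the equality follows from the dual-norm identity $\max_{\|\delta\|_q \le \varepsilon} (-y\,\theta^\top \delta) = \varepsilon \|\theta\|_p$ (Hölder's inequality). In this reduced form, the attack budget enters as a dual-norm penalty on the classifier's margin, which gives one way to see how $\varepsilon$ and the geometry of the perturbation set can shape both the standard and adversarial test errors studied in the paper.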
