Paper Title
Adversarially Robust Estimate and Risk Analysis in Linear Regression
Paper Authors
Paper Abstract
Adversarially robust learning aims to design algorithms that are robust to small adversarial perturbations of the input variables. Beyond existing studies of predictive performance on adversarial samples, our goal is to understand the statistical properties of adversarially robust estimates and to analyze adversarial risk in the setting of linear regression models. By establishing the statistical minimax rate of convergence of adversarially robust estimators, we emphasize the importance of incorporating model information, e.g., sparsity, into adversarially robust learning. Further, we reveal an explicit connection between adversarial and standard estimates, and propose a straightforward two-stage adversarial learning framework that facilitates utilizing model structure information to improve adversarial robustness. In theory, the consistency of the adversarially robust estimator is proven, and its Bahadur representation is developed for statistical inference purposes. The proposed estimator converges at a sharp rate in both the low-dimensional and sparse scenarios. Moreover, our theory confirms two phenomena in adversarially robust learning: adversarial robustness hurts generalization, and unlabeled data help improve generalization. Finally, we conduct numerical simulations to verify our theory.
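As a concrete illustration of the adversarial risk discussed above (a standard derivation; the notation is ours and not necessarily the paper's), consider linear regression with an $\ell_2$-bounded attacker of budget $\delta$. The inner maximization over the perturbation has a closed form, so the adversarial risk reduces to a norm-penalized squared loss:

% Adversarial risk for linear regression with an l2-bounded perturbation of radius delta.
% The inner maximum follows from the Cauchy-Schwarz inequality: max_{||Delta||_2 <= delta} |Delta^T theta| = delta ||theta||_2,
% with the worst-case Delta chosen so that -Delta^T theta has the same sign as the residual y - x^T theta.
\[
R_{\mathrm{adv}}(\theta)
  \;=\; \mathbb{E}\Big[\max_{\|\Delta\|_2 \le \delta}\big(y - (x+\Delta)^\top\theta\big)^2\Big]
  \;=\; \mathbb{E}\Big[\big(|y - x^\top\theta| + \delta\|\theta\|_2\big)^2\Big].
\]

The adversarially robust estimator minimizes an empirical version of $R_{\mathrm{adv}}$, whereas the standard estimator corresponds to the special case $\delta = 0$; the explicit connection between adversarial and standard estimates mentioned in the abstract relates these two minimizers.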