Paper Title

Efficient Robust Training via Backward Smoothing

Paper Authors

Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, Jingjing Liu

Paper Abstract

Adversarial training is so far the most effective strategy in defending against adversarial examples. However, it suffers from high computational costs due to the iterative adversarial attacks in each training step. Recent studies show that it is possible to achieve fast Adversarial Training by performing a single-step attack with random initialization. However, such an approach still lags behind state-of-the-art adversarial training algorithms on both stability and model robustness. In this work, we develop a new understanding of Fast Adversarial Training, by viewing random initialization as performing randomized smoothing for better optimization of the inner maximization problem. Following this new perspective, we also propose a new initialization strategy, backward smoothing, to further improve the stability and model robustness of single-step robust training methods. Experiments on multiple benchmarks demonstrate that our method achieves model robustness similar to that of the original TRADES method while using much less training time ($\sim$3x improvement with the same training schedule).
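As a concrete illustration of the "single-step attack with random initialization" that the abstract builds on, below is a minimal PyTorch sketch in the style of Fast Adversarial Training (FGSM with a random start). The function name fgsm_random_init_step and the eps/alpha values are illustrative assumptions (common CIFAR-10 defaults), not the paper's backward-smoothing method itself.

```python
import torch
import torch.nn.functional as F

def fgsm_random_init_step(model, x, y, eps=8/255, alpha=10/255):
    """Craft a single-step adversarial example with a random start.

    Sketch of FGSM-with-random-initialization (Fast Adversarial
    Training); eps/alpha are illustrative defaults, not necessarily
    the paper's settings.
    """
    # Random initialization: sample a start point uniformly in the
    # L_inf ball of radius eps around the clean input.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)

    # Single forward/backward pass to get the gradient of the loss
    # with respect to the perturbation.
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]

    # One signed-gradient step, then project back into the eps-ball
    # and the valid pixel range.
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0.0, 1.0)
```

In a training loop one would compute x_adv = fgsm_random_init_step(model, x, y) and back-propagate the classification loss on x_adv. The paper's contribution, per the abstract, is to reinterpret this random start as randomized smoothing of the inner maximization and to replace it with a backward-smoothing initialization.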
