Paper Title

Towards an Adversarially Robust Normalization Approach

Paper Authors

Muhammad Awais, Fahad Shamshad, Sung-Ho Bae

Paper Abstract

Batch Normalization (BatchNorm) is effective for improving the performance and accelerating the training of deep neural networks. However, it has also been shown to be a cause of adversarial vulnerability, i.e., networks without it are more robust to adversarial attacks. In this paper, we investigate how BatchNorm causes this vulnerability and propose a new normalization that is robust to adversarial attacks. We first observe that adversarial images tend to shift the distribution of BatchNorm inputs, and this shift makes the train-time estimated population statistics inaccurate. We hypothesize that these inaccurate statistics make models with BatchNorm more vulnerable to adversarial attacks. We verify our hypothesis by replacing the train-time estimated statistics with statistics computed from the inference-time batch, and find that the adversarial vulnerability of BatchNorm disappears when these statistics are used. However, without estimated statistics, BatchNorm cannot be used in practice unless large batches of inputs are available. To mitigate this, we propose Robust Normalization (RobustNorm), an adversarially robust version of BatchNorm. We show experimentally that models trained with RobustNorm perform better in adversarial settings while retaining all the benefits of BatchNorm. Code is available at \url{https://github.com/awaisrauf/RobustNorm}.
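The abstract's central diagnostic step, replacing BatchNorm's train-time running statistics with statistics computed from the current inference-time batch, can be sketched in a few lines. The sketch below is a hypothetical illustration in PyTorch, not the paper's released implementation; the class name `BatchStatNorm` and all parameter choices are assumptions.

```python
import torch
import torch.nn as nn


class BatchStatNorm(nn.Module):
    """Hypothetical sketch: normalize with statistics computed from the
    current batch, even at inference time, instead of BatchNorm's
    train-time running estimates."""

    def __init__(self, num_features: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # Learnable affine parameters, as in standard BatchNorm.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-channel mean and variance over the batch and spatial
        # dimensions of an NCHW tensor; no running statistics are kept,
        # so a reasonably large inference batch is required.
        mean = x.mean(dim=(0, 2, 3), keepdim=True)
        var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return x_hat * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)
```

In stock PyTorch, a similar effect can be obtained with `nn.BatchNorm2d(num_features, track_running_stats=False)`, which normalizes with current-batch statistics in both training and evaluation modes; as the abstract notes, this recovers robustness but is only practical when large inference batches are available.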
