Title
RANDOM MASK: Towards Robust Convolutional Neural Networks
Authors
Abstract
Robustness of neural networks has recently been highlighted by adversarial examples, i.e., inputs with well-designed perturbations that are imperceptible to humans but can cause the network to give incorrect outputs. In this paper, we design a new CNN architecture that is robust by itself. We introduce a simple but powerful technique, Random Mask, to modify existing CNN structures. We show that a CNN with Random Mask achieves state-of-the-art performance against black-box adversarial attacks without any adversarial training. We next investigate the adversarial examples that 'fool' a CNN with Random Mask. Surprisingly, we find that these adversarial examples often 'fool' humans as well. This raises fundamental questions about how to properly define adversarial examples and robustness.
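To make the core idea concrete, here is a minimal NumPy sketch of what masking a convolutional feature map with a fixed random binary mask could look like. The function names and the `drop_ratio` parameter are illustrative assumptions, not the paper's API; the key property sketched is that the mask is sampled once and then frozen, so the same positions stay masked on every forward pass (unlike dropout, which resamples each time).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_random_mask(shape, drop_ratio=0.5):
    """Sample a fixed binary mask for a feature map.

    `drop_ratio` (a hypothetical parameter) is the fraction of positions
    zeroed out. The mask is created once at network construction time and
    kept fixed thereafter.
    """
    return (rng.random(shape) >= drop_ratio).astype(np.float32)

def masked_conv_output(feature_map, mask):
    """Apply the fixed mask elementwise to a conv layer's output."""
    return feature_map * mask

# Example: a (channels, height, width) feature map of ones.
fm = np.ones((4, 8, 8), dtype=np.float32)
mask = make_random_mask(fm.shape, drop_ratio=0.5)
out = masked_conv_output(fm, mask)
# Roughly half of the activations are zeroed, and because `mask` is
# fixed, the same positions are zeroed on every forward pass.
```

Since the masked positions never recover, the network must learn feature detectors that work around them, which is the structural modification the abstract refers to.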