Paper Title
Advocating for Multiple Defense Strategies against Adversarial Examples
Paper Authors
Paper Abstract
It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance against $\ell_2$ adversarial examples, and vice versa. In this paper, we conduct a geometrical analysis that validates this observation. We then provide a number of empirical insights to illustrate the effect of this phenomenon in practice. We also review some of the existing defense mechanisms that attempt to defend against multiple attacks by mixing defense strategies. Based on our numerical experiments, we discuss the relevance of this approach and state open questions for the adversarial examples community.
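As a concrete illustration of the two threat models contrasted in the abstract, below is a minimal sketch (not taken from the paper) of a single projected-gradient (PGD) attack step under an $\ell_\infty$ versus an $\ell_2$ perturbation budget. The names `model`, `eps`, and `alpha`, and the assumption of 4-D image batches with pixel values in $[0,1]$, are illustrative assumptions.

```python
# Illustrative sketch: one PGD step under an l_inf or l_2 budget.
# `model`, `eps`, `alpha`, and the 4-D image-batch layout are assumptions,
# not details taken from the paper.
import torch
import torch.nn.functional as F

def pgd_step(model, x, y, x_adv, eps, alpha, norm):
    """Take one projected-gradient ascent step on the cross-entropy loss."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    if norm == "linf":
        # Steepest ascent w.r.t. the l_inf norm uses the gradient sign;
        # projection clips each coordinate back into the eps-box around x.
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
    elif norm == "l2":
        # Steepest ascent w.r.t. the l_2 norm follows the normalized gradient;
        # projection rescales the perturbation onto the eps-ball around x.
        g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        x_adv = x_adv + alpha * g
        delta = x_adv - x
        norms = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
        delta = delta * (eps / norms).clamp(max=1.0)
        x_adv = x + delta
    return x_adv.clamp(0.0, 1.0).detach()
```

Note how only the step direction and the projection differ between the two norms; a defense tuned to one geometry (e.g. the eps-box) need not cover the other (the eps-ball), which is the mismatch the abstract describes.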