Paper Title

Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles

Authors

Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang

Abstract

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing works have mostly focused on either digital adversarial examples created via small and imperceptible perturbations, or physical-world adversarial examples created with large and less realistic distortions that are easily identified by human observers. In this paper, we propose a novel approach, called Adversarial Camouflage (\emph{AdvCam}), to craft and camouflage physical-world adversarial examples into natural styles that appear legitimate to human observers. Specifically, \emph{AdvCam} transfers large adversarial perturbations into customized styles, which are then "hidden" on-target object or off-target background. Experimental evaluation shows that, in both digital and physical-world scenarios, adversarial examples crafted by \emph{AdvCam} are well camouflaged and highly stealthy, while remaining effective in fooling state-of-the-art DNN image classifiers. Hence, \emph{AdvCam} is a flexible approach that can help craft stealthy attacks to evaluate the robustness of DNNs. \emph{AdvCam} can also be used to protect private information from being detected by deep learning systems.
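The abstract describes camouflaging an attack by steering a large perturbation toward a chosen visual style. The sketch below illustrates one plausible way to instantiate that idea: optimize the image pixels directly under a combination of an adversarial classification loss and a Gram-matrix style loss. This is a minimal illustration, not the authors' released AdvCam code; the function names, the choice of VGG-19 style layers, the weight `lam`, and the omission of input normalization and the paper's content/smoothness terms are all assumptions made for brevity.

```python
# Minimal sketch (assumed formulation, not the official AdvCam implementation):
# jointly minimize an adversarial loss on a classifier and a style loss toward
# a reference style image, optimizing the image pixels themselves.

import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Classifier to attack and a VGG feature extractor for style statistics.
classifier = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()

def gram_matrix(feat):
    # feat: (1, C, H, W) -> (C, C) normalized Gram matrix, a standard style statistic.
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def style_features(x, layers=(0, 5, 10, 19, 28)):
    # Collect VGG feature maps at a few layers commonly used for style transfer.
    feats, out = [], x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in layers:
            feats.append(out)
    return feats

def styled_adversarial_attack(content_img, style_img, target_label, steps=300, lam=1e4):
    """Optimize pixels so the image (a) adopts the reference style and
    (b) is classified as `target_label`. A sketch, not the paper's exact loss."""
    x = content_img.clone().requires_grad_(True)
    style_grams = [gram_matrix(f).detach() for f in style_features(style_img)]
    opt = torch.optim.Adam([x], lr=0.01)
    target = torch.tensor([target_label], device=device)

    for _ in range(steps):
        opt.zero_grad()
        # Adversarial loss: push the classifier toward the target class.
        adv_loss = F.cross_entropy(classifier(x), target)
        # Style loss: match Gram statistics of the reference style image.
        sty_loss = sum(F.mse_loss(gram_matrix(f), g)
                       for f, g in zip(style_features(x), style_grams))
        (adv_loss + lam * sty_loss).backward()
        opt.step()
        x.data.clamp_(0, 1)  # keep pixels in a valid [0, 1] range
    return x.detach()
```

In this reading, the "customized style" of the abstract corresponds to the reference `style_img`, and restricting the style loss to a masked region would correspond to hiding the perturbation on the target object versus the background.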
