Paper Title
Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems
Paper Authors
Paper Abstract
Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs) and have received a lot of attention recently. However, the majority of research on AEs is in the digital domain, where the adversarial patches are static; this differs greatly from many real-world DNN applications such as Traffic Sign Recognition (TSR) systems in autonomous vehicles. In TSR systems, object detectors use DNNs to process streaming video in real time. From the perspective of the object detector, the position of the traffic sign and the quality of the video are continuously changing, rendering digital AEs ineffective in the physical world. In this paper, we propose a systematic pipeline to generate robust physical AEs against real-world object detectors. Robustness is achieved in three ways. First, we simulate in-vehicle cameras by extending the distribution of image transformations with a blur transformation and a resolution transformation. Second, we design single and multiple bounding box filters to improve the efficiency of perturbation training. Third, we consider four representative attack vectors, namely the Hiding Attack (HA), the Appearance Attack (AA), the Non-Target Attack (NTA), and the Target Attack (TA). We perform a comprehensive set of experiments under a variety of environmental conditions, considering illumination in sunny and cloudy weather as well as at night. The experimental results show that the physical AEs generated by our pipeline are effective and robust when attacking the YOLO v5-based TSR system. The attacks have good transferability and can deceive other state-of-the-art object detectors. We launched the HA and the NTA on a brand-new 2021-model vehicle; both attacks successfully fooled the TSR system, which could be a life-threatening case for autonomous vehicles. Finally, we discuss three defense mechanisms based on image preprocessing, AE detection, and model enhancement.
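To make the camera-simulation step concrete, the sketch below shows one way the transformation distribution could be extended with a blur transformation and a resolution transformation during patch training. This is a minimal illustration assuming PyTorch/torchvision; the kernel sizes and scale range are assumptions for demonstration, not values from the paper.

```python
# Minimal sketch: sample a camera-like transformation of a CHW image tensor.
# Kernel sizes and the scale range are illustrative assumptions.
import random
import torch
import torchvision.transforms.functional as TF

def sample_camera_transform(img: torch.Tensor) -> torch.Tensor:
    """Apply a random blur + resolution transformation, mimicking how an
    in-vehicle camera sees a traffic sign in streaming video."""
    # Blur transformation: approximate motion/defocus blur of a moving camera.
    kernel = random.choice([3, 5, 7])  # assumed kernel sizes
    img = TF.gaussian_blur(img, kernel_size=kernel)

    # Resolution transformation: downsample then upsample to mimic the low
    # effective resolution of a distant or fast-approaching sign.
    _, h, w = img.shape
    scale = random.uniform(0.25, 1.0)  # assumed scale range
    low = TF.resize(img, [max(1, int(h * scale)), max(1, int(w * scale))])
    return TF.resize(low, [h, w])
```

Sampling such a transformation for each training iteration, in the spirit of expectation-over-transformation, encourages the perturbation to survive the changing viewpoints and image quality the abstract describes.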
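The bounding-box filtering idea can be sketched similarly. The snippet below is a hypothetical reconstruction, not the paper's implementation: it assumes YOLO-style prediction rows of the form [x1, y1, x2, y2, objectness, class scores...] and an IoU threshold of 0.5, and it shows an HA-style loss that suppresses only the boxes covering the attacked sign.

```python
# Hypothetical single/multiple bounding-box filter for perturbation training.
# Assumes YOLO-style rows [x1, y1, x2, y2, obj, class scores...]; the IoU
# threshold and the hiding-attack objective are illustrative assumptions.
import torch
from torchvision.ops import box_iou

def filtered_hiding_loss(preds: torch.Tensor, sign_box: torch.Tensor,
                         iou_thresh: float = 0.5,
                         single: bool = True) -> torch.Tensor:
    """Keep only predicted boxes that overlap the attacked sign, then
    penalize their objectness so gradients push the patch to hide it."""
    ious = box_iou(preds[:, :4], sign_box.unsqueeze(0)).squeeze(1)  # (N,)
    keep = ious > iou_thresh  # the bounding-box filter
    if not keep.any():
        return preds.new_zeros(())  # sign already hidden in this frame
    obj = preds[keep, 4]
    # Single-box filter: suppress only the strongest remaining detection;
    # multiple-box filter: suppress all overlapping detections at once.
    return obj.max() if single else obj.sum()
```

Restricting the loss to the filtered boxes keeps the gradient focused on detections of the target sign rather than on every candidate box in the frame, which is one plausible reading of how such filters improve training efficiency.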