Paper Title

Adversarial Robustness of Deep Sensor Fusion Models

Authors

Shaojie Wang, Tong Wu, Ayan Chakrabarti, Yevgeniy Vorobeychik

Abstract

We experimentally study the robustness of deep camera-LiDAR fusion architectures for 2D object detection in autonomous driving. First, we find that the fusion model is usually both more accurate, and more robust against single-source attacks than single-sensor deep neural networks. Furthermore, we show that without adversarial training, early fusion is more robust than late fusion, whereas the two perform similarly after adversarial training. However, we note that single-channel adversarial training of deep fusion is often detrimental even to robustness. Moreover, we observe cross-channel externalities, where single-channel adversarial training reduces robustness to attacks on the other channel. Additionally, we observe that the choice of adversarial model in adversarial training is critical: using attacks restricted to cars' bounding boxes is more effective in adversarial training and exhibits less significant cross-channel externalities. Finally, we find that joint-channel adversarial training helps mitigate many of the issues above, but does not significantly boost adversarial robustness.
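
The abstract refers to single-channel attacks, optionally restricted to cars' bounding boxes. The sketch below is a minimal, hypothetical illustration of such an attack (projected gradient descent on the camera channel only, with the perturbation masked to the given boxes). The fusion model interface (`model(image, lidar)`, `model.loss`) and data layout are assumptions for illustration, not the authors' implementation.

```python
import torch

def bbox_mask(image, boxes):
    """Binary mask that is 1 only inside the given (x1, y1, x2, y2) pixel boxes."""
    mask = torch.zeros_like(image)
    for x1, y1, x2, y2 in boxes:
        mask[..., y1:y2, x1:x2] = 1.0
    return mask

def pgd_single_channel(model, image, lidar, target, boxes,
                       eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD attack on the camera channel only; the LiDAR channel is left untouched.

    `model` and `model.loss` are hypothetical placeholders for a camera-LiDAR
    fusion detector and its training loss.
    """
    mask = bbox_mask(image, boxes)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = model.loss(model(image + delta, lidar), target)
        loss.backward()
        with torch.no_grad():
            delta.add_(alpha * delta.grad.sign())  # gradient ascent step
            delta.clamp_(-eps, eps)                # stay in the L-infinity ball
            delta.mul_(mask)                       # keep perturbation inside the boxes
        delta.grad.zero_()
    return (image + delta).detach()
```

A joint-channel variant would maintain an analogous perturbation on the LiDAR input and update both in each step; single-channel adversarial training simply replaces clean camera inputs with the output of an attack like this one during training.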
