Paper Title

Adversarial Patch Attacks on Monocular Depth Estimation Networks

Authors

Koichiro Yamanaka, Ryutaroh Matsumoto, Keita Takahashi, Toshiaki Fujii

Abstract

Thanks to the excellent learning capability of deep convolutional neural networks (CNNs), monocular depth estimation using CNNs has achieved great success in recent years. However, depth estimation from a monocular image alone is essentially an ill-posed problem, and thus, it seems that this approach would have inherent vulnerabilities. To reveal this limitation, we propose a method of adversarial patch attack on monocular depth estimation. More specifically, we generate artificial patterns (adversarial patches) that can fool the target methods into estimating an incorrect depth for the regions where the patterns are placed. Our method can be implemented in the real world by physically placing the printed patterns in real scenes. We also analyze the behavior of monocular depth estimation under attacks by visualizing the activation levels of the intermediate layers and the regions potentially affected by the adversarial attack.
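The core idea of the abstract, optimizing a patch's pixels so that the model's depth prediction in the patch region moves toward an attacker-chosen (incorrect) value, can be sketched as follows. This is a minimal toy illustration, not the paper's method: the linear `depth` function stands in for a real depth-estimation CNN, the scalar target and the 3x3 patch region are arbitrary choices, and the gradient is computed analytically for this toy model rather than by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8                      # toy single-channel "image" size
w = rng.normal(size=(H, W))    # weights of a toy linear "depth model"

def depth(img):
    # Toy scalar depth prediction: linear in the pixels.
    # A real attack would query a depth-estimation CNN here.
    return float((w * img).sum())

img = rng.uniform(size=(H, W))           # benign scene image, pixels in [0, 1]
mask = np.zeros((H, W), dtype=bool)
mask[2:5, 2:5] = True                    # region where the patch is placed

target = depth(img) + 10.0               # push the prediction toward "farther"
patch = img[mask].copy()                 # patch pixels to be optimized

lr = 0.1
for _ in range(200):
    adv = img.copy()
    adv[mask] = patch
    # Gradient of the squared error (depth(adv) - target)^2 w.r.t. the
    # patch pixels; for this linear toy model it is simply the masked weights.
    grad = 2.0 * (depth(adv) - target) * w[mask]
    # Clip to [0, 1] so the patch stays a printable image.
    patch = np.clip(patch - lr * grad, 0.0, 1.0)

adv = img.copy()
adv[mask] = patch                        # adversarial image with patch applied
```

After optimization the patched image's predicted depth has moved toward the target even though the patch covers only a small region, which mirrors the paper's setting of localized, physically placeable perturbations (a real implementation would additionally enforce printability and robustness to viewpoint changes).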
