Paper Title

Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect

Paper Authors

Athena Sayles, Ashish Hooda, Mohit Gupta, Rahul Chatterjee, Earlence Fernandes

Paper Abstract

Physical adversarial examples for camera-based computer vision have so far been achieved through visible artifacts -- a sticker on a Stop sign, colorful borders around eyeglasses or a 3D printed object with a colorful texture. An implicit assumption here is that the perturbations must be visible so that a camera can sense them. By contrast, we contribute a procedure to generate, for the first time, physical adversarial examples that are invisible to human eyes. Rather than modifying the victim object with visible artifacts, we modify light that illuminates the object. We demonstrate how an attacker can craft a modulated light signal that adversarially illuminates a scene and causes targeted misclassifications on a state-of-the-art ImageNet deep learning model. Concretely, we exploit the radiometric rolling shutter effect in commodity cameras to create precise striping patterns that appear on images. To human eyes, it appears like the object is illuminated, but the camera creates an image with stripes that will cause ML models to output the attacker-desired classification. We conduct a range of simulation and physical experiments with LEDs, demonstrating targeted attack rates up to 84%.
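The mechanism the abstract describes can be illustrated with a small simulation. The following is a minimal sketch, not the authors' implementation: it assumes a simple rolling shutter camera model in which each sensor row integrates a time-varying light signal over a shifted exposure window. The function `rolling_shutter_stripes`, the square-wave `led_signal`, and all timing constants (`row_readout_time`, `exposure_time`, the 1 kHz flicker frequency) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rolling_shutter_stripes(image, signal, row_readout_time=15e-6,
                            exposure_time=1e-3):
    """Expose each image row over a shifted time window under `signal`.

    image:  H x W x 3 float array in [0, 1], the scene under steady light.
    signal: callable t -> relative illumination intensity at time t (seconds).
    Row i begins exposing at i * row_readout_time; because rows integrate
    the light over different windows, a fast-flickering source leaves
    horizontal stripes on the captured frame.
    """
    h = image.shape[0]
    ts = np.linspace(0.0, exposure_time, 32)  # samples within one exposure
    row_gain = np.array([
        np.mean([signal(i * row_readout_time + t) for t in ts])
        for i in range(h)
    ])
    return np.clip(image * row_gain[:, None, None], 0.0, 1.0)

def led_signal(t, freq=1000.0, duty=0.5, low=0.6, high=1.4):
    """Square-wave LED flicker: far above human flicker-fusion rates,
    so the scene looks steadily lit to the eye."""
    return high if (t * freq) % 1.0 < duty else low

# At a ~15 us row readout, a 1 kHz flicker completes one period roughly
# every 67 rows, so the sensor records visible horizontal stripes.
scene = np.full((224, 224, 3), 0.5)  # stand-in for a captured frame
frame = rolling_shutter_stripes(scene, led_signal)
```

Under this model, the attack the abstract describes amounts to searching over the signal's parameters (e.g., frequency, duty cycle, and intensity levels) until the resulting striped frame drives the classifier to the attacker's target label, while the flicker itself stays imperceptible to human observers.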
