Paper Title


Imperceptible and Robust Backdoor Attack in 3D Point Cloud

Paper Authors

Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shu-Tao Xia

Paper Abstract


With the thriving of deep learning in processing point cloud data, recent works have shown that backdoor attacks pose a severe security threat to 3D vision applications. The attacker injects a backdoor into the 3D model by poisoning a few training samples with a trigger, such that the backdoored model performs well on clean samples but behaves maliciously when the trigger pattern appears. Existing attacks often insert additional points into the point cloud as the trigger, or apply a linear transformation (e.g., rotation) to construct the poisoned point cloud. However, the effects of these poisoned samples are likely to be weakened or even eliminated by pre-processing techniques commonly applied to 3D point clouds, e.g., outlier removal or rotation augmentation. In this paper, we propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge. We utilize a nonlinear, local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations. Since WLT involves several hyper-parameters as well as randomness, it is difficult to produce two similar transformations. Consequently, poisoned samples with unique transformations are likely to be resistant to the aforementioned pre-processing techniques. Besides, owing to the controllability and smoothness of the distortion caused by a fixed WLT, the generated poisoned samples are also imperceptible to human inspection. Extensive experiments on three benchmark datasets and four models show that IRBA achieves an attack success rate (ASR) above 80% in most cases even under pre-processing techniques, which is significantly higher than previous state-of-the-art attacks.
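The core idea behind WLT, a smooth, nonlinear deformation obtained by blending small per-anchor transformations with distance-based weights, can be sketched as follows. This is a minimal illustration of the general mechanism only, not the paper's exact formulation: the function name, the Gaussian weighting scheme, and all parameter values (number of anchors, rotation range, bandwidth) are assumptions made for the example.

```python
import numpy as np

def weighted_local_transform(points, num_anchors=16, max_rot_deg=5.0,
                             sigma=0.5, seed=0):
    """Sketch of a weighted local transformation on an (N, 3) point cloud.

    Each randomly chosen anchor point carries its own small random rotation;
    every point in the cloud is moved by a distance-weighted blend of the
    anchors' local rotations, giving a smooth, nonlinear distortion.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    anchors = points[rng.choice(n, size=num_anchors, replace=False)]  # (K, 3)

    # One small random rotation matrix per anchor (hypothetical parameterization).
    rots = []
    for _ in range(num_anchors):
        ax, ay, az = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg, size=3))
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(ax), -np.sin(ax)],
                       [0, np.sin(ax),  np.cos(ax)]])
        Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                       [0, 1, 0],
                       [-np.sin(ay), 0, np.cos(ay)]])
        Rz = np.array([[np.cos(az), -np.sin(az), 0],
                       [np.sin(az),  np.cos(az), 0],
                       [0, 0, 1]])
        rots.append(Rz @ Ry @ Rx)
    rots = np.stack(rots)  # (K, 3, 3)

    # Gaussian distance weights from every point to every anchor, normalized
    # per point so nearby anchors dominate the blend: shape (N, K).
    d2 = ((points[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w = w / (w.sum(axis=1, keepdims=True) + 1e-12)

    # Position of each point under each anchor's local rotation:
    # R_k @ (p - a_k) + a_k, shape (N, K, 3).
    diff = points[:, None, :] - anchors[None, :, :]
    local = np.einsum('kij,nkj->nki', rots, diff) + anchors[None, :, :]

    # Weighted blend over anchors -> final deformed cloud, shape (N, 3).
    return (w[..., None] * local).sum(axis=1)

# Usage: deform a random unit-cube cloud; the output keeps the overall shape
# but is smoothly, nonlinearly warped.
pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(100, 3))
out = weighted_local_transform(pts)
```

Because the anchor choice and per-anchor rotations are freshly sampled for each poisoned sample (several hyper-parameters plus randomness), no two transformations are alike, which is the property the abstract credits for resistance to outlier removal and rotation augmentation.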
