Paper Title

Geometric Adversarial Attacks and Defenses on 3D Point Clouds

Authors

Itai Lang, Uriel Kotlicki, Shai Avidan

Abstract


Deep neural networks are prone to adversarial examples that maliciously alter the network's outcome. Due to the increasing popularity of 3D sensors in safety-critical systems and the vast deployment of deep learning models for 3D point sets, there is a growing interest in adversarial attacks and defenses for such models. So far, the research has focused on the semantic level, namely, deep point cloud classifiers. However, point clouds are also widely used in a geometric-related form that includes encoding and reconstructing the geometry. In this work, we are the first to consider the problem of adversarial examples at a geometric level. In this setting, the question is how to craft a small change to a clean source point cloud that leads, after passing through an autoencoder model, to the reconstruction of a different target shape. Our attack is in sharp contrast to existing semantic attacks on 3D point clouds. While such works aim to modify the predicted label by a classifier, we alter the entire reconstructed geometry. Additionally, we demonstrate the robustness of our attack in the case of defense, where we show that remnant characteristics of the target shape are still present at the output after applying the defense to the adversarial input. Our code is publicly available at https://github.com/itailang/geometric_adv.
