Paper Title
LPF-Defense: 3D Adversarial Defense based on Frequency Analysis
Authors
Abstract
Although 3D point cloud classification has recently been widely deployed in different application scenarios, it is still highly vulnerable to adversarial attacks, which increases the importance of robust training of 3D models against such attacks. Based on our analysis of the performance of existing adversarial attacks, more adversarial perturbations are found in the mid- and high-frequency components of the input data. Therefore, by suppressing the high-frequency content in the training phase, the models' robustness against adversarial examples is improved. Experiments show that the proposed defense method decreases the success rate of six attacks on the PointNet, PointNet++, and DGCNN models. In particular, compared with state-of-the-art methods, the average classification accuracy improves by 3.8% under the Drop100 attack and by 4.26% under the Drop200 attack. The method also improves the models' accuracy on the original dataset compared to other available methods.
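The abstract describes the defense only at a high level (suppressing the high-frequency content of the input point clouds during training) and does not specify which transform is used. The sketch below illustrates one way such a low-pass filter could be realized, assuming a graph-Fourier basis built from a k-nearest-neighbour graph over the points; the function name low_pass_filter_point_cloud and the parameters k and keep_ratio are hypothetical choices for this example, not the authors' implementation.

```python
# Minimal, illustrative sketch: low-pass filtering a point cloud via a
# graph-Fourier basis. This is a stand-in for the unspecified transform in
# the abstract, not the paper's exact method.
import numpy as np

def low_pass_filter_point_cloud(points, k=10, keep_ratio=0.3):
    """Suppress high-frequency components of an (N, 3) point cloud.

    points:     (N, 3) array of xyz coordinates.
    k:          number of nearest neighbours used to build the graph.
    keep_ratio: fraction of the lowest graph frequencies to retain.
    """
    n = points.shape[0]

    # Pairwise squared distances and a symmetric k-NN adjacency matrix.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]      # skip self (column 0)
    adj = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    adj[rows, idx.ravel()] = 1.0
    adj = np.maximum(adj, adj.T)                  # symmetrize

    # Combinatorial graph Laplacian and its eigenbasis (graph Fourier basis).
    lap = np.diag(adj.sum(axis=1)) - adj
    eigvals, eigvecs = np.linalg.eigh(lap)        # eigenvalues in ascending order

    # Keep only the lowest-frequency coefficients and reconstruct coordinates.
    n_keep = max(1, int(keep_ratio * n))
    coeffs = eigvecs.T @ points                   # (N, 3) spectral coefficients
    coeffs[n_keep:] = 0.0                         # zero out high frequencies
    return eigvecs @ coeffs

# Example: smooth a random 1024-point cloud before feeding it to a classifier.
if __name__ == "__main__":
    cloud = np.random.randn(1024, 3)
    smoothed = low_pass_filter_point_cloud(cloud, k=10, keep_ratio=0.3)
    print(smoothed.shape)  # (1024, 3)
```

In a training pipeline of the kind the abstract suggests, the filtered clouds would replace the originals as inputs, so the classifier does not learn to rely on easily perturbed high-frequency detail.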