Paper Title

Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias

Authors

Wu, Shangxi, He, Qiuyang, Zhang, Yi, Sang, Jitao

Abstract

Backdoor attacks are a new AI security risk that has emerged in recent years. Drawing on prior research on adversarial attacks, we argue that backdoor attacks have the potential to tap into the model learning process and improve model performance. Based on the Clean Accuracy Drop (CAD) observed in backdoor attacks, we find that CAD arises from a pseudo-deletion effect on the data. We provide a preliminary explanation of this phenomenon from the perspective of model classification boundaries, and observe that pseudo-deletion has advantages over direct deletion in the data debiasing problem. Based on these findings, we propose the Debiasing Backdoor Attack (DBA). It achieves SOTA on the debiasing task and supports broader application scenarios than undersampling.
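To make the contrast concrete, the following is a minimal sketch, not the paper's actual DBA procedure, of the difference between direct deletion (undersampling, which drops bias-carrying samples outright) and pseudo-deletion (keeping every sample but stamping a fixed backdoor-style trigger onto the biased ones so the model can shunt them through a shortcut instead of fitting their bias). The dataset, trigger pattern, and choice of "biased" indices here are all hypothetical.

```python
import numpy as np

def undersample(X, y, biased_idx):
    # Direct deletion: drop the bias-carrying samples entirely,
    # shrinking the training set.
    mask = np.ones(len(X), dtype=bool)
    mask[biased_idx] = False
    return X[mask], y[mask]

def pseudo_delete(X, y, biased_idx, trigger_value=1.0, trigger_dims=2):
    # Pseudo-deletion (assumed sketch): keep every sample, but overwrite
    # a few fixed feature dimensions of the biased ones with a constant
    # trigger pattern. A model trained on this data can associate the
    # trigger with those samples, weakening their influence on the clean
    # decision boundary without reducing dataset size.
    Xp = X.copy()
    Xp[biased_idx, :trigger_dims] = trigger_value
    return Xp, y

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # toy feature matrix
y = rng.integers(0, 2, size=100)       # toy binary labels
biased = np.arange(10)                 # hypothetical bias-carrying samples

Xu, yu = undersample(X, y, biased)
Xp, yp = pseudo_delete(X, y, biased)
print(len(Xu), len(Xp))  # 90 100: pseudo-deletion preserves dataset size
```

The key property, as the abstract describes, is that pseudo-deletion retains the full dataset (here, all 100 samples and their labels) while undersampling discards information, which is why the former admits broader application scenarios.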
