Paper Title
RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer
Paper Authors
Paper Abstract
Face recognition has achieved considerable progress in recent years thanks to the development of deep neural networks, but it has recently been discovered that deep neural networks are vulnerable to adversarial examples. This means that face recognition models or systems based on deep neural networks are also susceptible to adversarial examples. However, existing methods for attacking face recognition models or systems with adversarial examples are effective for white-box attacks but not for black-box impersonation attacks, physical attacks, or convenient attacks, particularly against commercial face recognition systems. In this paper, we propose a new method to attack face recognition models or systems called RSTAM, which enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer. First, RSTAM enhances the transferability of the adversarial masks through our proposed random similarity transformation strategy. Furthermore, we propose a random meta-optimization strategy for ensembling several pre-trained face models to generate more general adversarial masks. Finally, we conduct experiments on the CelebA-HQ, LFW, Makeup Transfer (MT), and CASIA-FaceV5 datasets. The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft. Extensive experiments show that RSTAM can effectively perform black-box impersonation attacks on face recognition models or systems.
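The random similarity transformation strategy mentioned in the abstract suggests warping the adversarial mask with a random rotation, uniform scale, and translation at each attack iteration, so that the optimized mask remains effective across small changes in pose and alignment. Below is a minimal sketch of such a transformation; the parameter ranges and the OpenCV-based implementation are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
import cv2  # OpenCV, used here for the affine warp

def random_similarity_transform(mask, max_angle=10.0, scale_range=(0.9, 1.1),
                                max_shift=0.05):
    """Apply a random similarity transformation (rotation, uniform scale,
    translation) to an adversarial mask image of shape (H, W, C).
    All ranges are hypothetical defaults for illustration."""
    h, w = mask.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)    # rotation in degrees
    scale = np.random.uniform(*scale_range)             # uniform scaling
    tx = np.random.uniform(-max_shift, max_shift) * w   # horizontal shift
    ty = np.random.uniform(-max_shift, max_shift) * h   # vertical shift

    # 2x3 similarity matrix about the image center, plus the random shift.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (tx, ty)
    return cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_LINEAR)
```

Averaging attack gradients over many such randomly transformed copies of the mask is a standard way to reduce overfitting to one specific placement, which is the intuition behind the transferability gain described in the abstract.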
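The random meta-optimization strategy for ensembling pre-trained face models is not specified in the abstract; one common meta-learning-style reading is to randomly split the surrogate models into meta-train and meta-test subsets each iteration and combine their gradients so the update generalizes beyond any single surrogate. The sketch below follows that reading; the split rule, the cosine-similarity impersonation loss, and the sign-based update are all assumptions, not the paper's algorithm.

```python
import random
import torch
import torch.nn.functional as F

def random_meta_step(adv, target_feat, models, lr=1.0 / 255):
    """One illustrative ensemble update. `models` is a list of at least two
    surrogate face encoders mapping an image batch (B, C, H, W) to identity
    embeddings; `target_feat` is the normalized embedding (1, D) of the
    identity to impersonate."""
    adv = adv.detach().requires_grad_(True)

    # Randomly partition the surrogates into meta-train / meta-test subsets.
    shuffled = random.sample(models, len(models))
    k = random.randint(1, len(models) - 1)
    meta_train, meta_test = shuffled[:k], shuffled[k:]

    def impersonation_loss(subset):
        # Negative mean cosine similarity to the target embedding:
        # minimizing it pulls the adversarial face toward the target identity.
        return -sum(
            F.cosine_similarity(F.normalize(m(adv), dim=-1), target_feat).mean()
            for m in subset
        ) / len(subset)

    # Gradients from both subsets; combining them favors update directions
    # that also work on models held out of the meta-train set.
    g_train = torch.autograd.grad(impersonation_loss(meta_train), adv)[0]
    g_test = torch.autograd.grad(impersonation_loss(meta_test), adv)[0]

    # Hypothetical sign-based step, clipped to the valid image range.
    return (adv - lr * (g_train + g_test).sign()).clamp(0.0, 1.0).detach()
```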