Paper Title

Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study

Authors

Dinh-Luan Nguyen, Sunpreet S. Arora, Yuhang Wu, Hao Yang

Abstract

Deep learning-based systems have been shown to be vulnerable to adversarial attacks in both digital and physical domains. While feasible, digital attacks have limited applicability in attacking deployed systems, including face recognition systems, where an adversary typically has access to the input and not the transmission channel. In such a setting, physical attacks that directly provide a malicious input through the input channel pose a bigger threat. We investigate the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections. A setup comprising a commercially available web camera and a projector is used to conduct the attack. The adversary uses a transformation-invariant adversarial pattern generation method to generate a digital adversarial pattern using one or more images of the target available to the adversary. The digital adversarial pattern is then projected onto the adversary's face in the physical domain to either impersonate a target (impersonation) or evade recognition (obfuscation). We conduct preliminary experiments using two open-source and one commercial face recognition system on a pool of 50 subjects. Our experimental results demonstrate the vulnerability of face recognition systems to light projection attacks in both white-box and black-box attack settings.
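
To make the transformation-invariant pattern generation concrete, the sketch below is a minimal Python/PyTorch illustration, not the authors' code. It assumes a hypothetical face-embedding network `embed_model`, an adversary image `adversary_img`, and a precomputed target embedding `target_emb` (all placeholder names). The loop averages an impersonation loss over random brightness/offset transformations so that the optimized additive pattern survives projector-camera variation, in the spirit of expectation-over-transformation style methods.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# optimize an additive light-projection pattern so that, under random
# physical transformations, the adversary's face embedding moves toward
# the target's embedding (impersonation).
import torch
import torch.nn.functional as F

def generate_pattern(embed_model, adversary_img, target_emb,
                     steps=200, lr=0.01, n_transforms=8):
    """adversary_img: (3, H, W) in [0, 1]; target_emb: (1, d), L2-normalized."""
    pattern = torch.zeros_like(adversary_img, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for _ in range(n_transforms):
            # Random brightness scale/offset as a simplified stand-in for
            # projector and camera variation in the physical setup.
            scale = 0.8 + 0.4 * torch.rand(1)
            shift = 0.1 * (torch.rand(1) - 0.5)
            x = torch.clamp(scale * (adversary_img + pattern) + shift, 0, 1)
            emb = F.normalize(embed_model(x.unsqueeze(0)), dim=1)
            # Impersonation: maximize cosine similarity to the target.
            loss = loss - F.cosine_similarity(emb, target_emb).mean()
        (loss / n_transforms).backward()
        opt.step()
        with torch.no_grad():
            # Keep the pattern additive and bounded: a projector can only
            # add a limited amount of light, not remove it.
            pattern.clamp_(0.0, 0.3)
    return pattern.detach()
```

For the obfuscation variant described in the abstract, the same loop would instead minimize similarity to the adversary's own enrolled embedding rather than maximize similarity to a target.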
