Paper Title

Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces

Authors

Chaofei Yang, Lei Ding, Yiran Chen, Hai Li

Abstract

Deepfake represents a category of face-swapping attacks that leverage machine learning models such as autoencoders or generative adversarial networks. Although the concept of face swapping is not new, its recent technical advances make fake content (e.g., images, videos) more realistic and imperceptible to humans. Various detection techniques for Deepfake attacks have been explored. These methods, however, are passive measures against Deepfakes, as they are mitigation strategies applied after high-quality fake content has been generated. More importantly, we would like to think ahead of the attackers with robust defenses. This work aims to take an offensive measure to impede the generation of high-quality fake images or videos. Specifically, we propose to use novel transformation-aware adversarially perturbed faces as a defense against GAN-based Deepfake attacks. Different from naive adversarial faces, our proposed approach leverages differentiable random image transformations during generation. We also propose an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants under the black-box setting. We show that training a Deepfake model with adversarial faces can lead to a significant degradation in the quality of synthesized faces. This degradation is twofold. On the one hand, the quality of the synthesized faces is reduced with more visual artifacts, such that the synthesized faces are more obviously fake or less convincing to human observers. On the other hand, the synthesized faces can easily be detected based on various metrics.
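The transformation-aware idea described in the abstract can be illustrated with a rough PGD-style sketch. Everything below is an assumption for illustration only: the surrogate autoencoder `surrogate_model`, the resize-based random transformation, and the MSE objective are hypothetical stand-ins, not the paper's exact models, transformations, or loss.

```python
# Minimal sketch (assumptions noted above): craft a perturbed face whose
# adversarial effect survives random differentiable transformations, so that
# a Deepfake model trained on it produces degraded synthesized faces.
import torch
import torch.nn.functional as F

def random_transform(x):
    """Differentiable random transformation: random resize, then resize back."""
    scale = torch.empty(1).uniform_(0.8, 1.2).item()
    h, w = x.shape[-2:]
    x = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
    return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)

def transformation_aware_attack(face, surrogate_model, eps=8 / 255, alpha=1 / 255,
                                steps=40, n_transforms=4):
    """PGD-style ascent averaged over random transformations.

    `face` is a (1, 3, H, W) float tensor in [0, 1]; `surrogate_model` is a
    hypothetical differentiable Deepfake autoencoder used as a stand-in for
    the target generator.
    """
    delta = torch.zeros_like(face, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_transforms):
            adv = torch.clamp(face + delta, 0.0, 1.0)
            adv_t = random_transform(adv)
            # Maximize the surrogate's reconstruction error on the transformed face.
            recon = surrogate_model(adv_t)
            loss = loss + F.mse_loss(recon, adv_t)
        loss = loss / n_transforms
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the loss
            delta.clamp_(-eps, eps)              # keep the perturbation bounded
            delta.grad.zero_()
    return torch.clamp(face + delta, 0.0, 1.0).detach()
```

Under the black-box setting discussed in the abstract, the ensemble idea could be approximated in this sketch by averaging the loss over several surrogate Deepfake variants instead of a single `surrogate_model`; the paper's actual ensemble construction is not reproduced here.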
