Paper Title

Anti-Forgery: Towards a Stealthy and Robust DeepFake Disruption Attack via Adversarial Perceptual-aware Perturbations

Paper Authors

Run Wang, Ziheng Huang, Zhikai Chen, Li Liu, Jing Chen, Lina Wang

Abstract

DeepFake is becoming a real risk to society and poses potential threats to both individual privacy and political security, because DeepFaked multimedia is realistic and convincing. However, popular DeepFake passive detection is an ex-post forensics countermeasure and fails to block the spread of disinformation in advance. To address this limitation, researchers have studied proactive defense techniques that add adversarial noise to the source data to disrupt DeepFake manipulation. However, existing approaches to proactive DeepFake defense via injected adversarial noise are not robust: they can be easily bypassed by simple image reconstruction, as revealed in a recent study, MagDR. In this paper, we investigate the vulnerability of existing forgery techniques and propose a novel \emph{anti-forgery} technique that helps users protect shared facial images from attackers capable of applying popular forgery techniques. Our proposed method generates perceptual-aware perturbations in an incessant manner, which differs greatly from prior studies that add sparse adversarial noise. Experimental results reveal that our perceptual-aware perturbations are robust to diverse image transformations, and in particular to the competitive evasion technique MagDR, which relies on image reconstruction. Our findings potentially open up a new research direction towards a thorough understanding and investigation of perceptual-aware adversarial attacks for protecting facial images against DeepFakes in a proactive and robust manner. We open-source our tool to foster future research. Code is available at https://github.com/AbstractTeen/AntiForgery/.
