Paper Title
Implicit Identity Leakage: The Stumbling Block to Improving Deepfake Detection Generalization
Paper Authors
Paper Abstract
In this paper, we analyse the generalization ability of binary classifiers for the task of deepfake detection. We find that the stumbling block to their generalization is caused by the unexpected learned identity representation on images. Termed the Implicit Identity Leakage, this phenomenon has been qualitatively and quantitatively verified among various DNNs. Furthermore, based on such understanding, we propose a simple yet effective method named the ID-unaware Deepfake Detection Model to reduce the influence of this phenomenon. Extensive experimental results demonstrate that our method outperforms the state-of-the-art methods in both in-dataset and cross-dataset evaluations. The code is available at https://github.com/megvii-research/CADDM.
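To make the notion of "implicit identity leakage" concrete, the following is a minimal sketch of one way such leakage could be quantified: training a linear probe on the frozen features of a binary real/fake detector to predict face identity. If identities can be predicted well above chance, the detector has implicitly encoded identity information. This is an illustrative assumption, not the paper's exact verification protocol or the CADDM code; `identity_probe_accuracy`, the `backbone`, and the toy dataset below are hypothetical placeholders.

```python
# Sketch: probe a frozen detector's features for identity information.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def identity_probe_accuracy(backbone: nn.Module,
                            loader: DataLoader,
                            feat_dim: int,
                            num_identities: int,
                            epochs: int = 5,
                            lr: float = 1e-3,
                            device: str = "cpu") -> float:
    """Train a linear probe on frozen detector features to predict identity."""
    backbone.eval().to(device)
    probe = nn.Linear(feat_dim, num_identities).to(device)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, ids in loader:
            images, ids = images.to(device), ids.to(device)
            with torch.no_grad():                 # detector weights stay frozen
                feats = backbone(images)
            opt.zero_grad()
            loss = loss_fn(probe(feats), ids)
            loss.backward()
            opt.step()

    # Measure how well identities are recovered from the frozen features.
    correct = total = 0
    with torch.no_grad():
        for images, ids in loader:
            images, ids = images.to(device), ids.to(device)
            preds = probe(backbone(images)).argmax(dim=1)
            correct += (preds == ids).sum().item()
            total += ids.numel()
    return correct / total

if __name__ == "__main__":
    # Toy stand-ins: random "images", random identity labels, and a linear
    # projection acting as the frozen "detector backbone". Replace these with
    # a real deepfake detector and a labelled face dataset in practice.
    feat_dim, num_ids = 128, 10
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim))
    data = TensorDataset(torch.randn(256, 3, 32, 32),
                         torch.randint(0, num_ids, (256,)))
    acc = identity_probe_accuracy(backbone, DataLoader(data, batch_size=32),
                                  feat_dim, num_ids)
    print(f"identity probe accuracy: {acc:.2%}")  # far above chance => leakage
```

In this sketch, probe accuracy well above 1/num_identities would indicate that identity cues are linearly recoverable from the detector's features, which is the kind of evidence the abstract refers to when it says the phenomenon was quantitatively verified.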