Paper Title
Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection
Paper Authors
Paper Abstract
Recent studies in deepfake detection have yielded promising results when the training and testing face forgeries come from the same dataset. However, the problem remains challenging when one tries to generalize the detector to forgeries created by methods unseen in the training dataset. This work explores generalizable deepfake detection from a simple principle: a generalizable representation should be sensitive to diverse types of forgeries. Following this principle, we propose to enrich the "diversity" of forgeries by synthesizing augmented forgeries with a pool of forgery configurations, and to strengthen the "sensitivity" to forgeries by enforcing the model to predict the forgery configurations. To effectively explore the large forgery-augmentation space, we further propose an adversarial training strategy that dynamically synthesizes the forgeries most challenging to the current model. Through extensive experiments, we show that the proposed strategies are surprisingly effective (see Figure 1), and that they achieve performance superior to the current state-of-the-art methods. Code is available at \url{https://github.com/liangchen527/SLADD}.
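The abstract's training loop can be illustrated with a minimal PyTorch sketch: a synthesizer samples a forgery configuration per image, the detector is trained both to classify real vs. fake and to predict the configuration (the self-supervised task), and the synthesizer is updated adversarially to favor configurations the detector finds hard. All names here (ForgerySynthesizer, Detector, apply_forgery, NUM_CONFIGS) and the REINFORCE-style adversarial update are illustrative assumptions, not the authors' released SLADD implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of the adversarial self-supervised loop described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CONFIGS = 12  # assumed size of the discrete forgery-configuration pool


class ForgerySynthesizer(nn.Module):
    """Proposes a forgery configuration (e.g., blend type/region) per image."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, NUM_CONFIGS),
        )

    def forward(self, x):
        return self.backbone(x)  # logits over configurations


class Detector(nn.Module):
    """Binary real/fake head plus an auxiliary configuration-prediction head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(32, 2)            # real vs. fake
        self.cfg_head = nn.Linear(32, NUM_CONFIGS)  # which forgery config

    def forward(self, x):
        f = self.features(x)
        return self.cls_head(f), self.cfg_head(f)


def apply_forgery(real, cfg_ids):
    """Placeholder for the actual synthesis (blending a source face into the
    target according to cfg_ids); here we only perturb the images."""
    return (real + 0.1 * torch.randn_like(real)).clamp(0, 1)


synthesizer, detector = ForgerySynthesizer(), Detector()
opt_s = torch.optim.Adam(synthesizer.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(detector.parameters(), lr=1e-4)

real = torch.rand(8, 3, 64, 64)  # stand-in batch of real faces

for step in range(2):
    # 1) Synthesizer samples a forgery configuration for each image.
    cfg_logits = synthesizer(real)
    cfg_ids = torch.distributions.Categorical(logits=cfg_logits).sample()
    fake = apply_forgery(real, cfg_ids)

    # 2) Detector: classify real/fake and predict the forgery configuration.
    x = torch.cat([real, fake])
    y = torch.cat([torch.zeros(8, dtype=torch.long),
                   torch.ones(8, dtype=torch.long)])
    cls_logits, cfg_pred = detector(x)
    loss_d = (F.cross_entropy(cls_logits, y)
              + F.cross_entropy(cfg_pred[8:], cfg_ids))  # self-supervised task
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 3) Adversarial step: reward configurations the detector finds hard.
    #    Sampling is non-differentiable, so use a REINFORCE-style update.
    with torch.no_grad():
        hard_logits, _ = detector(fake)
        reward = F.cross_entropy(hard_logits,
                                 torch.ones(8, dtype=torch.long),
                                 reduction="none")  # high = hard for detector
    log_prob = torch.distributions.Categorical(
        logits=synthesizer(real)).log_prob(cfg_ids)
    loss_s = -(reward * log_prob).mean()
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```

In this sketch the auxiliary configuration-prediction loss is what enforces "sensitivity" to forgery types, while the adversarially trained synthesizer supplies the "diversity" by steering augmentation toward the currently hardest forgeries.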