Paper Title
Generalized Zero and Few-Shot Transfer for Facial Forgery Detection
Paper Authors
Paper Abstract
We propose Deep Distribution Transfer (DDT), a new transfer learning approach to address the problem of zero- and few-shot transfer in the context of facial forgery detection. We examine how well a model (pre-)trained with one forgery creation method generalizes to a previously unseen manipulation technique or a different dataset. To facilitate this transfer, we introduce a new mixture model-based loss formulation that learns a multi-modal distribution, with modes corresponding to the class categories of the underlying data of the source forgery method. Our core idea is to first pre-train an encoder neural network that maps each mode of this distribution to the respective class label, i.e., real or fake images in the source domain, by minimizing the Wasserstein distance between them. To transfer this model to a new domain, we associate a few target samples with one of the previously trained modes. In addition, we propose a spatial mixup augmentation strategy that further helps generalization across domains. We find this learning strategy to be surprisingly effective at domain transfer compared to traditional classification or even state-of-the-art domain adaptation/few-shot learning methods. For instance, compared to the best baseline, our method improves the classification accuracy by 4.88% for the zero-shot case and by 8.38% for the few-shot case when transferring from the FaceForensics++ dataset to the Dessa dataset.
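The abstract does not specify the exact form of the spatial mixup augmentation, so the following is only an illustrative sketch: a generic region-wise blend of two face images (in the spirit of CutMix-style spatial mixing), where a Beta-sampled ratio determines the size of the pasted region and the effective label weight. The function name `spatial_mixup` and all parameters are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def spatial_mixup(x_a, x_b, alpha=1.0, rng=None):
    """Region-wise blend of two images (illustrative sketch, not the paper's method).

    Samples lam ~ Beta(alpha, alpha) and pastes a rectangle of x_b, covering
    roughly a (1 - lam) fraction of the area, onto a copy of x_a. Returns the
    mixed image and the adjusted label weight for x_a (its surviving area
    fraction), which a mixed classification loss would use to weight the labels.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    h, w = x_a.shape[:2]
    # Rectangle side lengths chosen so its area is ~(1 - lam) of the image.
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    # Random center, clipped to image bounds.
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    y0, y1 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x0, x1 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = x_a.copy()
    mixed[y0:y1, x0:x1] = x_b[y0:y1, x0:x1]
    # Re-derive the label weight from the actual pasted area (clipping may
    # have shrunk the rectangle).
    lam_adj = 1.0 - (y1 - y0) * (x1 - x0) / (h * w)
    return mixed, lam_adj
```

During training, such an augmentation would mix real and fake samples (within or across domains) and weight the two class labels by `lam_adj` and `1 - lam_adj`, encouraging the encoder to rely on local manipulation artifacts rather than dataset-specific global statistics.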