Paper Title
Proactive Image Manipulation Detection
Paper Authors
Paper Abstract
Image manipulation detection algorithms are often trained to discriminate between real images and images manipulated by particular Generative Models (GMs), yet they generalize poorly to images manipulated by GMs unseen during training. Conventional detection algorithms receive an input image passively. By contrast, we propose a proactive scheme for image manipulation detection. Our key enabling technique is to estimate a set of templates which, when added to a real image, lead to more accurate manipulation detection. That is, a template-protected real image and its manipulated version are discriminated more easily than the original real image and its manipulated counterpart. These templates are estimated using constraints derived from the desired template properties. For image manipulation detection, our proposed approach outperforms prior work by 16% average precision for CycleGAN and 32% for GauGAN. Our approach generalizes to a variety of GMs, improving over prior work by 10% average precision averaged across 12 GMs. Our code is available at https://www.github.com/vishal3477/proactive_IMD.
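As a rough illustration of the proactive scheme described in the abstract, the sketch below adds a learnable template to real images and trains a small binary classifier to separate template-protected real images from their GM-manipulated versions. This is a minimal, hypothetical sketch rather than the authors' released code: the network sizes, the `gm` callable, the loss, and all hyperparameters are placeholder assumptions.

```python
# Illustrative sketch only (not the authors' implementation): a learnable
# additive template is optimized jointly with a small classifier so that
# template-protected real images and their manipulated versions become
# easier to tell apart. `gm` stands for any frozen image-to-image GM.
import torch
import torch.nn as nn


class ProactiveDetector(nn.Module):
    def __init__(self, image_size=128):
        super().__init__()
        # Learnable additive template, initialized small so the protected
        # image stays visually close to the original real image.
        self.template = nn.Parameter(0.01 * torch.randn(3, image_size, image_size))
        # Tiny binary classifier: protected-real (label 0) vs. manipulated (label 1).
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def protect(self, real_images):
        # Add the template to a batch of real images to obtain protected images.
        return torch.clamp(real_images + self.template, 0.0, 1.0)

    def forward(self, images):
        return self.classifier(images).squeeze(1)


def training_step(model, gm, real_images, optimizer):
    """One illustrative step; `gm` is a placeholder for a frozen generative model."""
    protected = model.protect(real_images)
    with torch.no_grad():
        manipulated = gm(protected)  # GM output of the protected image
    logits = model(torch.cat([protected, manipulated], dim=0))
    labels = torch.cat([torch.zeros(len(protected)), torch.ones(len(manipulated))])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this toy setup the template receives gradients only through the protected branch; the paper's actual formulation estimates the templates with additional constraints on their desired properties, which are omitted here for brevity.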