Paper Title

RepMix: Representation Mixing for Robust Attribution of Synthesized Images

Authors

Tu Bui, Ning Yu, John Collomosse

Abstract

Rapid advances in Generative Adversarial Networks (GANs) raise new challenges for image attribution; detecting whether an image is synthetic and, if so, determining which GAN architecture created it. Uniquely, we present a solution to this task capable of 1) matching images invariant to their semantic content; 2) robust to benign transformations (changes in quality, resolution, shape, etc.) commonly encountered as images are re-shared online. In order to formalize our research, a challenging benchmark, Attribution88, is collected for robust and practical image attribution. We then propose RepMix, our GAN fingerprinting technique based on representation mixing and a novel loss. We validate its capability of tracing the provenance of GAN-generated images invariant to the semantic content of the image and also robust to perturbations. We show our approach improves significantly from existing GAN fingerprinting works on both semantic generalization and robustness. Data and code are available at https://github.com/TuBui/image_attribution.
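To make "representation mixing" concrete, below is a minimal sketch of feature-level mixup applied to GAN source attribution, assuming a PyTorch setup. The class and function names, the Beta-distributed mixing ratio, and the convex combination of cross-entropy losses are illustrative assumptions rather than the paper's exact implementation; see the linked repository for the authors' code.

    # Minimal sketch: mixup applied to image representations (not pixels)
    # for GAN source attribution. Names and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RepMixSketch(nn.Module):
        def __init__(self, encoder: nn.Module, feat_dim: int, num_sources: int):
            super().__init__()
            self.encoder = encoder                        # any CNN backbone producing feat_dim features
            self.head = nn.Linear(feat_dim, num_sources)  # classifier over GAN architectures / real

        def forward(self, x1, x2, alpha: float = 1.0):
            # Sample a mixing ratio and blend the two embeddings, not the raw images.
            lam = torch.distributions.Beta(alpha, alpha).sample().to(x1.device)
            z = lam * self.encoder(x1) + (1.0 - lam) * self.encoder(x2)
            return self.head(z), lam

    def mixed_loss(logits, y1, y2, lam):
        # Weight the attribution loss for each source label by the mixing ratio.
        return lam * F.cross_entropy(logits, y1) + (1.0 - lam) * F.cross_entropy(logits, y2)

The intent of mixing at the representation level is to encourage the classifier to rely on architecture-specific fingerprint cues shared across the blended pair rather than on the semantic content of either image.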
