Paper Title
Learning Transformation-Aware Embeddings for Image Forensics
Paper Authors
Paper Abstract
A dramatic rise in the flow of manipulated image content on the Internet has prompted an aggressive response from the media forensics research community. New efforts increasingly draw on techniques from computer vision and machine learning to detect and profile the space of image manipulations. This paper addresses Image Provenance Analysis, which aims to discover relationships among different manipulated image versions that share content. One of the main sub-problems of provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates. Existing large networks that generate image descriptors for tasks such as object recognition may not encode the subtle differences between these image covariates. This paper introduces a novel deep-learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations. Our approach learns transformation-aware descriptors using weak supervision via composited transformations and a rank-based quadruplet loss. To establish the efficacy of the proposed approach, we compare it against state-of-the-art handcrafted and deep-learning-based descriptors as well as image matching approaches. Further experimentation validates the proposed approach in the context of image provenance analysis.
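To make the "rank-based quadruplet loss" concrete, the following is a minimal sketch of one plausible reading of that idea; the function name, margin value, and the specific ordering constraints are assumptions for illustration, not the paper's actual formulation. The intuition: for an anchor image `a`, a lightly transformed version `p1`, a more heavily transformed version `p2`, and an unrelated image `n`, the learned embedding distances should satisfy d(a, p1) < d(a, p2) < d(a, n).

```python
# Hypothetical sketch of a rank-based quadruplet loss on fixed embeddings.
# Embeddings are plain lists of floats; a real system would use a deep
# network's output vectors and backpropagate through this loss.

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def rank_quadruplet_loss(a, p1, p2, n, margin=0.2):
    """Hinge penalty for each violated ordering constraint:
    the lightly edited image p1 should be closer to the anchor a
    than the heavily edited p2, which in turn should be closer
    than the unrelated negative n."""
    d_p1 = euclidean(a, p1)
    d_p2 = euclidean(a, p2)
    d_n = euclidean(a, n)
    return (max(0.0, d_p1 - d_p2 + margin)   # enforce d(a,p1) < d(a,p2)
            + max(0.0, d_p2 - d_n + margin))  # enforce d(a,p2) < d(a,n)
```

With well-ordered embeddings (e.g. `a=[0,0]`, `p1=[0.1,0]`, `p2=[1,0]`, `n=[3,0]`) the loss is zero; swapping `p1` and `p2` violates the first constraint and yields a positive penalty, which is the gradient signal that would push the network toward transformation-aware descriptors.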