Paper Title
Unsupervised Contrastive Photo-to-Caricature Translation based on Auto-distortion
Paper Authors
Abstract
Photo-to-caricature translation aims to synthesize a caricature as a rendered image that exaggerates facial features through sketching, pencil strokes, or other artistic drawing styles. Style rendering and geometric deformation are the most important aspects of the photo-to-caricature translation task. To take both into consideration, we propose an unsupervised contrastive photo-to-caricature translation architecture. Considering the intuitive artifacts in existing methods, we propose a contrastive style loss for style rendering that enforces similarity between the style of the rendered photo and the caricature, while simultaneously enhancing its discrepancy from the photos. To obtain an exaggerated deformation in an unpaired/unsupervised fashion, we propose a Distortion Prediction Module (DPM) that predicts a set of displacement vectors for each input image while fixing certain control points, followed by thin-plate-spline interpolation for warping. The model is trained on unpaired photos and caricatures, yet offers bidirectional synthesis by taking either a photo or a caricature as input. Extensive experiments demonstrate that the proposed model generates hand-drawn-like caricatures more effectively than existing competitors.
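The warping step described above can be illustrated with a minimal numpy/scipy sketch. Here the control-point coordinates and displacement vectors are hypothetical stand-ins: in the paper, the displacements for the moving points would be predicted by the DPM from the input image, while the fixed points (here, the image corners) receive zero displacement. Thin-plate-spline interpolation then extends the sparse displacements to a dense warp field over the image grid.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Fixed control points (zero displacement), e.g. image corners in
# normalized [0, 1] coordinates (hypothetical choice).
fixed_pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

# Moving control points, e.g. facial landmarks, and the displacement
# vectors a DPM-like network would predict for them (stand-in values).
moving_pts = np.array([[0.3, 0.4], [0.7, 0.4], [0.5, 0.7]])
pred_disp = np.array([[0.05, -0.02], [-0.05, -0.02], [0.0, 0.08]])

# Stack all control points with their displacements (zeros for fixed ones).
src_pts = np.vstack([fixed_pts, moving_pts])
disp = np.vstack([np.zeros_like(fixed_pts), pred_disp])

# Thin-plate-spline interpolation of the 2D displacement field.
tps = RBFInterpolator(src_pts, disp, kernel="thin_plate_spline")

# Evaluate on a dense grid to get the warp field used to resample the image.
h = w = 8
grid = np.stack(
    np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h)), axis=-1
).reshape(-1, 2)
warp = grid + tps(grid)  # warped coordinates, shape (h*w, 2)
```

With zero smoothing (the default), the TPS interpolant passes exactly through the control points, so the fixed points stay in place and the moving points move by exactly their predicted displacements; everything in between deforms smoothly.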