Paper Title
Semantics-Preserving Sketch Embedding for Face Generation
Paper Authors
Paper Abstract
With recent advances in image-to-image translation tasks, remarkable progress has been witnessed in generating face images from sketches. However, existing methods frequently fail to generate images whose details are semantically and geometrically consistent with the input sketch, especially when various decoration strokes are drawn. To address this issue, we introduce a novel W-W+ encoder architecture that takes advantage of the high expressive power of the W+ space and the semantic controllability of the W space. We introduce an explicit intermediate representation for sketch semantic embedding. With a semantic feature matching loss for effective semantic supervision, our sketch embedding precisely conveys the semantics of the input sketches to the synthesized images. Moreover, a novel sketch semantic interpretation approach is designed to automatically extract semantics from vectorized sketches. We conduct extensive experiments on both synthesized and hand-drawn sketches, and the results demonstrate the superiority of our method over existing approaches in both semantics preservation and generalization ability.
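The abstract does not specify the form of the semantic feature matching loss. As a rough, hypothetical illustration only (the paper's actual loss may differ), a generic feature matching loss of the kind used in image synthesis compares intermediate feature maps of the real and synthesized images layer by layer, e.g. with a mean L1 distance; here is a minimal NumPy sketch under that assumption:

```python
import numpy as np

def feature_matching_loss(feats_real, feats_fake):
    """Mean L1 distance between corresponding feature maps.

    feats_real, feats_fake: lists of np.ndarray, one entry per
    network layer. This is a generic feature-matching sketch, not
    the paper's exact semantic feature matching loss.
    """
    assert len(feats_real) == len(feats_fake)
    total = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        # average absolute difference over all elements of this layer
        total += np.mean(np.abs(fr - ff))
    # average over layers
    return total / len(feats_real)

# Toy example: two layers, one fully mismatched, one identical.
fr = [np.ones((4, 4)), np.zeros((2, 2))]
ff = [np.zeros((4, 4)), np.zeros((2, 2))]
print(feature_matching_loss(fr, ff))  # → 0.5
```

In practice such a loss is summed over several layers of a fixed feature extractor or discriminator and added to the adversarial objective with a weighting coefficient.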