Paper Title
Deep Detail Enhancement for Any Garment
Paper Authors
Paper Abstract
Creating fine garment details requires significant effort and substantial computational resources. In contrast, a coarse shape is often easy to acquire (e.g., via low-resolution physically-based simulation, linear blend skinning driven by skeletal motion, or portable scanners). In this paper, we show how to enhance a coarse garment geometry, in a data-driven manner, with rich yet plausible details. Given a parameterization of the garment, we formulate the task as a style transfer problem over the space of associated normal maps. To facilitate generalization across garment types and character motions, we introduce a patch-based formulation that hallucinates geometric details (i.e., wrinkle density and shape) at high resolution by matching a Gram-matrix-based style loss. We extensively evaluate our method on a variety of production scenarios and show that it is simple, lightweight, and efficient, and that it generalizes across underlying garment types, sewing patterns, and body motions.
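The core ingredient named in the abstract, a Gram-matrix-based style loss compared between patches, can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the feature shapes and function names are hypothetical stand-ins for features extracted from coarse and detailed normal-map patches.

    import numpy as np

    def gram_matrix(features: np.ndarray) -> np.ndarray:
        # features: (C, H, W) feature map of a normal-map patch.
        # Returns the (C, C) matrix of channel inner products,
        # normalized by the number of spatial locations.
        c, h, w = features.shape
        f = features.reshape(c, h * w)
        return f @ f.T / (h * w)

    def style_loss(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
        # Squared Frobenius distance between the two Gram matrices.
        # Inputs are hypothetical (C, H, W) features of a coarse patch
        # and a detailed (ground-truth) patch.
        diff = gram_matrix(source_feats) - gram_matrix(target_feats)
        return float(np.sum(diff ** 2))

    # Usage on random stand-in features: 64 channels over 32x32 patches.
    rng = np.random.default_rng(0)
    print(style_loss(rng.standard_normal((64, 32, 32)),
                     rng.standard_normal((64, 32, 32))))

Because the Gram matrix discards spatial layout and keeps only channel correlations, matching it constrains the statistics of the wrinkles (density, scale) rather than their exact positions, which is what allows a patch-based loss of this kind to transfer detail across different garments and motions.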