Paper Title
3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer
Paper Authors
Paper Abstract
Transferring the style from one image onto another is a popular and widely studied task in computer vision. Yet, style transfer in the 3D setting remains a largely unexplored problem. To our knowledge, we propose the first learning-based approach for style transfer between 3D objects based on disentangled content and style representations. The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes, combining the content and style of a source and target 3D model to generate a novel shape that resembles the target in style while retaining the source content. Furthermore, we extend our technique to implicitly learn the multimodal style distribution of the chosen domains. By sampling style codes from the learned distributions, we increase the variety of styles that our model can confer on an input shape. Experimental results validate the effectiveness of the proposed 3D style transfer method on a number of benchmarks. The implementation of our framework will be released upon acceptance.
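As a rough illustration of the disentanglement described in the abstract, the sketch below encodes each shape into a content code and a style code, then recombines the source content with the target style (or with a style code sampled from a simple prior, mirroring the multimodal variant). This is a minimal sketch, not the authors' 3DSNet architecture: the PointNet-style encoder, the fully connected decoder, the latent dimensions, and the Gaussian style prior are all illustrative assumptions.

```python
# Minimal sketch of content/style disentanglement for 3D style transfer on point clouds.
# NOT the official 3DSNet implementation; module design, dimensions, and the
# Gaussian style prior are illustrative assumptions.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Per-point MLP with max-pooling (PointNet-style), split into content and style codes."""
    def __init__(self, content_dim=512, style_dim=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
            nn.Conv1d(256, content_dim + style_dim, 1),
        )
        self.content_dim = content_dim

    def forward(self, points):                      # points: (B, N, 3)
        feats = self.mlp(points.transpose(1, 2))    # (B, content_dim + style_dim, N)
        global_feat = feats.max(dim=2).values       # symmetric pooling over points
        content = global_feat[:, :self.content_dim]
        style = global_feat[:, self.content_dim:]
        return content, style

class PointDecoder(nn.Module):
    """Decodes a (content, style) pair into a fixed-size point cloud."""
    def __init__(self, content_dim=512, style_dim=8, num_points=2048):
        super().__init__()
        self.num_points = num_points
        self.mlp = nn.Sequential(
            nn.Linear(content_dim + style_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, num_points * 3),
        )

    def forward(self, content, style):
        out = self.mlp(torch.cat([content, style], dim=1))
        return out.view(-1, self.num_points, 3)

# Style transfer: keep the source content code, swap in the target style code.
enc, dec = PointEncoder(), PointDecoder()
source = torch.randn(1, 2048, 3)    # placeholder source shape
target = torch.randn(1, 2048, 3)    # placeholder target shape
c_src, _ = enc(source)
_, s_tgt = enc(target)
stylized = dec(c_src, s_tgt)        # source content rendered in the target's style

# Multimodal variant: sample a style code from a prior (here: standard normal)
# instead of extracting it from one specific target shape.
s_sampled = torch.randn(1, 8)
stylized_random = dec(c_src, s_sampled)
print(stylized.shape, stylized_random.shape)   # (1, 2048, 3) each
```

A complete system would also require training objectives (e.g., reconstruction and cross-domain consistency losses) and, for mesh output, a mesh-aware decoder; both are omitted from this sketch.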