Paper Title
WarpingGAN: Warping Multiple Uniform Priors for Adversarial 3D Point Cloud Generation
Paper Authors
Paper Abstract
We propose WarpingGAN, an effective and efficient 3D point cloud generation network. Unlike existing methods that generate point clouds by directly learning the mapping functions between latent codes and 3D shapes, WarpingGAN learns a unified local-warping function to warp multiple identical pre-defined priors (i.e., sets of points uniformly distributed on regular 3D grids) into 3D shapes, driven by local structure-aware semantics. In addition, we ingeniously utilize the principle of the discriminator and tailor a stitching loss to eliminate the gaps between different partitions of a generated shape corresponding to different priors, thereby boosting quality. Owing to this novel generating mechanism, WarpingGAN, a single lightweight network after one-time training, is capable of efficiently generating uniformly distributed 3D point clouds at various resolutions. Extensive experimental results demonstrate the superiority of our WarpingGAN over state-of-the-art methods in terms of quantitative metrics, visual quality, and efficiency. The source code is publicly available at https://github.com/yztang4/WarpingGAN.git.
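To make the generating mechanism described in the abstract concrete, below is a minimal, hypothetical sketch of warping multiple identical uniform grid priors into partitions of one point cloud with a shared, code-conditioned warping network. The module names, layer sizes, and the way partition codes are derived from a latent vector are illustrative assumptions, not the authors' implementation; consult the released source code for the actual architecture and losses.

```python
# Hypothetical sketch: warp K identical uniform 3D grid priors into K shape
# partitions with one shared local-warping MLP, then concatenate them.
# Names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


def uniform_grid_prior(points_per_axis: int) -> torch.Tensor:
    """Points uniformly distributed on a regular 3D grid in [-1, 1]^3."""
    axis = torch.linspace(-1.0, 1.0, points_per_axis)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    return grid.reshape(-1, 3)  # (points_per_axis**3, 3)


class LocalWarping(nn.Module):
    """Shared MLP that displaces prior points, conditioned on a partition code."""

    def __init__(self, code_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, prior: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # prior: (N, 3), code: (code_dim,) -> warped partition: (N, 3)
        cond = code.unsqueeze(0).expand(prior.shape[0], -1)
        return prior + self.mlp(torch.cat([prior, cond], dim=-1))


if __name__ == "__main__":
    K, code_dim = 4, 128
    prior = uniform_grid_prior(8)                        # 512 points per partition
    latent = torch.randn(code_dim)                       # shape-level latent code
    partition_codes = torch.randn(K, code_dim) + latent  # stand-in for learned codes
    warp = LocalWarping(code_dim)
    shape = torch.cat([warp(prior, c) for c in partition_codes], dim=0)
    print(shape.shape)  # torch.Size([2048, 3])
```

Because the priors are fixed regular grids and the warping network is shared, the same trained generator can, in principle, be evaluated with denser or sparser grids to produce point clouds at different resolutions, which is the efficiency property the abstract highlights.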