Title

CU-Net: Real-Time High-Fidelity Color Upsampling for Point Clouds

Authors

Wang, Lingdong, Hajiesmaili, Mohammad, Chakareski, Jacob, Sitaraman, Ramesh K.

Abstract

Point cloud upsampling is essential for high-quality augmented reality, virtual reality, and telepresence applications, due to the capture, processing, and communication limitations of existing technologies. Although geometry upsampling to densify a point cloud's coordinates has been well studied, the upsampling of the color attributes has been largely overlooked. In this paper, we propose CU-Net, the first deep-learning point cloud color upsampling model that enables low latency and high visual fidelity operation. CU-Net achieves linear time and space complexity by leveraging a feature extractor based on sparse convolution and a color prediction module based on neural implicit function. Therefore, CU-Net is theoretically guaranteed to be more efficient than most existing methods with quadratic complexity. Experimental results demonstrate that CU-Net can colorize a photo-realistic point cloud with nearly a million points in real time, while having notably better visual performance than baselines. Moreover, CU-Net can adapt to arbitrary upsampling ratios and unseen objects without retraining. Our source code is available at https://github.com/UMass-LIDS/cunet.
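To make the abstract's two-stage pipeline concrete, the following toy sketch illustrates the general idea of implicit-function color prediction: each dense (colorless) point queries features attached to the sparse (colored) points and maps the interpolated feature to RGB through a small network. This is a minimal NumPy illustration under assumed shapes and random stand-in weights; the function names, feature dimensions, and MLP here are hypothetical and do not reproduce CU-Net's actual sparse-convolution architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in weights for a trained color-prediction MLP.
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)

def color_mlp(feat):
    """Tiny MLP mapping an 8-D point feature to an RGB triple in [0, 1]."""
    h = np.maximum(feat @ W1 + b1, 0.0)          # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> RGB

def upsample_colors(sparse_xyz, sparse_feat, dense_xyz, k=3):
    """Predict a color for every dense point from the features of its
    k nearest sparse neighbors (inverse-distance interpolation)."""
    colors = []
    for q in dense_xyz:
        d = np.linalg.norm(sparse_xyz - q, axis=1)
        idx = np.argsort(d)[:k]                  # k nearest sparse points
        w = 1.0 / (d[idx] + 1e-8)
        w /= w.sum()                             # normalized weights
        feat = w @ sparse_feat[idx]              # interpolated feature
        colors.append(color_mlp(feat))
    return np.stack(colors)

sparse_xyz = rng.uniform(size=(64, 3))   # sparse, colored input points
sparse_feat = rng.normal(size=(64, 8))   # features, e.g. from a sparse conv
dense_xyz = rng.uniform(size=(256, 3))   # dense geometry to colorize
rgb = upsample_colors(sparse_xyz, sparse_feat, dense_xyz)
print(rgb.shape)  # (256, 3)
```

Because each dense point touches only a fixed number of neighbors and a fixed-size MLP, per-point work is constant, which is the intuition behind the linear time and space complexity claimed for CU-Net (a real implementation would use a spatial index or sparse-tensor library rather than the brute-force neighbor search shown here).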
