Paper Title
R2P: A Deep Learning Model from mmWave Radar to Point Cloud
Paper Authors
Paper Abstract
Recent research has shown the effectiveness of mmWave radar sensing for object detection in low-visibility environments, which makes it an ideal technique for autonomous navigation systems. In this paper, we introduce Radar to Point Cloud (R2P), a deep learning model that generates a smooth, dense, and highly accurate point cloud representation of a 3D object, with fine geometry details, from the rough and sparse point clouds, containing incorrect points, that are obtained from mmWave radar. These input point clouds are converted from the 2D depth images generated from raw mmWave radar sensor data, and are characterized by inconsistency as well as orientation and shape errors. R2P utilizes an architecture of two sequential deep learning encoder-decoder blocks to extract the essential features of those radar-based input point clouds of an object observed from multiple viewpoints, and to ensure both the internal consistency of a generated output point cloud and its accurate, detailed reconstruction of the original object's shape. We implement R2P to replace Stage 2 of our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system. Our experiments demonstrate the significant performance improvement of R2P over popular existing methods such as PointNet, PCN, and the original 3DRIMR design.
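As a rough illustration of the high-level structure the abstract describes, below is a minimal PyTorch sketch of two sequential encoder-decoder blocks that map a sparse radar-derived point cloud to a denser reconstruction. The PointNet-style shared-MLP encoder with max-pooling, all layer widths, and the output point counts are illustrative assumptions for the sketch, not the paper's published configuration.

```python
# Minimal sketch of R2P's high-level structure per the abstract: two
# sequential encoder-decoder blocks turning a rough, sparse input cloud
# into a smoother, denser one. Layer sizes and the PointNet-style
# encoder are assumptions; the paper's actual architecture may differ.
import torch
import torch.nn as nn


class EncoderDecoderBlock(nn.Module):
    def __init__(self, in_dim=3, feat_dim=1024, num_out_points=2048):
        super().__init__()
        self.num_out_points = num_out_points
        # Shared per-point MLP (PointNet-style); a symmetric max-pool
        # afterwards yields a permutation-invariant global feature.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_dim, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
            nn.Conv1d(256, feat_dim, 1),
        )
        # MLP decoder regressing the coordinates of a denser cloud.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, num_out_points * 3),
        )

    def forward(self, pts):                       # pts: (B, N, 3)
        feat = self.encoder(pts.transpose(1, 2))  # (B, feat_dim, N)
        global_feat = feat.max(dim=2).values      # (B, feat_dim)
        out = self.decoder(global_feat)           # (B, num_out_points * 3)
        return out.view(-1, self.num_out_points, 3)


class R2PSketch(nn.Module):
    """Two sequential encoder-decoder blocks: a coarse block followed
    by a refinement block, as the abstract outlines at a high level."""
    def __init__(self):
        super().__init__()
        self.block1 = EncoderDecoderBlock(num_out_points=1024)  # coarse
        self.block2 = EncoderDecoderBlock(num_out_points=4096)  # dense

    def forward(self, radar_pts):  # merged multi-view radar cloud, (B, N, 3)
        coarse = self.block1(radar_pts)
        dense = self.block2(coarse)
        return coarse, dense


# Usage: sparse radar cloud in, coarse and dense reconstructions out.
model = R2PSketch()
x = torch.randn(2, 512, 3)        # batch of 2 sparse radar clouds
coarse, dense = model(x)
print(coarse.shape, dense.shape)  # (2, 1024, 3), (2, 4096, 3)
```

In practice, point cloud generators of this kind are typically trained with a permutation-invariant reconstruction loss such as Chamfer distance between generated and ground-truth clouds, as in PCN; the paper's actual loss and training details are given in the full text.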