Paper Title
CU-Net: LiDAR Depth-Only Completion With Coupled U-Net
Paper Authors
Paper Abstract
LiDAR depth-only completion is the challenging task of estimating a dense depth map from only the sparse measurement points obtained by LiDAR. Although depth-only methods have been widely developed, a significant performance gap remains between them and RGB-guided methods, which exploit extra color images. We find that existing depth-only methods obtain satisfactory results in areas where the measurement points are largely accurate and evenly distributed (denoted as normal areas). Their performance is limited, however, in areas where foreground and background points overlap due to occlusion (denoted as overlap areas) and in areas with no nearby measurement points (denoted as blank areas), since the methods have no reliable input information there. Building upon these observations, we propose an effective Coupled U-Net (CU-Net) architecture for depth-only completion. Instead of directly using a single large regression network, we employ a local U-Net to estimate accurate values in the normal areas and to provide the global U-Net with reliable initial values in the overlap and blank areas. The depth maps predicted by the two coupled U-Nets are fused by learned confidence maps to obtain the final result. In addition, we propose a confidence-based outlier removal module, which removes outliers using simple judgment conditions. Our proposed method boosts the final results with fewer parameters and achieves state-of-the-art results on the KITTI benchmark. Moreover, it generalizes well across various depth densities and under varying lighting and weather conditions.
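The abstract's fusion step can be sketched as a per-pixel confidence-weighted blend of the two predicted depth maps. The sketch below is a minimal NumPy illustration, assuming softmax-normalized confidences; in CU-Net itself the confidence maps are learned end-to-end by the network, and the exact normalization is an assumption here, not taken from the paper.

```python
import numpy as np

def fuse_depth(d_local, d_global, c_local, c_global):
    """Blend two depth maps using per-pixel confidence scores.

    d_local, d_global: depth predictions from the two U-Nets (same shape).
    c_local, c_global: unnormalized confidence maps (same shape).
    The softmax normalization is a hypothetical choice for illustration.
    """
    # Per-pixel softmax over the two confidence channels
    e_local = np.exp(c_local)
    e_global = np.exp(c_global)
    w_local = e_local / (e_local + e_global)
    w_global = 1.0 - w_local
    # Confidence-weighted sum gives the fused depth map
    return w_local * d_local + w_global * d_global
```

With equal confidences the fused map is simply the average of the two predictions; as one confidence grows, the output smoothly approaches that branch's depth value.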