Title
Self-Supervised Monocular Scene Flow Estimation
Authors
Abstract
Scene flow estimation has been receiving increasing attention for 3D environment perception. Monocular scene flow estimation -- obtaining 3D structure and 3D motion from two temporally consecutive images -- is a highly ill-posed problem, and practical solutions are lacking to date. We propose a novel monocular scene flow method that yields competitive accuracy and real-time performance. By taking an inverse problem view, we design a single convolutional neural network (CNN) that successfully estimates depth and 3D motion simultaneously from a classical optical flow cost volume. We adopt self-supervised learning with 3D loss functions and occlusion reasoning to leverage unlabeled data. We validate our design choices, including the proxy loss and augmentation setup. Our model achieves state-of-the-art accuracy among unsupervised/self-supervised learning approaches to monocular scene flow, and yields competitive results for the optical flow and monocular depth estimation sub-tasks. Semi-supervised fine-tuning further improves the accuracy and yields promising results in real time.
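The abstract's key ingredient is a classical optical flow cost volume, from which the network decodes both depth and 3D motion. To make the concept concrete, here is a minimal NumPy sketch of a correlation-based cost volume between two feature maps, as commonly used in optical flow networks; the function name, feature shapes, and displacement range are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def correlation_cost_volume(f1, f2, max_disp=4):
    """Correlation cost volume between feature maps f1, f2 of shape (C, H, W).

    For each pixel in f1, correlates its feature vector with f2's features
    over all displacements in [-max_disp, max_disp]^2, giving one matching
    cost per candidate displacement: output shape ((2*max_disp+1)^2, H, W).
    """
    C, H, W = f1.shape
    d = max_disp
    # Zero-pad f2 spatially so shifted windows stay in bounds.
    f2p = np.pad(f2, ((0, 0), (d, d), (d, d)), mode="constant")
    vol = np.empty(((2 * d + 1) ** 2, H, W), dtype=f1.dtype)
    idx = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            shifted = f2p[:, d + dy:d + dy + H, d + dx:d + dx + W]
            # Mean over channels of the per-pixel dot product.
            vol[idx] = (f1 * shifted).mean(axis=0)
            idx += 1
    return vol
```

In flow-only networks, a decoder regresses a 2D displacement from this volume; the paper's inverse-problem view instead attaches a decoder that regresses depth and 3D motion jointly from the same matching costs.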