Paper Title

FlowStep3D: Model Unrolling for Self-Supervised Scene Flow Estimation

Paper Authors

Kittenplon, Yair; Eldar, Yonina C.; Raviv, Dan

Paper Abstract

Estimating the 3D motion of points in a scene, known as scene flow, is a core problem in computer vision. Traditional learning-based methods designed to learn end-to-end 3D flow often suffer from poor generalization. Here we present a recurrent architecture that learns a single step of an unrolled iterative alignment procedure for refining scene flow predictions. Inspired by classical algorithms, we demonstrate iterative convergence toward the solution using strong regularization. The proposed method can handle sizeable temporal deformations and suggests a slimmer architecture than competitive all-to-all correlation approaches. Trained on FlyingThings3D synthetic data only, our network successfully generalizes to real scans, outperforming all existing methods by a large margin on the KITTI self-supervised benchmark.
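The abstract describes the core idea of model unrolling: a single learned refinement step is applied recurrently, with shared weights across iterations, so the scene flow estimate is progressively refined toward the aligned solution. Below is a minimal sketch of that loop structure under stated assumptions; the names (FlowRefinementStep, unrolled_flow, num_iters) and the toy MLP update are illustrative, not the authors' implementation, which relies on point features, local correlations, and strong regularization.

```python
# Minimal sketch of an unrolled iterative refinement loop for scene flow.
# All module and function names here are hypothetical; only the shared-weight
# loop structure reflects the idea described in the abstract.
import torch
import torch.nn as nn


class FlowRefinementStep(nn.Module):
    """Toy stand-in for the learned single-step flow update (hypothetical)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # A toy MLP over concatenated source points, warped points, and the
        # current flow. The real step would correlate warped source points
        # with the target cloud's local features.
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + 3, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 3),
        )

    def forward(self, p1: torch.Tensor, p2: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # p2 is unused in this toy update; in the real model it drives the
        # correlation between the warped source cloud (p1 + flow) and p2.
        x = torch.cat([p1, p1 + flow, flow], dim=-1)
        return self.mlp(x)  # residual update to the current flow


def unrolled_flow(p1, p2, step: FlowRefinementStep, num_iters: int = 4):
    """Apply the same learned step num_iters times, starting from zero flow."""
    flow = torch.zeros_like(p1)
    flows = []
    for _ in range(num_iters):
        flow = flow + step(p1, p2, flow)  # shared weights across iterations
        flows.append(flow)
    return flows  # intermediate estimates can be supervised or regularized


# Usage: two point clouds of shape (N, 3).
p1, p2 = torch.rand(1024, 3), torch.rand(1024, 3)
flows = unrolled_flow(p1, p2, FlowRefinementStep())
print(flows[-1].shape)  # torch.Size([1024, 3])
```

Because only one refinement step is learned and reused, the recurrent formulation keeps the parameter count small compared to all-to-all correlation architectures, which matches the "slimmer architecture" claim in the abstract.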
