Paper Title
Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression
Paper Authors
Paper Abstract
Visual-inertial localization is a key problem in computer vision and robotics applications such as virtual reality, self-driving cars, and aerial vehicles. The goal is to estimate an accurate pose of an object when either the environment or the dynamics are known. Absolute pose regression (APR) techniques directly regress the absolute pose from an image input in a known scene using convolutional and spatio-temporal networks. Odometry methods perform relative pose regression (RPR), which predicts the relative pose from known object dynamics (visual or inertial inputs). The localization task can be improved by retrieving information from both data sources in a cross-modal setup, a challenging problem due to the contradictory tasks. In this work, we conduct a benchmark to evaluate deep multimodal fusion based on pose graph optimization and attention networks. Auxiliary and Bayesian learning are utilized for the APR task. We show accuracy improvements for the APR-RPR task and for the RPR-RPR task for aerial vehicles and hand-held devices. We conduct experiments on the EuRoC MAV and PennCOSYVIO datasets, and record and evaluate a novel industry dataset.
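To make the APR/RPR distinction in the abstract concrete, the following minimal sketch (not the paper's method) shows how an absolute pose and a relative pose relate: given two absolute poses as 4x4 homogeneous transforms, the relative pose is the transform taking one frame to the other, and composing the first absolute pose with that relative pose recovers the second. All function names here are illustrative; only numpy is assumed.

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous pose matrix from a 3x3 rotation R and 3-vector t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_a, T_b):
    """Relative pose taking frame a to frame b: T_rel = inv(T_a) @ T_b."""
    return np.linalg.inv(T_a) @ T_b

def rot_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Two absolute poses differing by a 90-degree yaw and a translation
T1 = make_pose(rot_z(0.0), np.array([0.0, 0.0, 0.0]))
T2 = make_pose(rot_z(np.pi / 2), np.array([1.0, 2.0, 0.0]))

# An RPR model would predict T_rel from sensor data; an APR model predicts T2 directly
T_rel = relative_pose(T1, T2)

# Composing the absolute pose with the relative pose recovers the target absolute pose
T2_recovered = T1 @ T_rel
```

This composition is also the basic constraint exploited by pose graph optimization, where absolute (APR) and relative (RPR) estimates must agree up to noise.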