Paper Title

A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty

Paper Authors

Valentin Peretroukhin, Matthew Giamou, David M. Rosen, W. Nicholas Greene, Nicholas Roy, Jonathan Kelly

Paper Abstract

Accurate rotation estimation is at the heart of robot perception tasks such as visual odometry and object pose estimation. Deep neural networks have provided a new way to perform these tasks, and the choice of rotation representation is an important part of network design. In this work, we present a novel symmetric matrix representation of the 3D rotation group, SO(3), with two important properties that make it particularly suitable for learned models: (1) it satisfies a smoothness property that improves convergence and generalization when regressing large rotation targets, and (2) it encodes a symmetric Bingham belief over the space of unit quaternions, permitting the training of uncertainty-aware models. We empirically validate the benefits of our formulation by training deep neural rotation regressors on two data modalities. First, we use synthetic point-cloud data to show that our representation leads to superior predictive accuracy over existing representations for arbitrary rotation targets. Second, we use image data collected onboard ground and aerial vehicles to demonstrate that our representation is amenable to an effective out-of-distribution (OOD) rejection technique that significantly improves the robustness of rotation estimates to unseen environmental effects and corrupted input images, without requiring the use of an explicit likelihood loss, stochastic sampling, or an auxiliary classifier. This capability is key for safety-critical applications where detecting novel inputs can prevent catastrophic failure of learned models.
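To make the representation concrete, the following is a minimal NumPy sketch of one way such a symmetric-matrix rotation output can be decoded, assuming the network regresses the ten free entries of a 4x4 symmetric matrix A and the predicted unit quaternion is the eigenvector of A associated with the smallest eigenvalue (the mode of a Bingham belief proportional to exp(-q^T A q)). The function names and the upper-triangle parameterization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vec_to_sym_matrix(theta):
    """Map a 10-dim network output to a 4x4 symmetric matrix A.

    Illustrative parameterization: the ten values fill the upper
    triangle (including the diagonal), and symmetry is enforced by
    mirroring the off-diagonal entries.
    """
    A = np.zeros((4, 4))
    iu = np.triu_indices(4)
    A[iu] = theta
    A = A + A.T - np.diag(np.diag(A))
    return A

def quaternion_from_sym_matrix(A):
    """Recover the rotation as the unit quaternion minimizing q^T A q,
    i.e. the eigenvector of A with the smallest eigenvalue (the mode
    of the associated Bingham belief)."""
    eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order
    q = eigvecs[:, 0]
    return q / np.linalg.norm(q)

# Example: a random 10-dim "network output" decoded to a unit quaternion.
theta = np.random.randn(10)
A = vec_to_sym_matrix(theta)
q = quaternion_from_sym_matrix(A)
print(q, np.linalg.norm(q))  # unit-norm quaternion
```

Because the same matrix A parameterizes the Bingham belief over unit quaternions, one natural uncertainty proxy is how peaked that belief is, e.g. the gaps between the smallest eigenvalue of A and the remaining eigenvalues; thresholding such a dispersion measure is the kind of out-of-distribution rejection the abstract refers to, without any sampling or auxiliary classifier.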
