Paper Title

End-to-End Differentiable 6DoF Object Pose Estimation with Local and Global Constraints

Paper Authors

Anshul Gupta, Joydeep Medhi, Aratrik Chattopadhyay, Vikram Gupta

Paper Abstract

Inferring the 6DoF pose of an object from a single RGB image is an important but challenging task, especially under heavy occlusion. While recent approaches improve upon two-stage approaches by training an end-to-end pipeline, they do not leverage local and global constraints. In this paper, we propose pairwise feature extraction to integrate local constraints, and triplet regularization to integrate global constraints for improved 6DoF object pose estimation. Coupled with better augmentation, our approach achieves state-of-the-art results on the challenging Occlusion Linemod dataset, with a 9% improvement over the previous state of the art, and achieves competitive results on the Linemod dataset.
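
The abstract names triplet regularization as the mechanism for injecting global constraints but does not spell out its form here. As a rough illustration only, the sketch below assumes a standard triplet margin loss over L2-normalized feature embeddings in PyTorch; the function name, margin value, and the way positives/negatives are sampled are hypothetical and may differ from the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def triplet_regularization(anchor, positive, negative, margin=0.2):
    """Illustrative triplet margin loss over L2-normalized embeddings.

    anchor, positive, negative: (N, D) feature tensors, where each positive
    shares the anchor's identity/region and each negative does not.
    NOTE: hypothetical sketch; the paper's exact regularizer may differ.
    """
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negative = F.normalize(negative, dim=1)
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # squared distance anchor-positive
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # squared distance anchor-negative
    # Hinge: encourage positives to be closer than negatives by at least `margin`.
    return F.relu(d_pos - d_neg + margin).mean()


# Toy usage with random features (shapes only, not real data):
if __name__ == "__main__":
    a, p, n = (torch.randn(8, 128) for _ in range(3))
    print(triplet_regularization(a, p, n).item())
```

In pose estimation settings, positives are typically features drawn from the same object or keypoint region and negatives from other objects or background, so the hinge term pulls matching features together and pushes non-matching features apart in embedding space.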
