Paper Title
OnePose: One-Shot Object Pose Estimation without CAD Models

Paper Authors

Jiaming Sun, Zihao Wang, Siyu Zhang, Xingyi He, Hongcheng Zhao, Guofeng Zhang, Xiaowei Zhou

Paper Abstract

We propose a new method named OnePose for object pose estimation. Unlike existing instance-level or category-level methods, OnePose does not rely on CAD models and can handle objects in arbitrary categories without instance- or category-specific network training. OnePose draws the idea from visual localization and only requires a simple RGB video scan of the object to build a sparse SfM model of the object. Then, this model is registered to new query images with a generic feature matching network. To mitigate the slow runtime of existing visual localization methods, we propose a new graph attention network that directly matches 2D interest points in the query image with the 3D points in the SfM model, resulting in efficient and robust pose estimation. Combined with a feature-based pose tracker, OnePose is able to stably detect and track 6D poses of everyday household objects in real-time. We also collected a large-scale dataset that consists of 450 sequences of 150 objects.
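The core of the pipeline above is matching 2D interest points in the query image against 3D points of the SfM model. As a rough illustration only, the sketch below replaces the paper's graph attention network with plain cosine-similarity mutual nearest-neighbor matching between descriptor sets; the function name and shapes are hypothetical, not from the paper:

```python
import numpy as np

def match_2d_3d(desc_2d, desc_3d):
    """Toy 2D-3D matching by mutual nearest neighbors.

    desc_2d: (N, D) L2-normalized descriptors of query-image keypoints.
    desc_3d: (M, D) L2-normalized aggregated descriptors of SfM points.
    Returns an array of (keypoint_idx, point_idx) mutual-NN pairs.
    """
    sim = desc_2d @ desc_3d.T        # (N, M) cosine similarities
    nn_12 = sim.argmax(axis=1)       # best 3D point for each 2D keypoint
    nn_21 = sim.argmax(axis=0)       # best 2D keypoint for each 3D point
    ids = np.arange(len(nn_12))
    mutual = nn_21[nn_12] == ids     # keep only mutually consistent pairs
    return np.stack([ids[mutual], nn_12[mutual]], axis=1)
```

The resulting 2D–3D correspondences would then feed a standard PnP solver (e.g., with RANSAC) to recover the 6D object pose; OnePose's contribution is learning this matching step so it is faster and more robust than such a nearest-neighbor baseline.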