Paper Title

Perspective Flow Aggregation for Data-Limited 6D Object Pose Estimation

Paper Authors

Yinlin Hu, Pascal Fua, Mathieu Salzmann

Abstract

Most recent 6D object pose estimation methods, including unsupervised ones, require many real training images. Unfortunately, for some applications, such as those in space or deep under water, acquiring real images, even unannotated, is virtually impossible. In this paper, we propose a method that can be trained solely on synthetic images, or optionally using a few additional real ones. Given a rough pose estimate obtained from a first network, it uses a second network to predict a dense 2D correspondence field between the image rendered using the rough pose and the real image and infers the required pose correction. This approach is much less sensitive to the domain shift between synthetic and real images than state-of-the-art methods. It performs on par with methods that require annotated real images for training when not using any, and outperforms them considerably when using as few as twenty real images.
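The refinement step described in the abstract can be illustrated with a small sketch: once the second network has predicted dense 2D correspondences between the image rendered at the coarse pose and the real image, each rendered object pixel pairs a known 3D surface point (from the renderer) with an observed 2D location, and the pose correction follows from nonlinear least squares on the reprojection error. The sketch below is numpy-only and uses a simple Gauss-Newton PnP refinement with a numeric Jacobian; it is an illustration of the geometric idea, not the paper's implementation, and all function names are ours.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(pts3d, R, t, K):
    """Project object-space 3D points with pose (R, t) and intrinsics K."""
    cam = pts3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def refine_pose(pts3d, pts2d, K, R0, t0, iters=20):
    """Refine a coarse pose (R0, t0) from 2D-3D correspondences.

    pts3d: 3D points on the object surface (known from the render)
    pts2d: their observed 2D locations in the real image, i.e. the
           rendered pixel positions displaced by the predicted flow
    """
    R, t = R0.copy(), t0.copy()
    for _ in range(iters):
        def residual(dx):
            # Local update: left-multiplied rotation + translation offset
            Rc = rodrigues(dx[:3]) @ R
            tc = t + dx[3:]
            return (project(pts3d, Rc, tc, K) - pts2d).ravel()
        r0 = residual(np.zeros(6))
        # Forward-difference numeric Jacobian, for brevity
        J = np.empty((r0.size, 6))
        eps = 1e-6
        for i in range(6):
            dx = np.zeros(6)
            dx[i] = eps
            J[:, i] = (residual(dx) - r0) / eps
        # Gauss-Newton step: minimize the linearized reprojection error
        dx, *_ = np.linalg.lstsq(J, -r0, rcond=None)
        R = rodrigues(dx[:3]) @ R
        t = t + dx[3:]
    return R, t
```

In the paper's setting, `pts3d` would come from the renderer's per-pixel object-coordinate map at the coarse pose, and `pts2d` from adding the predicted correspondence field to those pixel positions; only the flow prediction requires learning, which is what makes the correction step robust to the synthetic-to-real domain shift.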
