Paper Title


Indirect Object-to-Robot Pose Estimation from an External Monocular RGB Camera

Authors

Jonathan Tremblay, Stephen Tyree, Terry Mosier, Stan Birchfield

Abstract


We present a robotic grasping system that uses a single external monocular RGB camera as input. The object-to-robot pose is computed indirectly by combining the output of two neural networks: one that estimates the object-to-camera pose, and another that estimates the robot-to-camera pose. Both networks are trained entirely on synthetic data, relying on domain randomization to bridge the sim-to-real gap. Because the latter network performs online camera calibration, the camera can be moved freely during execution without affecting the quality of the grasp. Experimental results analyze the effect of camera placement, image resolution, and pose refinement in the context of grasping several household objects. We also present results on a new set of 28 textured household toy grocery objects, which have been selected to be accessible to other researchers. To aid reproducibility of the research, we offer 3D scanned textured models, along with pre-trained weights for pose estimation.
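The indirect computation described in the abstract amounts to composing two homogeneous transforms: inverting the robot-to-camera estimate and chaining it with the object-to-camera estimate. The sketch below illustrates this frame composition; the variable names and numeric poses are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical network outputs (identity rotations for simplicity):
# pose of the object in the camera frame, and pose of the robot base
# in the camera frame.
T_cam_obj = make_pose(np.eye(3), [0.1, 0.0, 0.8])
T_cam_robot = make_pose(np.eye(3), [-0.2, 0.0, 1.0])

# Indirect object-to-robot pose: invert the robot-to-camera estimate
# and compose it with the object-to-camera estimate.
T_robot_obj = np.linalg.inv(T_cam_robot) @ T_cam_obj

print(T_robot_obj[:3, 3])  # object position expressed in the robot frame
```

Because the robot-to-camera network re-estimates `T_cam_robot` online, moving the camera only changes that one factor; the composed object-to-robot pose remains valid without manual recalibration.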
