Paper Title

Towards Generalization and Data Efficient Learning of Deep Robotic Grasping

Paper Authors

Zhixin Chen, Mengxiang Lin, Zhixin Jia, Shibo Jian

Paper Abstract

Deep reinforcement learning (DRL) has been proven to be a powerful paradigm for learning complex control policies autonomously. Numerous recent applications of DRL in robotic grasping have successfully trained robotic agents end-to-end, mapping visual inputs directly into control instructions, but the amount of training data required may hinder these applications in practice. In this paper, we propose a DRL-based robotic visual grasping framework in which visual perception and the control policy are trained separately rather than end-to-end. The visual perception module produces physical descriptions of the objects to be grasped, and the policy makes use of them to decide optimal actions based on DRL. Benefiting from this explicit representation of objects, the policy is expected to generalize better to new objects and environments. In addition, the policy can be trained in simulation and transferred to a real robotic system without any further training. We evaluate our framework on a real-world robotic system on a number of grasping tasks, such as semantic grasping, clustered object grasping, and moving object grasping. The results show impressive robustness and generalization of our system.
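
The abstract outlines a decoupled pipeline: a separately trained perception stage maps camera images to explicit physical descriptions of the target objects, and a DRL policy consumes those descriptions rather than raw pixels, which is what allows a simulation-trained policy to be reused on the real robot. The sketch below illustrates that interface only; the class names, the fields of ObjectDescription, and the action layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class ObjectDescription:
    """Hypothetical explicit object state produced by the perception module."""
    position: np.ndarray   # 3D position of the target object (metres)
    orientation: float     # in-plane rotation (radians)
    size: np.ndarray       # bounding-box extents (metres)
    category_id: int       # semantic class, enabling semantic grasping


class PerceptionModule:
    """Stand-in for the separately trained visual perception stage.

    In the paper this maps camera images to physical descriptions of the
    graspable objects; here it simply returns a fixed dummy description.
    """
    def observe(self, image: np.ndarray) -> ObjectDescription:
        return ObjectDescription(
            position=np.array([0.4, 0.0, 0.05]),
            orientation=0.0,
            size=np.array([0.04, 0.04, 0.08]),
            category_id=3,
        )


class GraspPolicy:
    """Stand-in for the DRL policy that consumes the explicit object state.

    Because it never sees raw pixels, the same policy could in principle be
    trained in simulation and transferred to the real system unchanged.
    """
    def act(self, desc: ObjectDescription) -> np.ndarray:
        # Hypothetical action: place the gripper above the object, aligned to it.
        target = desc.position + np.array([0.0, 0.0, desc.size[2] / 2])
        return np.concatenate([target, [desc.orientation]])


if __name__ == "__main__":
    image = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder camera frame
    desc = PerceptionModule().observe(image)
    action = GraspPolicy().act(desc)
    print("grasp action (x, y, z, yaw):", action)
```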
