Paper Title

Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation

Paper Authors

Zoey Qiuyu Chen, Karl Van Wyk, Yu-Wei Chao, Wei Yang, Arsalan Mousavian, Abhishek Gupta, Dieter Fox

Paper Abstract

Dexterous robotic hands have the capability to interact with a wide variety of household objects to perform tasks like grasping. However, learning robust real-world grasping policies for arbitrary objects has proven challenging due to the difficulty of generating high-quality training data. In this work, we propose a learning system (ISAGrasp) for leveraging a small number of human demonstrations to bootstrap the generation of a much larger dataset containing successful grasps on a variety of novel objects. Our key insight is to use a correspondence-aware implicit generative model to deform object meshes and demonstrated human grasps in order to generate a diverse dataset of novel objects and successful grasps for supervised learning, while maintaining semantic realism. We use this dataset to train a robust grasping policy in simulation which can be deployed in the real world. We demonstrate grasping performance with a four-fingered Allegro hand in both simulation and the real world, and show that this method can handle entirely new semantic classes and achieve a 79% success rate on grasping unseen objects in the real world.
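For intuition, here is a minimal, hypothetical Python sketch of the data-augmentation loop the abstract describes: deform an object mesh, carry the demonstrated grasp contacts over through preserved vertex correspondences, and keep the pairs that survive a simulation check. The deformation function below is a toy stand-in for the paper's correspondence-aware implicit generative model, and the simulation check is a placeholder; all names are illustrative rather than the authors' actual API.

```python
# Sketch of implicit-shape-augmentation: bootstrap many (object, grasp)
# pairs from one human demonstration. Hypothetical stand-ins throughout.
import numpy as np

rng = np.random.default_rng(0)

def deform_mesh(vertices: np.ndarray) -> np.ndarray:
    """Toy stand-in for the implicit generative model: a smooth random
    deformation that preserves vertex ordering, so vertex indices remain
    valid correspondences between the source and deformed shapes."""
    scale = rng.uniform(0.8, 1.2, size=3)          # anisotropic scaling
    bend = 0.05 * np.sin(vertices[:, :1] * np.pi)  # low-frequency warp
    return vertices * scale + bend

def transfer_grasp(contact_idx: np.ndarray, deformed: np.ndarray) -> np.ndarray:
    """Map demonstrated fingertip contacts onto the deformed shape via the
    preserved vertex correspondences."""
    return deformed[contact_idx]

def grasp_succeeds_in_sim(mesh: np.ndarray, contacts: np.ndarray) -> bool:
    """Placeholder for a physics rollout (e.g., an Allegro-hand grasp in a
    simulator); here every candidate is accepted for illustration."""
    return True

# One toy demonstration: a point set and four fingertip contact vertices.
demo_vertices = rng.normal(size=(500, 3))
demo_contacts = rng.choice(500, size=4, replace=False)

augmented_dataset = []
for _ in range(100):
    new_mesh = deform_mesh(demo_vertices)
    new_contacts = transfer_grasp(demo_contacts, new_mesh)
    if grasp_succeeds_in_sim(new_mesh, new_contacts):
        augmented_dataset.append((new_mesh, new_contacts))

print(f"Generated {len(augmented_dataset)} augmented (object, grasp) pairs")
```

The correspondence-preserving property is what lets a single demonstrated grasp supervise many deformed variants: because each deformation keeps vertex identities, the grasp transfers without re-annotation, and the simulation filter keeps only the transfers that remain successful.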
