Paper Title

Multi-Fingered Grasp Planning via Inference in Deep Neural Networks

Paper Authors

Qingkai Lu, Mark Van der Merwe, Balakumar Sundaralingam, Tucker Hermans

Abstract

We propose a novel approach to multi-fingered grasp planning that leverages learned deep neural network models. We train a voxel-based 3D convolutional neural network to predict grasp success probability as a function of both visual information about the object and the grasp configuration. We can then formulate grasp planning as inferring the grasp configuration which maximizes the probability of grasp success. In addition, we learn a prior over grasp configurations as a mixture density network conditioned on our voxel-based object representation. We show that this object-conditional prior improves grasp inference when used with the learned grasp success prediction network, compared to a learned, object-agnostic prior or an uninformed uniform prior. Our work is the first to directly plan high-quality multi-fingered grasps in configuration space using a deep neural network, without the need for an external planner. We validate our inference method by performing multi-fingered grasping on a physical robot. Our experimental results show that our planning method outperforms existing neural-network-based grasp planning methods.
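The core idea of the abstract, treating grasp planning as inference that maximizes log p(success | object, config) + log p(config | object), can be sketched as gradient ascent over the grasp configuration. The sketch below is a minimal toy, not the paper's implementation: the learned 3D CNN success predictor and the mixture density network prior are replaced with simple analytic stand-ins (`log_success_prob` and a single Gaussian prior) so the inference loop itself is runnable; `OPT`, `PRIOR_MEAN`, and all hyperparameters are hypothetical.

```python
import numpy as np

# Hypothetical "best" grasp configuration; stands in for whatever the
# learned success network would favor for a given object.
OPT = np.array([0.3, -0.1, 0.5])

def log_success_prob(theta):
    # Toy stand-in for log p(success | object, theta). In the paper this
    # is a voxel-based 3D CNN; here a smooth synthetic function.
    return -np.sum((theta - OPT) ** 2)

def grad_log_success_prob(theta):
    # Analytic gradient of the toy log-likelihood. With a real network,
    # this would come from backpropagation w.r.t. the grasp configuration.
    return -2.0 * (theta - OPT)

# Stand-in for the object-conditional prior (a mixture density network
# in the paper); here a single Gaussian centered at an initial guess.
PRIOR_MEAN = np.zeros(3)
PRIOR_STD = 1.0

def grad_log_prior(theta):
    return -(theta - PRIOR_MEAN) / PRIOR_STD ** 2

def plan_grasp(theta0, steps=200, lr=0.05):
    """MAP-style inference: gradient ascent on
    log p(success | theta) + log p(theta)."""
    theta = theta0.copy()
    for _ in range(steps):
        theta += lr * (grad_log_success_prob(theta) + grad_log_prior(theta))
    return theta

theta_star = plan_grasp(np.zeros(3))
```

With these quadratic stand-ins the ascent converges to the closed-form MAP point (2/3)·`OPT`, a compromise between the success-likelihood peak and the prior mean; the same loop structure applies when the gradients come from a trained network instead.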
