Paper Title

DVGG: Deep Variational Grasp Generation for Dextrous Manipulation

Paper Authors

Wei Wei, Daheng Li, Peng Wang, Yiming Li, Wanyi Li, Yongkang Luo, Jun Zhong

Paper Abstract

Grasping with anthropomorphic robotic hands involves far richer hand-object interactions than parallel-jaw grippers. Modeling these interactions is essential to the study of dextrous multi-finger manipulation. This work presents DVGG, an efficient grasp generation network that takes a single-view observation as input and predicts high-quality grasp configurations for unknown objects. Our generative model consists of three components: 1) point cloud completion for the target object based on the partial observation; 2) generation of diverse grasp sets given the complete point cloud; 3) iterative grasp pose refinement for physically plausible grasp optimization. To train our model, we build a large-scale grasping dataset that contains about 300 common object models with 1.5M annotated grasps in simulation. Simulation experiments show that our model predicts robust grasp poses with wide variety and a high success rate. Real robot platform experiments demonstrate that the model trained on our dataset performs well in the real world. Remarkably, our method achieves a grasp success rate of 70.7% for novel objects on the real robot platform, a significant improvement over the baseline methods.
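
To make the three-stage pipeline concrete, here is a minimal, hypothetical PyTorch sketch of such a generate-and-refine loop. The module names, grasp dimensionality (wrist pose plus finger joints), completion stub, and refinement score are illustrative assumptions, not the paper's actual architecture:

import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Toy PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))

    def forward(self, pts):                     # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values  # global feature: (B, feat_dim)

class GraspCVAE(nn.Module):
    """Conditional VAE that samples diverse grasp configurations given a
    point-cloud feature; grasp_dim is assumed to cover wrist pose + joints."""
    def __init__(self, feat_dim=256, grasp_dim=28, z_dim=16):
        super().__init__()
        self.z_dim = z_dim
        self.enc = nn.Linear(feat_dim + grasp_dim, 2 * z_dim)
        self.dec = nn.Sequential(nn.Linear(feat_dim + z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, grasp_dim))

    def forward(self, feat, grasp):             # training pass (reparam trick)
        mu, logvar = self.enc(torch.cat([feat, grasp], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(torch.cat([feat, z], -1)), mu, logvar

    def sample(self, feat, n=10):               # draw n diverse grasps per object
        feat = feat.repeat_interleave(n, 0)
        z = torch.randn(feat.size(0), self.z_dim)
        return self.dec(torch.cat([feat, z], -1))

# Stage 1 (stubbed): complete the partial cloud. Stage 2: sample diverse
# grasps. Stage 3: refine each grasp by gradient steps on a plausibility
# score (placeholder quadratic penalty standing in for a physics-based loss).
encoder, cvae = PointEncoder(), GraspCVAE()
partial = torch.rand(1, 1024, 3)
complete = partial                              # stand-in for the completion net
grasps = cvae.sample(encoder(complete), n=10).detach().requires_grad_(True)
opt = torch.optim.SGD([grasps], lr=0.01)
for _ in range(20):                             # iterative grasp pose refinement
    loss = (grasps ** 2).sum()                  # placeholder plausibility score
    opt.zero_grad(); loss.backward(); opt.step()
print(grasps.shape)                             # torch.Size([10, 28])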
