Paper Title

Data Augmentation for Manipulation

Authors

Peter Mitrano, Dmitry Berenson

Abstract

The success of deep learning depends heavily on the availability of large datasets, but in robotic manipulation there are many learning problems for which such datasets do not exist. Collecting these datasets is time-consuming and expensive, and therefore learning from small datasets is an important open problem. Within computer vision, a common approach to a lack of data is data augmentation. Data augmentation is the process of creating additional training examples by modifying existing ones. However, because the types of tasks and data differ, the methods used in computer vision cannot be easily adapted to manipulation. Therefore, we propose a data augmentation method for robotic manipulation. We argue that augmentations should be valid, relevant, and diverse. We use these principles to formalize augmentation as an optimization problem, with the objective function derived from physics and knowledge of the manipulation domain. This method applies rigid body transformations to trajectories of geometric state and action data. We test our method in two scenarios: 1) learning the dynamics of planar pushing of rigid cylinders, and 2) learning a constraint checker for rope manipulation. These two scenarios have different data and label types, yet in both scenarios, training on our augmented data significantly improves performance on downstream tasks. We also show how our augmentation method can be used on real-robot data to enable more data-efficient online learning.
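To make the core idea concrete, the sketch below illustrates what "applying rigid body transformations to trajectories of geometric state and action data" can look like in the planar-pushing scenario. This is only a minimal illustration, not the authors' method: the paper selects transformations by solving an optimization problem for validity, relevance, and diversity, whereas this sketch simply samples a random SE(2) transform. All function and variable names here are hypothetical.

```python
import numpy as np

def random_se2_augment(states, actions, rng):
    """Apply one random planar rigid-body (SE(2)) transform to a trajectory.

    states:  (T, 3) array of planar poses (x, y, theta).
    actions: (T, 2) array of planar pusher displacements (dx, dy).
    Returns the transformed (states, actions) pair as a new training example.
    """
    # Sample a rotation angle and a translation; the paper instead optimizes
    # this choice, these uniform ranges are purely illustrative.
    angle = rng.uniform(-np.pi, np.pi)
    t = rng.uniform(-0.1, 0.1, size=2)
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])

    aug_states = states.copy()
    aug_states[:, :2] = states[:, :2] @ R.T + t  # rotate + translate positions
    aug_states[:, 2] = states[:, 2] + angle      # rotate orientations
    aug_actions = actions @ R.T                  # displacements rotate only
    return aug_states, aug_actions

# Usage: augment a single recorded pushing trajectory.
rng = np.random.default_rng(0)
states = np.zeros((10, 3))   # placeholder trajectory of (x, y, theta)
actions = np.zeros((10, 2))  # placeholder actions of (dx, dy)
aug_states, aug_actions = random_se2_augment(states, actions, rng)
```

Because the same transform is applied to every state and action in the trajectory, the relative motion (and hence the pushing dynamics) is preserved, which is what makes the augmented example physically valid for a translation- and rotation-invariant task.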
