Title

Generative adversarial training of product of policies for robust and adaptive movement primitives

Authors

Emmanuel Pignat, Hakan Girgin, Sylvain Calinon

Abstract

In learning from demonstrations, many generative models of trajectories make simplifying assumptions of independence. Correctness is sacrificed in the name of tractability and speed of the learning phase. The ignored dependencies, which often are the kinematic and dynamic constraints of the system, are then only restored when synthesizing the motion, which introduces possibly heavy distortions. In this work, we propose to use those approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework to stabilize and accelerate the learning procedure. The two problems of adaptability and robustness are addressed with our method. In order to adapt the motions to varying contexts, we propose to use a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and varying dynamics is ensured with the use of stochastic gradient descent and ensemble methods to learn the stochastic dynamics. Two experiments are performed on a 7-DoF manipulator to validate the approach.
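
To make the abstract's central construct concrete: a product of Gaussian policies fuses per-task-space experts into a single configuration-space command by precision-weighted averaging, so tighter (lower-variance) experts dominate the directions they constrain. Below is a minimal numpy sketch of this fusion, assuming linear maps (e.g., task-space Jacobians) from the command space to each task space; the function name fuse_gaussian_policies and all numbers in the usage example are illustrative assumptions, not taken from the paper.

    import numpy as np

    def fuse_gaussian_policies(mus, covs, maps):
        """Fuse Gaussian policies defined in several task spaces into one
        Gaussian over the configuration-space command (product of Gaussians).

        mus  : list of mean commands, one per task space
        covs : list of covariance matrices, one per task space
        maps : list of linear maps from the configuration space to each
               task space (e.g., task-space Jacobians)
        """
        dim = maps[0].shape[1]
        precision = np.zeros((dim, dim))
        info = np.zeros(dim)
        for mu, cov, A in zip(mus, covs, maps):
            lam = np.linalg.inv(cov)      # precision of this expert
            precision += A.T @ lam @ A    # accumulate mapped precisions
            info += A.T @ lam @ mu        # accumulate information vectors
        cov_fused = np.linalg.inv(precision)
        mu_fused = cov_fused @ info
        return mu_fused, cov_fused

    # Hypothetical example: two 2-D task-space experts over a 3-D command.
    A1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    A2 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    mu, cov = fuse_gaussian_policies(
        mus=[np.array([0.2, -0.1]), np.array([0.0, 0.3])],
        covs=[0.05 * np.eye(2), 0.10 * np.eye(2)],
        maps=[A1, A2],
    )
    print(mu)  # precision-weighted compromise between the two targets

Because each expert's covariance states how strongly it constrains each direction, re-parametrizing the task spaces (new maps or means) changes the fused command automatically, which is the mechanism behind the adaptability claim in the abstract.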
