Paper Title


Causal Transfer for Imitation Learning and Decision Making under Sensor-shift

Authors

Jalal Etesami, Philipp Geiger

Abstract


Learning from demonstrations (LfD) is an efficient paradigm to train AI agents. But major issues arise when there are differences between (a) the demonstrator's own sensory input, (b) our sensors that observe the demonstrator, and (c) the sensory input of the agent we train. In this paper, we propose a causal model-based framework for transfer learning under such "sensor-shifts", for two common LfD tasks: (1) inferring the effect of the demonstrator's actions and (2) imitation learning. First, we rigorously analyze, at the population level, to what extent the relevant underlying mechanisms (the action effects and the demonstrator policy) can be identified and transferred from the available observations together with prior knowledge of sensor characteristics, and we devise an algorithm to infer these mechanisms. Then we introduce several proxy methods which are easier to calculate, estimate from finite data, and interpret than the exact solutions, alongside theoretical bounds on their closeness to the exact ones. We validate our two main methods on simulated and semi-real-world data.
