Paper Title

AutoDIME: Automatic Design of Interesting Multi-Agent Environments

Authors

Ingmar Kanitscheider, Harri Edwards

Abstract

Designing a distribution of environments in which RL agents can learn interesting and useful skills is a challenging and poorly understood task; for multi-agent environments the difficulties are only exacerbated. One approach is to train a second RL agent, called a teacher, that samples environments conducive to the learning of student agents. However, most previous proposals for teacher rewards do not generalize straightforwardly to the multi-agent setting. We examine a set of intrinsic teacher rewards derived from prediction problems that can be applied in multi-agent settings and evaluate them in MuJoCo tasks such as multi-agent Hide and Seek as well as a diagnostic single-agent maze task. Of the intrinsic rewards considered, we found value disagreement to be the most consistent across tasks, leading to faster and more reliable emergence of advanced skills in Hide and Seek and the maze task. Another candidate intrinsic reward, value prediction error, also worked well in Hide and Seek but was susceptible to noisy-TV style distractions in stochastic environments. Policy disagreement performed well in the maze task but did not speed up learning in Hide and Seek. Our results suggest that intrinsic teacher rewards, and in particular value disagreement, are a promising approach for automating both single- and multi-agent environment design.
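
The abstract compares three prediction-based intrinsic teacher rewards: value disagreement, value prediction error, and policy disagreement. Below is a minimal sketch of how each signal could be scored for a candidate environment. The function names, interfaces, and the particular disagreement measures (standard deviation over ensemble value estimates, KL divergence from the ensemble-average policy) are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def value_disagreement(ensemble_values):
    """Teacher reward: disagreement (std) among an ensemble of value
    estimates for a candidate environment. Hypothetical interface:
    ensemble_values has shape (n_ensemble,)."""
    return float(np.std(ensemble_values))

def value_prediction_error(predicted_value, observed_return):
    """Teacher reward: squared error between a value prediction and the
    return the student actually achieved. As the abstract notes, this
    signal can be hijacked by irreducible randomness ("noisy-TV" style
    distractions) in stochastic environments."""
    return float((predicted_value - observed_return) ** 2)

def policy_disagreement(ensemble_action_probs, eps=1e-8):
    """Teacher reward: mean KL divergence of each ensemble policy from
    the ensemble-average policy. Hypothetical interface:
    ensemble_action_probs has shape (n_ensemble, n_actions), rows sum
    to 1."""
    probs = np.asarray(ensemble_action_probs, dtype=float)
    mean_policy = probs.mean(axis=0)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(mean_policy + eps)),
                axis=1)
    return float(kl.mean())

# Toy usage: a teacher would prefer environments where its reward is large.
values = np.array([0.2, 0.9, 0.5])            # ensemble value estimates
print(value_disagreement(values))              # high std => informative env
print(value_prediction_error(0.7, 0.1))        # large error => surprising env
print(policy_disagreement([[0.9, 0.1],
                           [0.2, 0.8]]))       # policies disagree => explore
```

Intuitively, all three rewards steer the teacher toward environments where the student's learned predictors are uncertain or wrong; value disagreement is the variant the paper found most consistent across tasks.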
