Paper Title
Diversity Policy Gradient for Sample Efficient Quality-Diversity Optimization
Paper Authors
Paper Abstract
A fascinating aspect of nature lies in its ability to produce a large and diverse collection of organisms that are all high-performing in their niche. By contrast, most AI algorithms focus on finding a single efficient solution to a given problem. Aiming for diversity in addition to performance is a convenient way to deal with the exploration-exploitation trade-off that plays a central role in learning. It also increases robustness when the returned collection contains several working solutions to the considered problem, making it well-suited for real-world applications such as robotics. Quality-Diversity (QD) methods are evolutionary algorithms designed for this purpose. This paper proposes a novel algorithm, QDPG, which combines the strengths of Policy Gradient algorithms and Quality-Diversity approaches to produce a collection of diverse and high-performing neural policies in continuous control environments. The main contribution of this work is the introduction of a Diversity Policy Gradient (DPG) that exploits information at the time-step level to drive policies towards more diversity in a sample-efficient manner. Specifically, QDPG selects neural controllers from a MAP-Elites grid and uses two gradient-based mutation operators to improve both quality and diversity. Our results demonstrate that QDPG is significantly more sample-efficient than its evolutionary competitors.
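To make the loop described in the abstract concrete, below is a minimal sketch of the QDPG outer loop: policies are selected from a MAP-Elites grid and mutated by one of two gradient-based operators, one for quality and one for diversity. This is not the authors' implementation; the policies here are toy parameter vectors, the behaviour descriptor is one-dimensional, and the functions `evaluate`, `quality_gradient`, and `diversity_gradient` are hypothetical placeholders standing in for the actual rollouts and policy-gradient updates (e.g. TD3-style quality gradients and the time-step-level Diversity Policy Gradient) used in the paper.

```python
import numpy as np

# Toy setup: a policy is a flat parameter vector; its behaviour descriptor is
# a scalar in [0, 1] discretised into a 1-D MAP-Elites grid.
rng = np.random.default_rng(0)
PARAM_DIM, GRID_CELLS = 8, 10

def evaluate(theta):
    """Hypothetical rollout: return (fitness, behaviour descriptor)."""
    fitness = -np.sum(theta ** 2)                  # toy objective
    descriptor = 1.0 / (1.0 + np.exp(-theta[0]))   # toy descriptor in [0, 1]
    return fitness, descriptor

def quality_gradient(theta):
    """Placeholder for the quality policy gradient (a TD3-style update in
    the paper); here, simply the gradient of the toy objective."""
    return -2.0 * theta

def diversity_gradient(theta, archive):
    """Placeholder for the Diversity Policy Gradient: push this policy's
    descriptor away from the descriptors already stored in the archive."""
    _, d = evaluate(theta)
    others = [desc for _, desc in archive.values()]
    if not others:
        return rng.normal(size=PARAM_DIM) * 0.01
    grad = np.zeros(PARAM_DIM)
    grad[0] = np.sign(d - np.mean(others))         # move away from the crowd
    return grad

archive = {}  # cell index -> (fitness, descriptor)
params = {}   # cell index -> policy parameters

def insert(theta):
    """Standard MAP-Elites insertion: keep the best policy per grid cell."""
    fitness, d = evaluate(theta)
    cell = min(int(d * GRID_CELLS), GRID_CELLS - 1)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, d)
        params[cell] = theta

# Initialise the grid with random policies, then alternate the two mutations.
for _ in range(20):
    insert(rng.normal(size=PARAM_DIM))

for it in range(200):
    theta = params[rng.choice(list(params))].copy()        # uniform selection
    if it % 2 == 0:
        theta += 0.05 * quality_gradient(theta)            # quality mutation
    else:
        theta += 0.5 * diversity_gradient(theta, archive)  # diversity mutation
    insert(theta)

print(f"cells filled: {len(archive)}/{GRID_CELLS}, "
      f"best fitness: {max(f for f, _ in archive.values()):.3f}")
```

The design point the sketch preserves is the division of labour: the archive alone handles elitism per behaviour niche, while the two gradient operators specialise, one climbing the return and the other climbing a diversity signal, rather than a single mutation trying to do both.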