Paper Title
AACC: Asymmetric Actor-Critic in Contextual Reinforcement Learning
Paper Authors
Paper Abstract
Reinforcement Learning (RL) techniques have drawn great attention in many challenging tasks, but their performance deteriorates dramatically when applied to real-world problems. Various methods, such as domain randomization, have been proposed to deal with such situations by training agents under different environmental setups, so that they can generalize to different environments during deployment. However, these methods usually do not properly incorporate information about the underlying environmental factors with which the agents interact, and can thus be overly conservative when facing changes in the surroundings. In this paper, we first formalize the task of adapting to changing environmental dynamics in RL as a generalization problem using Contextual Markov Decision Processes (CMDPs). We then propose the Asymmetric Actor-Critic in Contextual RL (AACC), an end-to-end actor-critic method for such generalization tasks. We experimentally demonstrate the improved performance of AACC over existing baselines in a range of simulated environments.
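To make the "asymmetric" idea in the abstract concrete, below is a minimal PyTorch sketch, not the paper's implementation: the critic is conditioned on the privileged environmental context (e.g. sampled dynamics parameters such as mass or friction), which is available during training in simulation, while the actor sees only the regular observation and therefore remains deployable when the context is unobservable. All network sizes, dimensions, and names (OBS_DIM, CTX_DIM, ACT_DIM, Actor, AsymmetricCritic) are illustrative assumptions.

```python
import torch
import torch.nn as nn

OBS_DIM, CTX_DIM, ACT_DIM = 8, 3, 2  # assumed sizes for illustration


class Actor(nn.Module):
    """Policy network: takes the observation only, usable at deployment."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)


class AsymmetricCritic(nn.Module):
    """Value network: conditioned on observation, privileged context, and
    action; the context is only assumed to be available during training."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + CTX_DIM + ACT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs, ctx, act):
        return self.net(torch.cat([obs, ctx, act], dim=-1))


# Usage: only the critic consumes the context; the actor never does,
# so the trained policy can be deployed without knowing the context.
actor, critic = Actor(), AsymmetricCritic()
obs = torch.randn(4, OBS_DIM)   # batch of observations
ctx = torch.randn(4, CTX_DIM)   # sampled environmental factors (privileged)
act = actor(obs)                # actor acts from the observation alone
q = critic(obs, ctx, act)       # critic evaluates with full context
```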