Paper Title
Measuring Interventional Robustness in Reinforcement Learning
Paper Authors
Paper Abstract
Recent work in reinforcement learning has focused on several characteristics of learned policies that go beyond maximizing reward. These properties include fairness, explainability, generalization, and robustness. In this paper, we define interventional robustness (IR), a measure of how much variability is introduced into learned policies by incidental aspects of the training procedure, such as the order of training data or the particular exploratory actions taken by agents. A training procedure has high IR when the agents it produces take very similar actions under intervention, despite variation in these incidental aspects of the training procedure. We develop an intuitive, quantitative measure of IR and calculate it for eight algorithms in three Atari environments across dozens of interventions and states. From these experiments, we find that IR varies with the amount of training and type of algorithm and that high performance does not imply high IR, as one might expect.
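The abstract does not spell out how the IR measure is computed. The following is a minimal sketch of one plausible action-agreement statistic, assuming a set of agents trained under different incidental conditions (e.g., random seeds), a fixed list of intervened states, and a hypothetical `agent.act(state)` interface; it illustrates the idea of "very similar actions under intervention" as pairwise agreement and is not the paper's actual definition.

```python
import numpy as np

def action_agreement_score(agents, intervened_states):
    """Hypothetical IR-style score: for each intervened state, compute the
    fraction of agent pairs that choose the same action, then average over
    states. Returns a value in [0, 1]; 1 means all agents always agree."""
    per_state_scores = []
    for state in intervened_states:
        # Each agent acts greedily on the same intervened state.
        actions = [agent.act(state) for agent in agents]
        n = len(actions)
        # Count agreeing pairs among all n*(n-1)/2 unordered pairs.
        agreeing_pairs = sum(
            actions[i] == actions[j]
            for i in range(n)
            for j in range(i + 1, n)
        )
        per_state_scores.append(agreeing_pairs / (n * (n - 1) / 2))
    return float(np.mean(per_state_scores))
```

Under this sketch, a training procedure with high IR would yield agents whose scores stay close to 1 across many intervened states, even though each agent saw different training-data orderings or exploratory actions.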