Title
POLAR: Preference Optimization and Learning Algorithms for Robotics
Authors
Abstract
Parameter tuning for robotic systems is a time-consuming and challenging task that often relies on the domain expertise of the human operator. Moreover, existing learning methods are not well suited to parameter tuning for several reasons, including: the absence of a clear numerical metric for "good robotic behavior"; limited data due to the reliance on real-world experiments; and the large search space of parameter combinations. In this work, we present an open-source MATLAB Preference Optimization and Learning Algorithms for Robotics toolbox (POLAR) for systematically exploring high-dimensional parameter spaces using human-in-the-loop preference-based learning. The aim of this toolbox is to systematically and efficiently accomplish one of two objectives: 1) to optimize robotic behaviors for human operator preference; or 2) to learn the operator's underlying preference landscape in order to better understand the relationship between adjustable parameters and operator preference. The POLAR toolbox achieves these objectives using only subjective feedback mechanisms (pairwise preferences, coactive feedback, and ordinal labels) to infer a Bayesian posterior over the underlying reward function dictating the user's preferences. We demonstrate the performance of the toolbox in simulation and present various applications of human-in-the-loop preference-based learning.
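To make the inference step concrete, the following is a minimal MATLAB sketch of Bayesian posterior inference over a reward function from pairwise preference feedback. This is a generic illustration under stated assumptions (a 1-D parameter grid, a Gaussian process prior, a Bradley-Terry preference likelihood, and prior-sample reweighting), not POLAR's actual interface; all variable names and the synthetic preference data are hypothetical.

```matlab
% Minimal sketch (not the POLAR API): infer a posterior over a reward
% function on a discretized 1-D parameter grid from pairwise preferences.
actions  = linspace(0, 1, 50)';       % candidate parameter values (assumed grid)
nSamples = 2000;                      % number of prior samples of the reward
sigma    = 0.2;                       % assumed GP prior length scale

% Gaussian process prior over reward values at the grid points
K = exp(-(actions - actions').^2 / (2 * sigma^2)) + 1e-6 * eye(numel(actions));
L = chol(K, 'lower');
R = L * randn(numel(actions), nSamples);   % each column is one prior reward sample

% Synthetic preference data: each row [i j] means grid point i was
% preferred over grid point j by the operator (hypothetical feedback)
prefs = [40 10; 35 5; 45 20];

% Reweight prior samples by the Bradley-Terry likelihood of the data:
% P(i preferred over j | R) = sigmoid(R(i) - R(j))
logw = zeros(1, nSamples);
for k = 1:size(prefs, 1)
    d = R(prefs(k, 1), :) - R(prefs(k, 2), :);
    logw = logw + d - log1p(exp(d));  % log sigmoid(d), numerically direct
end
w = exp(logw - max(logw));
w = w / sum(w);                       % normalized importance weights

% Posterior mean reward and the parameter value it ranks highest
postMean = R * w';
[~, best] = max(postMean);
fprintf('Most preferred parameter (posterior mean): %.3f\n', actions(best));
```

Coactive feedback and ordinal labels can be folded into the same scheme by adding their likelihood terms to `logw`; the sketch uses simple importance reweighting of prior samples, whereas a practical implementation would typically use a Laplace approximation or MCMC for the posterior.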