Paper Title
Scaling Laws for Reward Model Overoptimization
Paper Authors
Paper Abstract
In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much can hinder ground truth performance, in accordance with Goodhart's law. This effect has been frequently observed, but not carefully measured due to the expense of collecting human preference data. In this work, we use a synthetic setup in which a fixed "gold-standard" reward model plays the role of humans, providing labels used to train a proxy reward model. We study how the gold reward model score changes as we optimize against the proxy reward model using either reinforcement learning or best-of-$n$ sampling. We find that this relationship follows a different functional form depending on the method of optimization, and that in both cases its coefficients scale smoothly with the number of reward model parameters. We also study the effect on this relationship of the size of the reward model dataset, the number of reward model and policy parameters, and the coefficient of the KL penalty added to the reward in the reinforcement learning setup. We explore the implications of these empirical results for theoretical considerations in AI alignment.
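For readers scanning this abstract, the two functional forms referred to above can be summarized as follows (a hedged restatement of the fits reported in the paper, which should be checked against the paper itself; here $d := \sqrt{D_{\mathrm{KL}}(\pi \,\|\, \pi_{\mathrm{init}})}$ is the square-root KL distance of the optimized policy from the initial policy, and $\alpha$, $\beta$ are fitted, method-specific coefficients):

$$R_{\mathrm{BoN}}(d) = d\,\big(\alpha_{\mathrm{BoN}} - \beta_{\mathrm{BoN}}\, d\big), \qquad R_{\mathrm{RL}}(d) = d\,\big(\alpha_{\mathrm{RL}} - \beta_{\mathrm{RL}} \log d\big)$$

In both cases the coefficients are what the abstract describes as scaling smoothly with the number of reward model parameters.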
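To make the best-of-$n$ setup concrete, here is a minimal Python sketch of the evaluation loop it implies: sample $n$ completions from the policy, keep the one the proxy reward model ranks highest, and measure how the fixed gold reward model scores that choice. All names (sample_completions, proxy_rm, gold_rm) are hypothetical placeholders for illustration, not APIs from the paper's codebase.

# Minimal best-of-n (BoN) sketch; all callables are hypothetical placeholders.
from typing import Callable, List


def best_of_n(
    prompt: str,
    n: int,
    sample_completions: Callable[[str, int], List[str]],  # policy sampler
    proxy_rm: Callable[[str, str], float],                 # trained proxy reward model
) -> str:
    """Draw n completions from the policy and keep the one the proxy RM ranks highest."""
    candidates = sample_completions(prompt, n)
    return max(candidates, key=lambda completion: proxy_rm(prompt, completion))


def gold_score_of_bon(
    prompts: List[str],
    n: int,
    sample_completions: Callable[[str, int], List[str]],
    proxy_rm: Callable[[str, str], float],
    gold_rm: Callable[[str, str], float],  # fixed "gold-standard" RM standing in for humans
) -> float:
    """Average gold RM score of the proxy-selected best-of-n completions."""
    chosen = [best_of_n(p, n, sample_completions, proxy_rm) for p in prompts]
    return sum(gold_rm(p, c) for p, c in zip(prompts, chosen)) / len(prompts)

Sweeping n (and hence the amount of optimization pressure applied through the proxy) and plotting the resulting gold scores is the kind of measurement the abstract describes for the best-of-$n$ case.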