Paper Title
Learning from others' mistakes: Avoiding dataset biases without modeling them
Paper Authors
Paper Abstract
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended underlying task. Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available. We consider cases where the bias issues may not be explicitly identified, and present a method for training models that learn to ignore these problematic correlations. Our approach relies on the observation that models with limited capacity primarily learn to exploit biases in the dataset. We can leverage the errors of such limited-capacity models to train a more robust model in a product of experts, thus bypassing the need to hand-craft a biased model. We show that this method retains improvements in out-of-distribution settings even when no particular bias is targeted by the biased model.
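
For concreteness, here is a minimal PyTorch sketch of the product-of-experts objective the abstract describes: the combined prediction is proportional to the product of the two models' output distributions (equivalently, the softmax of their summed log-probabilities), trained with standard cross-entropy. The function name poe_loss and its arguments are our own illustration, and we assume the limited-capacity (biased) model has already been trained and is kept frozen.

    import torch
    import torch.nn.functional as F

    def poe_loss(main_logits, biased_logits, labels):
        # Log-probabilities of the main (robust) model; gradients flow here.
        log_p_main = F.log_softmax(main_logits, dim=-1)
        # Log-probabilities of the frozen limited-capacity model, detached
        # so that no gradient reaches the biased model.
        log_p_biased = F.log_softmax(biased_logits.detach(), dim=-1)
        # The summed log-probabilities are the log of the (unnormalized)
        # product of experts; F.cross_entropy renormalizes it via an
        # internal log_softmax before computing the loss.
        return F.cross_entropy(log_p_main + log_p_biased, labels)

Under this objective, examples the weak model already answers confidently (typically by exploiting a bias) contribute little gradient to the main model, while the weak model's mistakes force the main model to compensate; at test time the main model is used on its own.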