Paper Title
What Can We Learn from Collective Human Opinions on Natural Language Inference Data?
Paper Authors
Paper Abstract
Despite the subjective nature of many NLP tasks, most NLU evaluations have focused on using the majority label with presumably high agreement as the ground truth. Less attention has been paid to the distribution of human opinions. We collect ChaosNLI, a dataset with a total of 464,500 annotations to study Collective HumAn OpinionS in oft-used NLI evaluation sets. This dataset is created by collecting 100 annotations per example for 3,113 examples in SNLI and MNLI and 1,532 examples in Abductive-NLI. Analysis reveals that: (1) high human disagreement exists in a noticeable amount of examples in these datasets; (2) the state-of-the-art models lack the ability to recover the distribution over human labels; (3) models achieve near-perfect accuracy on the subset of data with a high level of human agreement, whereas they can barely beat a random guess on the data with low levels of human agreement, which compose most of the common errors made by state-of-the-art models on the evaluation sets. This questions the validity of improving model performance on old metrics for the low-agreement part of evaluation datasets. Hence, we argue for a detailed examination of human agreement in future data collection efforts, and evaluating model outputs against the distribution over collective human opinions. The ChaosNLI dataset and experimental scripts are available at https://github.com/easonnie/ChaosNLI.
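The abstract's proposal, evaluating model outputs against the distribution over collective human opinions rather than a single majority label, can be sketched in a few lines. The helper names below are our own, and Jensen-Shannon divergence is used as one plausible distance between a model's softmax output and the empirical label distribution over the 100 annotations per example (the smoothing constant `eps` is an assumption to avoid `log(0)`); this is a minimal illustration, not the paper's exact evaluation code.

```python
import math
from collections import Counter

def label_distribution(annotations, labels=("e", "n", "c")):
    """Turn raw annotator labels (e.g. 100 per example) into a probability
    distribution over the label set (entailment / neutral / contradiction)."""
    counts = Counter(annotations)
    total = len(annotations)
    return [counts[label] / total for label in labels]

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q); eps-smoothing keeps log() finite
    when a label received zero mass."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def jsd(p, q):
    """Jensen-Shannon divergence: symmetric, zero iff p == q,
    bounded above by log(2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical example: 100 annotations split 60/30/10, versus a model
# that puts nearly all its mass on the majority label.
human = label_distribution(["e"] * 60 + ["n"] * 30 + ["c"] * 10)
model = [0.97, 0.02, 0.01]
print(f"JSD(human, model) = {jsd(human, model):.4f}")
```

Under this kind of metric, a model that matches the majority label but collapses the rest of the probability mass is penalized on exactly the high-disagreement examples the abstract highlights, whereas plain accuracy would score it perfectly.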