Title

Probing neural language models for understanding of words of estimative probability

Authors

Damien Sileo, Marie-Francine Moens

Abstract


Words of estimative probability (WEP) are expressions of a statement's plausibility (probably, maybe, likely, doubt, likely, unlikely, impossible...). Multiple surveys demonstrate the agreement of human evaluators when assigning numerical probability levels to WEP. For example, highly likely corresponds to a median chance of 0.90±0.08 in the survey of Fagen-Ulmschneider (2015). In this work, we measure the ability of neural language processing models to capture the consensual probability level associated with each WEP. Firstly, we use the UNLI dataset (Chen et al., 2020), which associates premises and hypotheses with their perceived joint probability p, to construct prompts, e.g. "[PREMISE]. [WEP], [HYPOTHESIS]." and assess whether language models can predict whether the WEP consensual probability level is close to p. Secondly, we construct a dataset of WEP-based probabilistic reasoning, to test whether language models can reason with WEP compositions. When prompted "[EVENTA] is likely. [EVENTB] is impossible.", a causal language model should not express that [EVENTA&B] is likely. We show that both tasks are unsolved by off-the-shelf English language models, but that fine-tuning leads to transferable improvement.
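The first probing task described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it builds a "[PREMISE]. [WEP], [HYPOTHESIS]." prompt and labels whether the WEP's consensus probability is close to the annotated joint probability p. Only the 0.90 value for "highly likely" comes from the abstract; the other consensus values and the tolerance are illustrative assumptions.

```python
# Illustrative consensus probability levels for a few WEPs.
# "highly likely" (0.90) is the median cited in the abstract
# (Fagen-Ulmschneider, 2015); the others are placeholder values.
WEP_CONSENSUS = {
    "highly likely": 0.90,
    "likely": 0.70,        # placeholder, for illustration only
    "unlikely": 0.20,      # placeholder, for illustration only
}

def make_example(premise, hypothesis, wep, p, tolerance=0.10):
    """Build a prompt and a binary label: is the WEP consistent with p?

    The tolerance threshold is an assumption made for this sketch.
    """
    prompt = f"{premise}. {wep.capitalize()}, {hypothesis}."
    label = abs(WEP_CONSENSUS[wep] - p) <= tolerance
    return prompt, label

prompt, label = make_example(
    "A man is training his dog", "the dog obeys him", "highly likely", 0.85
)
print(prompt)  # A man is training his dog. Highly likely, the dog obeys him.
print(label)   # True, since |0.90 - 0.85| <= 0.10
```

A probing classifier would then be trained to predict this label from the prompt alone, testing whether the model's representations encode the consensus probability level of each WEP.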
