Paper Title
Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis
Paper Authors
Paper Abstract
To what extent do pre-trained language models grasp semantic knowledge regarding the phenomenon of distributivity? In this paper, we introduce DistNLI, a new diagnostic dataset for natural language inference that targets the semantic difference arising from distributivity, and employ the causal mediation analysis framework to quantify model behavior and explore the underlying mechanism in this semantically related task. We find that the extent of models' understanding is associated with model size and vocabulary size. We also provide insights into how models encode such high-level semantic knowledge.
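The causal mediation analysis framework mentioned in the abstract decomposes a model's response to an input change into direct and indirect (mediated) effects by intervening on internal components. The sketch below is a minimal toy illustration of that decomposition, not the paper's actual setup: `mediator` and `output` are hypothetical stand-ins for a hidden component and the model output, and the numeric functions are chosen only to make the effect definitions concrete.

```python
# Toy sketch of causal mediation analysis (illustrative only; the
# functions below are hypothetical, not the paper's model or data).

def mediator(x):
    # Hypothetical internal component (e.g., a hidden state) computed from the input.
    return 2.0 * x

def output(x, m):
    # Hypothetical model output depending on both the input and the mediator.
    return x + 3.0 * m

def total_effect(x, x_alt):
    # Change the input and let the mediator respond naturally.
    return output(x_alt, mediator(x_alt)) - output(x, mediator(x))

def indirect_effect(x, x_alt):
    # Keep the original input, but intervene to set the mediator
    # to its counterfactual value under the alternative input.
    return output(x, mediator(x_alt)) - output(x, mediator(x))

def direct_effect(x, x_alt):
    # Change the input while freezing the mediator at its original value.
    return output(x_alt, mediator(x)) - output(x, mediator(x))

te = total_effect(1.0, 2.0)     # → 7.0
ie = indirect_effect(1.0, 2.0)  # → 6.0
de = direct_effect(1.0, 2.0)    # → 1.0
```

Because this toy model is linear, the total effect equals the sum of the direct and indirect effects; in real networks the decomposition is generally not additive, which is why each effect is measured by its own intervention.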