Paper Title


Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models

Authors

Ben Prystawski, Paul Thibodeau, Christopher Potts, Noah D. Goodman

Abstract


Probabilistic models of language understanding are valuable tools for investigating human language use. However, they need to be hand-designed for a particular domain. In contrast, large language models (LLMs) are trained on text that spans a wide array of domains, but they lack the structure and interpretability of probabilistic models. In this paper, we use chain-of-thought prompts to introduce structures from probabilistic models into LLMs. We explore this approach in the case of metaphor understanding. Our chain-of-thought prompts lead language models to infer latent variables and reason about their relationships in order to choose appropriate paraphrases for metaphors. The latent variables and relationships chosen are informed by theories of metaphor understanding from cognitive psychology. We apply these prompts to the two largest versions of GPT-3 and show that they can improve performance in a paraphrase selection task.
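The prompting approach the abstract describes can be illustrated with a minimal sketch: a prompt that asks the model to reason step by step about latent variables (the metaphor's topic, its vehicle, and the feature being transferred) before selecting a paraphrase. The wording, variable names, and candidate paraphrases below are hypothetical illustrations under that reading of the abstract, not the authors' actual prompts or stimuli.

```python
# Hypothetical sketch of a psychologically-informed chain-of-thought
# prompt for metaphor paraphrase selection. The reasoning steps and
# example sentences are illustrative, not taken from the paper.

def build_cot_prompt(metaphor: str, options: list[str]) -> str:
    """Build a prompt that walks the model through latent variables
    (topic, vehicle, transferred feature) before it picks a paraphrase."""
    reasoning_steps = (
        "First, identify the topic of the metaphor.\n"
        "Next, identify the vehicle (the concept the topic is compared to).\n"
        "Then, state the feature the vehicle transfers to the topic.\n"
        "Finally, choose the paraphrase that best expresses that feature.\n"
    )
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    return (
        f"Metaphor: {metaphor}\n"
        f"{reasoning_steps}"
        f"Candidate paraphrases:\n{numbered}\n"
        "Reasoning:"
    )

prompt = build_cot_prompt(
    "My lawyer is a shark.",
    ["My lawyer is aggressive.", "My lawyer swims well."],
)
print(prompt)
```

The resulting string would then be sent to a completions-style model (e.g. GPT-3), whose generated reasoning and final choice are scored against the correct paraphrase.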
