Paper Title
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
Paper Authors
Paper Abstract
Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts, which instead exhibit an inverse scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT & GPT-3) of varying sizes (125M - 175B), (2) LMs further trained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all LM types perform worse on negated prompts as they scale, and all show a huge gap relative to human performance when comparing the average score on original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches for building LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at https://github.com/joeljang/negated-prompts-for-llms
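The evaluation setup the abstract describes, scoring an LM's zero-shot answer under an original prompt versus its negated counterpart, can be illustrated with a minimal sketch. This is not the authors' released code; it assumes Hugging Face Transformers with the smallest evaluated model (facebook/opt-125m), and the example task, prompt pair, and helper names (option_logprob, pick) are illustrative assumptions rather than the paper's datasets.

```python
# Minimal sketch: compare an LM's zero-shot choice on an original vs. a
# negated prompt by ranking answer options via summed token log-likelihood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model.eval()

def option_logprob(prompt: str, option: str) -> float:
    """Sum of token log-probabilities of `option` conditioned on `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Score only the option tokens; logits at position i-1 predict token i.
    for i in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, i - 1, full_ids[0, i]].item()
    return total

def pick(prompt: str, options: list) -> str:
    """Zero-shot answer: the option the LM assigns the highest likelihood."""
    return max(options, key=lambda o: option_logprob(prompt, o))

# Hypothetical original/negated prompt pair for a yes/no task.
original = "Answer the following question. Is the sky blue on a clear day? Answer:"
negated = "Do not answer the following question correctly. Is the sky blue on a clear day? Answer:"
options = [" yes", " no"]

print("original prompt ->", pick(original, options))
print("negated prompt  ->", pick(negated, options))
```

Under the inverse scaling result reported above, larger models would be expected to keep answering " yes" to the negated prompt, ignoring the negation, which is what comparing accuracy over such prompt pairs across model sizes would measure.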