Paper Title


Do Prompts Solve NLP Tasks Using Natural Language?

Paper Authors

Sen Yang, Yunchen Zhang, Leyang Cui, Yue Zhang

Paper Abstract


Thanks to the advanced improvement of large pre-trained language models, prompt-based fine-tuning is shown to be effective on a variety of downstream tasks. Though many prompting methods have been investigated, it remains unknown which type of prompts are the most effective among three types of prompts (i.e., human-designed prompts, schema prompts and null prompts). In this work, we empirically compare the three types of prompts under both few-shot and fully-supervised settings. Our experimental results show that schema prompts are the most effective in general. Besides, the performance gaps tend to diminish when the scale of training data grows large.
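To make the three prompt categories concrete, here is a minimal sketch (not taken from the paper) that instantiates each type for a hypothetical sentiment-classification input; the specific templates, function names, and the `[MASK]` placeholder convention are illustrative assumptions, not the paper's actual templates.

```python
# Illustrative examples of the three prompt types compared in the paper.
# All templates below are hypothetical; the paper's own templates may differ.

def human_designed_prompt(text: str) -> str:
    # Hand-crafted natural-language template with a mask slot
    return f"{text} It was [MASK]."

def schema_prompt(text: str) -> str:
    # Structured key-value style template rather than fluent prose
    return f"text: {text} sentiment: [MASK]"

def null_prompt(text: str) -> str:
    # No template words at all: the input followed directly by the mask
    return f"{text} [MASK]"

sentence = "The movie was a delightful surprise."
for build in (human_designed_prompt, schema_prompt, null_prompt):
    print(build(sentence))
```

In prompt-based fine-tuning, each of these strings would be fed to a masked language model, and the token predicted at `[MASK]` (e.g., "great" vs. "terrible") is mapped to a class label.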
