Paper Title
Zero-Label Prompt Selection
Paper Authors
Paper Abstract
Natural language prompts have been shown to facilitate cross-task generalization for large language models. However, with no or limited labeled examples, the cross-task performance is highly sensitive to the choice of prompts, while selecting a high-performing prompt is challenging given the scarcity of labels. To address the issue, we propose a Zero-Label Prompt Selection (ZPS) method that selects prompts without any labeled data or gradient update. Specifically, given the candidate human-written prompts for a task, ZPS labels a set of unlabeled data with a prompt ensemble and uses the pseudo-labels for prompt selection. Experiments show that ZPS improves over prior methods by a sizeable margin in zero-label performance. We also extend ZPS to a few-shot setting and show its advantages over strong baselines such as prompt tuning and model tuning.
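The core procedure described in the abstract — pseudo-labeling unlabeled data with a prompt ensemble, then selecting the prompt that best agrees with those pseudo-labels — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict` stands in for a real LLM call with a given prompt, and the agreement-based selection criterion is an assumption about one natural way to use the pseudo-labels.

```python
from collections import Counter

def select_prompt(prompts, unlabeled_data, predict):
    """Illustrative ZPS-style selection: pick the candidate prompt whose
    predictions agree most with the ensemble's majority-vote pseudo-labels.

    `predict(prompt, x)` is a hypothetical stand-in for querying the
    language model with `prompt` on input `x` and reading off a label.
    """
    # predictions[i][j]: label that prompt i assigns to example j
    predictions = [[predict(p, x) for x in unlabeled_data] for p in prompts]

    # Pseudo-label each example by majority vote over the prompt ensemble.
    pseudo_labels = [
        Counter(col).most_common(1)[0][0] for col in zip(*predictions)
    ]

    # Select the prompt whose predictions best match the pseudo-labels;
    # no gold labels or gradient updates are involved.
    def agreement(preds):
        return sum(p == y for p, y in zip(preds, pseudo_labels))

    best = max(range(len(prompts)), key=lambda i: agreement(predictions[i]))
    return prompts[best]
```

Note that selection here requires only forward passes of the model over unlabeled inputs, which is what makes the method applicable in the zero-label setting.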