Paper Title

Prompt Learning for Domain Adaptation in Task-Oriented Dialogue

Paper Authors

Makesh Narsimhan Sreedhar, Christopher Parisien

Paper Abstract

Conversation designers continue to face significant obstacles when creating production quality task-oriented dialogue systems. The complexity and cost involved in schema development and data collection are often a major barrier for such designers, limiting their ability to create natural, user-friendly experiences. We frame the classification of user intent as the generation of a canonical form, a lightweight semantic representation using natural language. We show that canonical forms offer a promising alternative to traditional methods for intent classification. By tuning soft prompts for a frozen large language model, we show that canonical forms generalize very well to new, unseen domains in a zero- or few-shot setting. The method is also sample-efficient, reducing the complexity and effort of developing new task-oriented dialogue domains.
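To make the setup concrete, below is a minimal sketch of soft-prompt tuning a frozen language model to generate canonical forms, using the Hugging Face `transformers` and `peft` libraries. This is not the authors' implementation; the backbone model, prompt length, initialization text, and the toy utterance/canonical-form pairs are illustrative assumptions.

```python
# Sketch: tune a small set of soft-prompt embeddings on top of a frozen seq2seq LM
# so that user utterances are mapped to lightweight canonical forms.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base_name = "t5-base"  # assumption: any seq2seq LM can serve as the frozen backbone
tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_name)

# Only the virtual prompt tokens are trainable; the backbone LM stays frozen.
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="map the user utterance to a canonical form:",
    num_virtual_tokens=20,
    tokenizer_name_or_path=base_name,
)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # a tiny fraction of the full model

# Toy supervision: utterances paired with canonical forms (illustrative examples).
utterances = [
    "I'd like to fly to Boston next Tuesday",
    "what's my checking account balance",
]
canonical_forms = [
    "request flight booking",
    "ask account balance",
]

inputs = tokenizer(utterances, return_tensors="pt", padding=True)
labels = tokenizer(canonical_forms, return_tensors="pt", padding=True).input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
model.train()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```

At inference time, calling `model.generate` on an utterance from a new, unseen domain would produce a canonical form as a natural-language string, which can then be matched against the forms defined for that domain.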
