Paper Title

UBAR: Towards Fully End-to-End Task-Oriented Dialog Systems with GPT-2

Authors

Yunyi Yang, Yunhao Li, Xiaojun Quan

Abstract

This paper presents our task-oriented dialog system UBAR which models task-oriented dialogs on a dialog session level. Specifically, UBAR is acquired by fine-tuning the large pre-trained unidirectional language model GPT-2 on the sequence of the entire dialog session which is composed of user utterance, belief state, database result, system act, and system response of every dialog turn. Additionally, UBAR is evaluated in a more realistic setting, where its dialog context has access to user utterances and all content it generated such as belief states, system acts, and system responses. Experimental results on the MultiWOZ datasets show that UBAR achieves state-of-the-art performances in multiple settings, improving the combined score of response generation, policy optimization, and end-to-end modeling by 4.7, 3.5, and 9.4 points respectively. Thorough analyses demonstrate that the session-level training sequence formulation and the generated dialog context are essential for UBAR to operate as a fully end-to-end task-oriented dialog system in real life. We also examine the transfer ability of UBAR to new domains with limited data and provide visualization and a case study to illustrate the advantages of UBAR in modeling on a dialog session level.
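
To make the session-level sequence formulation concrete, below is a minimal sketch of how one dialog session might be flattened into a single training sequence and fed to GPT-2 with the Hugging Face transformers library. The segment markers ([user], [belief], [db], [act], [resp]) and the two-turn example dialog are illustrative assumptions, not the exact special tokens or data format of the paper's released code.

```python
# Minimal sketch of UBAR-style session-level fine-tuning, assuming the
# Hugging Face `transformers` library. Markers and example data are
# hypothetical placeholders for illustration only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# One dialog session: every turn contributes user utterance, belief state,
# database result, system act, and system response, in that order.
session_turns = [
    {
        "user": "i need a cheap hotel in the north .",
        "belief": "[hotel] price cheap area north",
        "db": "[db_2]",
        "act": "[hotel] [inform] choice [request] parking",
        "resp": "there are 2 options . do you need parking ?",
    },
    {
        "user": "yes , free parking please .",
        "belief": "[hotel] price cheap area north parking yes",
        "db": "[db_1]",
        "act": "[hotel] [recommend] name",
        "resp": "i recommend [value_name] . shall i book it ?",
    },
]

# Flatten the whole session into a single sequence (session level, not
# turn level), which is what the model is fine-tuned on.
segments = []
for turn in session_turns:
    segments.extend([
        "[user] " + turn["user"],
        "[belief] " + turn["belief"],
        "[db] " + turn["db"],
        "[act] " + turn["act"],
        "[resp] " + turn["resp"],
    ])
session_text = " ".join(segments)

input_ids = tokenizer.encode(session_text, return_tensors="pt")

# Standard causal language-modeling objective: labels equal inputs, so the
# model learns to generate belief state, database result, system act, and
# response conditioned on everything earlier in the session, including its
# own previously generated content.
outputs = model(input_ids, labels=input_ids)
print("LM loss on one session sequence:", outputs.loss.item())
```

At inference time, the same session sequence would be extended turn by turn with the model's own generated belief states, acts, and responses as context, which is the "more realistic setting" the abstract refers to.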
