Paper Title
Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue
Paper Authors
Paper Abstract
上下文建模在构建多转化对话系统中起着重要作用。为了充分利用上下文信息,系统可以使用不完整的话语重写(IUR)方法通过将当前的话语和上下文信息合并为独立的话语,将多转向对话简化为单转。但是,以前的方法忽略了原始查询和重写查询之间的意图一致性。可以进一步改善原始查询中省略或核心效率的检测。在本文中,我们引入了对比度学习和多任务学习,以共同对问题进行建模。我们的方法受益于精心设计的自我监督目标,这些目标充当辅助任务,以捕获句子级别和令牌级别的语义。实验表明,我们提出的模型在几个公共数据集上实现了最先进的性能。
Context modeling plays a significant role in building multi-turn dialogue systems. To make full use of context information, a system can apply Incomplete Utterance Rewriting (IUR) to reduce a multi-turn dialogue to a single turn by merging the current utterance and its context into a self-contained utterance. However, previous approaches ignore the intent consistency between the original query and the rewritten query, and the detection of omitted or coreferred locations in the original query can be further improved. In this paper, we introduce contrastive learning and multi-task learning to jointly model the problem. Our method benefits from carefully designed self-supervised objectives that act as auxiliary tasks to capture semantics at both the sentence level and the token level. Experiments show that our proposed model achieves state-of-the-art performance on several public datasets.
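The abstract does not specify the form of the contrastive objective. As an illustrative sketch only (not the authors' implementation), a common sentence-level choice is an InfoNCE-style loss, where the embedding of an original query is pulled toward the embedding of its rewritten query while other in-batch rewrites serve as negatives. The encoder that produces the embeddings is assumed and not shown:

```python
import numpy as np

def info_nce_loss(anchors: np.ndarray, positives: np.ndarray,
                  temperature: float = 0.1) -> float:
    """InfoNCE-style contrastive loss: each anchor embedding should match
    its own positive against the other in-batch positives (negatives)."""
    # L2-normalize so dot products become cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (batch, batch) similarity matrix
    # Row-wise log-softmax; the diagonal entries are the positive pairs.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())

# Toy batch: 3 hypothetical query embeddings of dimension 4.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(3, 4))
loss_aligned = info_nce_loss(anchors, anchors)               # perfect pairs
loss_random = info_nce_loss(anchors, rng.normal(size=(3, 4)))  # unrelated pairs
```

When anchor and positive embeddings coincide, the loss is near zero; for unrelated embeddings it is substantially larger, which is the gradient signal that encourages intent consistency between the original and rewritten queries.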