Paper Title
Two-Level Supervised Contrastive Learning for Response Selection in Multi-Turn Dialogue
Paper Authors
Paper Abstract
Selecting an appropriate response from many candidates, given the utterances in a multi-turn dialogue, is the key problem for a retrieval-based dialogue system. Existing work formalizes the task as matching between the utterances and a candidate and uses the cross-entropy loss to train the model. This paper applies contrastive learning to the problem by using the supervised contrastive loss. In this way, the learned representations of positive and negative examples can be separated more distantly in the embedding space, and the performance of matching can be enhanced. We further develop a new method for supervised contrastive learning, referred to as two-level supervised contrastive learning, and employ it for response selection in multi-turn dialogue. Our method exploits two techniques for supervised contrastive learning: sentence token shuffling (STS) and sentence re-ordering (SR). Experimental results on three benchmark datasets demonstrate that the proposed method significantly outperforms the contrastive learning baseline and the state-of-the-art methods for the task.
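The abstract does not spell out the supervised contrastive loss it builds on. Below is a minimal NumPy sketch of the standard supervised contrastive (SupCon-style) objective: for each anchor, examples sharing its label are treated as positives, and their similarity is pushed up relative to all other examples in the batch. The function name, temperature value, and toy data are assumptions for illustration, not the paper's two-level variant.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over one batch (illustrative sketch).

    embeddings: (N, D) float array; labels: (N,) int array.
    Examples with the same label act as mutual positives.
    """
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = labels.shape[0]
    self_mask = np.eye(n, dtype=bool)
    # The softmax denominator excludes the anchor itself
    sim_no_self = np.where(self_mask, -np.inf, sim)
    row_max = sim_no_self.max(axis=1, keepdims=True)  # for numerical stability
    log_denom = row_max + np.log(np.exp(sim_no_self - row_max).sum(axis=1, keepdims=True))
    log_prob = sim - log_denom
    # Positives: same label, not the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    num_pos = pos_mask.sum(axis=1)
    valid = num_pos > 0  # anchors with at least one positive
    loss_per_anchor = -(log_prob * pos_mask).sum(axis=1)[valid] / num_pos[valid]
    return loss_per_anchor.mean()
```

When embeddings of same-label examples are close together, the loss is low; mislabeled clusters drive it up, which is what makes the objective pull positives together and push negatives apart in the embedding space.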