Paper Title
Precognition in Task-oriented Dialogue Understanding: Posterior Regularization by Future Context
Paper Authors
Paper Abstract
Task-oriented dialogue systems have become increasingly popular in recent research. Dialogue understanding is widely used to comprehend users' intents, emotions, and dialogue states in task-oriented dialogue systems. Most previous work on such discriminative tasks models only the current query or the historical conversation. Even when the entire dialogue flow is modeled, such approaches are not suitable for real-world task-oriented conversations, where future contexts are not visible. In this paper, we propose to jointly model historical and future information through posterior regularization. More specifically, by modeling the current utterance together with past contexts as the prior, and the entire dialogue flow as the posterior, we minimize the KL divergence between these two distributions to regularize the model during training; only historical information is used for inference. Extensive experiments on two dialogue datasets validate the effectiveness of the proposed method, which achieves superior results compared with all baseline models.
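As a rough illustration of the training scheme described in the abstract (not the authors' actual implementation, whose architecture and loss weights are not given here), the sketch below assumes a simple GRU-based classifier: a prior branch that sees only the history, a posterior branch that sees the full dialogue including future turns, and a KL term that pulls the prior toward the posterior during training. All module names (`PosteriorRegularizedClassifier`, `training_loss`, `kl_weight`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PosteriorRegularizedClassifier(nn.Module):
    """Hypothetical sketch: prior branch uses history only; posterior branch
    uses the entire dialogue flow and is available only during training."""
    def __init__(self, vocab_size, hidden_size, num_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size, padding_idx=0)
        self.prior_enc = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.post_enc = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.prior_head = nn.Linear(hidden_size, num_labels)
        self.post_head = nn.Linear(hidden_size, num_labels)

    def forward(self, history_ids, full_dialogue_ids=None):
        # Prior: current utterance + past context (all that exists at inference).
        _, h_prior = self.prior_enc(self.embed(history_ids))
        prior_logits = self.prior_head(h_prior[-1])
        if full_dialogue_ids is None:
            # Inference path: future turns are not visible, return prior only.
            return prior_logits, None
        # Posterior: whole dialogue flow, including future turns (training only).
        _, h_post = self.post_enc(self.embed(full_dialogue_ids))
        post_logits = self.post_head(h_post[-1])
        return prior_logits, post_logits

def training_loss(prior_logits, post_logits, labels, kl_weight=1.0):
    # Supervised loss on both branches plus a KL(posterior || prior) regularizer
    # that distills future-aware predictions into the history-only branch.
    ce = F.cross_entropy(prior_logits, labels) + F.cross_entropy(post_logits, labels)
    kl = F.kl_div(F.log_softmax(prior_logits, dim=-1),
                  F.softmax(post_logits, dim=-1),
                  reduction="batchmean")
    return ce + kl_weight * kl
```

At test time only `history_ids` is passed, so the model never depends on future context, matching the constraint stated in the abstract; the relative weighting of the KL term is an assumption of this sketch.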