Paper Title

Continual Learning for Natural Language Generation in Task-oriented Dialog Systems

Paper Authors

Fei Mi, Liangwei Chen, Mengjie Zhao, Minlie Huang, Boi Faltings

Paper Abstract

Natural language generation (NLG) is an essential component of task-oriented dialog systems. Despite the recent success of neural approaches for NLG, they are typically developed in an offline manner for particular domains. To better fit real-life applications where new data come in a stream, we study NLG in a "continual learning" setting to expand its knowledge to new domains or functionalities incrementally. The major challenge towards this goal is catastrophic forgetting, meaning that a continually trained model tends to forget the knowledge it has learned before. To this end, we propose a method called ARPER (Adaptively Regularized Prioritized Exemplar Replay) by replaying prioritized historical exemplars, together with an adaptive regularization technique based on Elastic Weight Consolidation. Extensive experiments to continually learn new domains and intents are conducted on MultiWOZ-2.0 to benchmark ARPER with a wide range of techniques. Empirical results demonstrate that ARPER significantly outperforms other methods by effectively mitigating the detrimental catastrophic forgetting issue.
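The abstract combines two standard continual-learning ingredients: replaying stored exemplars from earlier tasks, and an Elastic Weight Consolidation (EWC) penalty that anchors parameters important to old tasks. Below is a minimal PyTorch sketch of that combination, not the authors' implementation: the helper names (`fisher_diagonal`, `train_on_new_task`), the data format, and the fixed regularization weight `lam` are all illustrative assumptions, and ARPER's actual contributions (prioritizing which exemplars to store and adapting the regularization strength across tasks) are omitted.

```python
# Minimal sketch of exemplar replay + an EWC-style penalty.
# Assumptions: `data` is a list of (input, target) batches; `model`,
# `loss_fn`, `opt`, and the constant `lam` are placeholders.
import itertools
import torch

def fisher_diagonal(model, loss_fn, data):
    """Diagonal Fisher estimate: average squared gradients over `data`."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, anchor, lam):
    """EWC term: lam * sum_i F_i * (theta_i - theta_i*)^2."""
    return lam * sum(
        (fisher[n] * (p - anchor[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )

def train_on_new_task(model, loss_fn, opt, new_data, exemplars,
                      fisher=None, anchor=None, lam=1.0):
    """One pass over a new task: replay one stored exemplar batch per
    step and (optionally) add the EWC penalty on old-task parameters."""
    for (x, y), (xe, ye) in zip(new_data, itertools.cycle(exemplars)):
        opt.zero_grad()
        loss = loss_fn(model(x), y) + loss_fn(model(xe), ye)
        if fisher is not None:
            loss = loss + ewc_penalty(model, fisher, anchor, lam)
        loss.backward()
        opt.step()
```

After finishing a task, one would store a small set of its examples as exemplars and recompute the Fisher estimate and parameter anchor on it; the exemplar-prioritization and adaptive weighting described in the paper sit on top of this skeleton.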
