Paper Title


Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization

Paper Authors

Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che, Yangming Li, Ting Liu

Abstract


Spoken language understanding (SLU) has been addressed as a supervised learning problem, where a set of training data is available for each domain. However, annotating data for each domain is both financially costly and non-scalable, so we should fully utilize information across all domains. One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains. We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters to improve knowledge learning and transfer. Experiments on 5 domains show that our model is more effective for multi-domain SLU and obtains the best results. In addition, we show its transferability by outperforming the prior best model by 12.4\% when adapting to a new domain with little data.
