Paper Title


Evaluating Cross-Lingual Transfer Learning Approaches in Multilingual Conversational Agent Models

Authors

Lizhen Tan, Olga Golovneva

Abstract


With the recent explosion in popularity of voice assistant devices, there is a growing interest in making them available to user populations in additional countries and languages. However, to provide the highest accuracy and best performance for specific user populations, most existing voice assistant models are developed individually for each region or language, which requires a linear investment of effort. In this paper, we propose a general multilingual model framework for Natural Language Understanding (NLU) models, which can help bootstrap new language models faster and reduce the amount of effort required to develop each language separately. We explore how different deep learning architectures affect multilingual NLU model performance. Our experimental results show that these multilingual models can reach the same or better performance compared to monolingual models on language-specific test data, while requiring less effort for feature creation and model maintenance.
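The core idea of the abstract — one shared model serving several languages instead of one model per language — can be illustrated with a minimal toy sketch. This is not the paper's actual architecture; the shared vocabulary, embedding dimensions, and mean-pooling intent classifier below are all illustrative assumptions.

```python
import numpy as np

# Toy shared-parameter multilingual NLU intent classifier.
# A single embedding table and intent head serve every language,
# so adding a language does not require training a separate model.
rng = np.random.default_rng(0)

# One shared vocabulary covering tokens from all supported languages
# (hypothetical entries for illustration).
vocab = {"play": 0, "music": 1, "pon": 2, "musica": 3, "<unk>": 4}
EMB_DIM, N_INTENTS = 8, 3  # assumed toy dimensions

emb = rng.normal(size=(len(vocab), EMB_DIM))   # shared embedding table
W = rng.normal(size=(EMB_DIM, N_INTENTS))      # shared intent head

def intent_logits(tokens):
    """Mean-pool shared embeddings, then apply the shared intent head."""
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
    pooled = emb[ids].mean(axis=0)
    return pooled @ W

# The same parameters handle utterances in different languages.
en = intent_logits(["play", "music"])   # English utterance
es = intent_logits(["pon", "musica"])   # Spanish utterance
print(en.shape, es.shape)
```

In a real system the shared encoder would be a trained multilingual network (e.g. with multilingual subword or cross-lingual embeddings), but the structural point is the same: per-language development effort is replaced by one set of shared features and one model to maintain.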
