Paper Title
Multilingual Transfer Learning for QA Using Translation as Data Augmentation
Paper Authors
Paper Abstract
Prior work on multilingual question answering has mostly focused on using large multilingual pre-trained language models (LMs) to perform zero-shot cross-lingual learning: training a QA model on English and testing on other languages. In this work, we explore strategies that improve cross-lingual transfer by bringing the multilingual embeddings closer in the semantic space. Our first strategy augments the original English training data with machine-translated data. This results in a corpus of multilingual silver-labeled QA pairs that is 14 times larger than the original training set. In addition, we propose two novel strategies, language adversarial training and a language arbitration framework, which significantly improve the (zero-resource) cross-lingual transfer performance and result in LM embeddings that are less language-variant. Empirically, we show that the proposed models outperform the previous zero-shot baseline on the recently introduced multilingual MLQA and TyDiQA datasets.
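The augmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: `translate` is a hypothetical stand-in for a real MT system (a real pipeline would also have to re-align the answer span inside the translated context), and the language list is an example subset.

```python
# Sketch of translation-as-data-augmentation for QA.
# `translate` is a hypothetical placeholder for a machine translation system.

TARGET_LANGS = ["de", "es", "ar", "hi", "vi", "zh"]  # illustrative subset

def translate(text, lang):
    # Stand-in for real MT; here we just tag the text with the target
    # language code so the sketch is runnable and deterministic.
    return f"[{lang}] {text}"

def augment(english_examples, langs=TARGET_LANGS):
    """Return the original English examples plus one silver-labeled
    translated copy per target language."""
    augmented = list(english_examples)
    for ex in english_examples:
        for lang in langs:
            augmented.append({
                "question": translate(ex["question"], lang),
                "context": translate(ex["context"], lang),
                "answer": translate(ex["answer"], lang),
                "lang": lang,
            })
    return augmented

data = [{"question": "Who wrote Hamlet?",
         "context": "Hamlet was written by Shakespeare.",
         "answer": "Shakespeare",
         "lang": "en"}]
print(len(augment(data)))  # 1 original + 6 translations = 7
```

With 13 target languages instead of 6, each English example would yield 13 translated copies, giving the 14x corpus size mentioned in the abstract (original plus 13 translations).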