Title
Better Quality Estimation for Low Resource Corpus Mining
Authors
Abstract
Quality Estimation (QE) models have the potential to change how we evaluate, and maybe even train, machine translation models. However, these models still lack the robustness needed for general adoption. We show that state-of-the-art QE models, when tested in a Parallel Corpus Mining (PCM) setting, perform unexpectedly badly due to a lack of robustness to out-of-domain examples. We propose a combination of multitask training, data augmentation, and contrastive learning to achieve better and more robust QE performance. We show that our method improves QE performance significantly on the MLQE challenge, and improves the robustness of QE models when tested in the Parallel Corpus Mining setup. We increase the accuracy in PCM by more than 0.80, making it on par with state-of-the-art PCM methods that use millions of sentence pairs to train their models. In comparison, we use a thousand times less data, 7K parallel sentences in total, and propose a novel low-resource PCM method.
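The contrastive-learning component mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation: the function name, the use of an InfoNCE-style batch objective over sentence-pair embeddings, and the temperature value are all assumptions made for illustration. The idea is that each source-sentence embedding is pulled toward its aligned target translation and pushed away from the other targets in the batch.

```python
import numpy as np

def info_nce_loss(src_emb, tgt_emb, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (an assumption, not the
    paper's exact objective). Rows of src_emb and tgt_emb are embeddings of
    aligned sentence pairs: pair i's positive target is row i of tgt_emb,
    and the other rows in the batch serve as in-batch negatives."""
    # L2-normalise so the dot product is cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    logits = src @ tgt.T / temperature           # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: source i matches target i.
    return -np.mean(np.diag(log_probs))

# Aligned pairs should yield a lower loss than misaligned ones.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 16))
aligned_loss = info_nce_loss(src, src)        # each source paired with itself
shuffled_loss = info_nce_loss(src, src[::-1]) # targets deliberately misaligned
```

In a PCM setting, a loss of this shape encourages the encoder to score true translation pairs above mined distractors, which is one plausible way the contrastive term could improve robustness to out-of-domain examples.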