Paper Title
Pre-training via Paraphrasing
Paper Authors
Paper Abstract
We introduce MARGE, a pre-trained sequence-to-sequence model learned with an unsupervised multi-lingual multi-document paraphrasing objective. MARGE provides an alternative to the dominant masked language modeling paradigm, where we self-supervise the reconstruction of target text by retrieving a set of related texts (in many languages) and conditioning on them to maximize the likelihood of generating the original. We show it is possible to jointly learn to do retrieval and reconstruction, given only a random initialization. The objective noisily captures aspects of paraphrase, translation, multi-document summarization, and information retrieval, allowing for strong zero-shot performance on several tasks. For example, with no additional task-specific training we achieve BLEU scores of up to 35.8 for document translation. We further show that fine-tuning gives strong performance on a range of discriminative and generative tasks in many languages, making MARGE the most generally applicable pre-training method to date.
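As a reading aid, the objective described in the abstract can be sketched in a schematic form. The notation here is illustrative and not taken verbatim from the paper: x is the target document, z_1, ..., z_M are the retrieved evidence documents (possibly in other languages), g is the shared document encoder used for retrieval, and f is the relevance score that conditions reconstruction.

```latex
% Schematic sketch of the retrieval-and-reconstruction objective
% (illustrative notation, assumed here rather than quoted from the paper).
\begin{align*}
  f(x, z_i)   &= \cos\!\bigl(g(x),\, g(z_i)\bigr)
      && \text{relevance of evidence } z_i \text{ to target } x \\
  \mathcal{L} &= -\sum_{x} \log p_\theta\bigl(x \mid z_1, \dots, z_M,\; f(x, z_1), \dots, f(x, z_M)\bigr)
      && \text{reconstruction loss over target documents}
\end{align*}
```

Because the relevance scores f enter the reconstruction likelihood, gradients from the reconstruction loss flow to the retrieval encoder g as well as to the generator p_theta, which is how retrieval and reconstruction can be learned jointly from a random initialization, as the abstract states.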