Paper Title

Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models

Authors

Suraj Nair, Eugene Yang, Dawn Lawrie, Kevin Duh, Paul McNamee, Kenton Murray, James Mayfield, Douglas W. Oard

Abstract

The advent of transformer-based models such as BERT has led to the rise of neural ranking models. These models have improved the effectiveness of retrieval systems well beyond that of lexical term matching models such as BM25. While monolingual retrieval tasks have benefited from large-scale training collections such as MS MARCO and advances in neural architectures, cross-language retrieval tasks have fallen behind these advancements. This paper introduces ColBERT-X, a generalization of the ColBERT multi-representation dense retrieval model that uses the XLM-RoBERTa (XLM-R) encoder to support cross-language information retrieval (CLIR). ColBERT-X can be trained in two ways. In zero-shot training, the system is trained on the English MS MARCO collection, relying on the XLM-R encoder for cross-language mappings. In translate-train, the system is trained on the MS MARCO English queries coupled with machine translations of the associated MS MARCO passages. Results on ad hoc document ranking tasks in several languages demonstrate substantial and statistically significant improvements of these trained dense retrieval models over traditional lexical CLIR baselines.
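For readers unfamiliar with ColBERT's multi-representation ("late interaction") scoring, the sketch below illustrates how a query and a document in different languages could be scored with a shared XLM-R encoder, as in the zero-shot setting described above. This is a minimal illustration under assumptions, not the authors' released implementation: the `xlm-roberta-base` checkpoint, the absence of ColBERT's learned projection layer and special query/document markers, and the `embed`/`maxsim_score` helpers are all simplifications for exposition.

```python
# Minimal sketch of ColBERT-style late interaction with a multilingual encoder.
# Illustrative only: projection dimensions, query/document markers, and training
# details of ColBERT-X are omitted.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

def embed(text: str) -> torch.Tensor:
    """Return one L2-normalized embedding per token (shape: [num_tokens, hidden])."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state.squeeze(0)
    return torch.nn.functional.normalize(hidden, dim=-1)

def maxsim_score(query: str, document: str) -> float:
    """ColBERT 'MaxSim': each query token contributes the cosine similarity of
    its best-matching document token; the score sums over query tokens."""
    q, d = embed(query), embed(document)
    sim = q @ d.T                      # [q_tokens, d_tokens] similarity matrix
    return sim.max(dim=1).values.sum().item()

# Cross-language example (hypothetical): English query, Chinese document.
print(maxsim_score("dense retrieval models",
                   "密集检索模型使用神经编码器对查询和文档进行匹配"))
```

Because every token keeps its own embedding, the document-side representations can be computed and indexed offline; only the query tokens are encoded at search time, which is what makes this multi-representation design practical for retrieval.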
