Paper Title

Bridging the Training-Inference Gap for Dense Phrase Retrieval

Paper Authors

Gyuwan Kim, Jinhyuk Lee, Barlas Oguz, Wenhan Xiong, Yizhe Zhang, Yashar Mehdad, William Yang Wang

Abstract

Building dense retrievers requires a series of standard procedures, including training and validating neural models and creating indexes for efficient search. However, these procedures are often misaligned in that training objectives do not exactly reflect the retrieval scenario at inference time. In this paper, we explore how the gap between training and inference in dense retrieval can be reduced, focusing on dense phrase retrieval (Lee et al., 2021) where billions of representations are indexed at inference. Since validating every dense retriever with a large-scale index is practically infeasible, we propose an efficient way of validating dense retrievers using a small subset of the entire corpus. This allows us to validate various training strategies including unifying contrastive loss terms and using hard negatives for phrase retrieval, which largely reduces the training-inference discrepancy. As a result, we improve top-1 phrase retrieval accuracy by 2~3 points and top-20 passage retrieval accuracy by 2~4 points for open-domain question answering. Our work urges modeling dense retrievers with careful consideration of training and inference via efficient validation while advancing phrase retrieval as a general solution for dense retrieval.
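The contrastive training objective mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the use of plain Python lists, and the assumption that similarity scores (e.g. dot products between a query vector and candidate phrase vectors) are precomputed are all hypothetical simplifications.

```python
import math

def contrastive_loss(candidate_scores, positive_idx):
    """InfoNCE-style contrastive loss for one query.

    candidate_scores: similarity scores between the query and candidate
    phrases. In the setting the abstract describes, candidates would mix
    the gold phrase, in-batch negatives, and hard negatives (assumption:
    scores are precomputed dot products).
    positive_idx: index of the gold phrase among the candidates.

    Returns -log softmax probability of the positive candidate, so the
    loss shrinks as the positive outscores the negatives.
    """
    log_denom = math.log(sum(math.exp(s) for s in candidate_scores))
    return log_denom - candidate_scores[positive_idx]
```

Adding harder negatives to `candidate_scores` raises the denominator, which is one way to see why hard negatives tighten the gap between training and the inference-time search over a large index.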
