Paper Title

Plug-and-Play Adaptation for Continuously-updated QA

Paper Authors

Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee

Paper Abstract


Language models (LMs) have shown great potential as implicit knowledge bases (KBs). For their practical use, the knowledge in LMs needs to be updated periodically. However, existing tasks for assessing LMs' efficacy as KBs do not adequately consider multiple large-scale updates. To this end, we first propose a novel task--Continuously-updated QA (CuQA)--in which multiple large-scale updates are made to LMs, and performance is measured with respect to success in adding and updating knowledge while retaining existing knowledge. We then present LMs with plug-in modules that effectively handle the updates. Experiments conducted on the zsRE QA and NQ datasets show that our method outperforms existing approaches. We find that our method is 4x more effective than a fine-tuning baseline in terms of the updates/forgets ratio.
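The abstract's headline metric is the updates/forgets ratio: after each large-scale update, count how many newly added or changed facts the model now answers correctly versus how many previously known facts it no longer answers correctly. Below is a minimal sketch of how such a per-round evaluation could be computed; all names here (`model_answers`, `evaluate_update_round`, the exact-match check) are illustrative assumptions, not the authors' actual evaluation code.

```python
from typing import Callable, Dict


def evaluate_update_round(
    model_answers: Callable[[str], str],
    updated_qa: Dict[str, str],   # questions whose gold answers were added/changed this round
    retained_qa: Dict[str, str],  # questions whose gold answers should be unchanged
) -> Dict[str, float]:
    """After one large-scale knowledge update, measure how many updated
    facts the model answers correctly (updates) and how many previously
    known facts it lost (forgets)."""
    # Exact-match scoring used here purely for illustration.
    updates = sum(model_answers(q) == a for q, a in updated_qa.items())
    forgets = sum(model_answers(q) != a for q, a in retained_qa.items())
    return {
        "update_success_rate": updates / len(updated_qa),
        "forget_rate": forgets / len(retained_qa),
        # The updates/forgets ratio from the abstract: successful updates
        # per forgotten fact (higher is better); guard against dividing by 0.
        "updates_per_forget": updates / max(forgets, 1),
    }
```

Under this reading, the reported "4x" result means the plug-in approach achieves roughly four times as many successful knowledge updates per forgotten fact as fine-tuning the LM directly.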
