Paper Title
Consistent Representation Learning for Continual Relation Extraction
Paper Authors
Paper Abstract
Continual relation extraction (CRE) aims to continually train a model on data with new relations while avoiding forgetting old ones. Previous work has shown that storing a few typical samples of old relations and replaying them when learning new relations can effectively avoid forgetting. However, these memory-based methods tend to overfit the memory samples and perform poorly on imbalanced datasets. To address these challenges, a consistent representation learning method is proposed, which maintains the stability of relation embeddings by adopting contrastive learning and knowledge distillation when replaying memory. Specifically, supervised contrastive learning based on a memory bank is first used to train each new task so that the model can effectively learn relation representations. Then, contrastive replay is conducted on the samples in memory, and memory knowledge distillation makes the model retain the knowledge of historical relations, preventing catastrophic forgetting of old tasks. The proposed method learns more consistent representations and thus alleviates forgetting effectively. Extensive experiments on the FewRel and TACRED datasets show that our method significantly outperforms state-of-the-art baselines and yields strong robustness on imbalanced datasets.
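The abstract names two loss ingredients: a supervised contrastive loss computed against a memory bank, and a knowledge-distillation term applied during memory replay. The following is a minimal, self-contained sketch of how such losses could be written in PyTorch; it is illustrative only, not the authors' released implementation. Function names, temperature values, and tensor shapes are assumptions, and the embeddings are assumed to be L2-normalized.

```python
# Illustrative sketch (assumption, not the paper's code) of the two loss terms
# described in the abstract: supervised contrastive learning over a memory bank,
# and knowledge distillation that keeps predictions on memory samples stable.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features, labels, bank_features, bank_labels, tau=0.1):
    """Pull each sample toward memory-bank entries with the same relation label
    and push it away from entries with different labels.

    features:      (B, d) L2-normalized embeddings of the current batch
    labels:        (B,)   relation labels of the current batch
    bank_features: (M, d) L2-normalized embeddings stored in the memory bank
    bank_labels:   (M,)   relation labels of the memory-bank entries
    """
    sim = features @ bank_features.t() / tau                               # (B, M) scaled similarities
    pos_mask = (labels.unsqueeze(1) == bank_labels.unsqueeze(0)).float()   # (B, M) same-label mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)             # log-softmax over the bank
    pos_count = pos_mask.sum(dim=1).clamp(min=1.0)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count                   # mean log-prob over positives
    return loss[pos_mask.sum(dim=1) > 0].mean()                            # skip anchors with no positive


def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions on
    memory samples; the teacher is a frozen snapshot of the model taken
    before training on the current task."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```

In a replay step under these assumptions, the two terms would be combined on the memory samples (e.g., a weighted sum of the contrastive and distillation losses), so that the model keeps learning discriminative relation representations while its predictions on old relations stay consistent with the pre-task snapshot.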