Paper Title
Knowledge Base Completion: Baselines Strike Back (Again)
Paper Authors
Paper Abstract
Knowledge Base Completion (KBC) has been a very active area lately. Several recent KBC papers propose architectural changes, new training methods, or even new formulations. KBC systems are usually evaluated on standard benchmark datasets: FB15k, FB15k-237, WN18, WN18RR, and Yago3-10. Most existing methods train with a small number of negative samples for each positive instance in these datasets to save computational cost. This paper discusses how recent developments allow us to use all available negative samples for training. We show that ComplEx, when trained using all available negative samples, gives near state-of-the-art performance on all the datasets. We call this approach ComplEx-V2. We also highlight how various multiplicative KBC methods recently proposed in the literature benefit from this training regime and become indistinguishable in terms of performance on most datasets. In light of these findings, our work calls for a reassessment of their individual value.
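As a rough illustration of the training regime the abstract describes, the sketch below scores a ComplEx-style model against every entity in one matrix product, so that all entities other than the gold answer act as negatives. This is a minimal sketch in PyTorch, not the authors' released code; the embedding dimension, the Adagrad optimizer, and the full-softmax cross-entropy loss are illustrative assumptions.

import torch
import torch.nn as nn

class ComplEx(nn.Module):
    """ComplEx (Trouillon et al., 2016): complex-valued embeddings,
    score(s, r, o) = Re(<e_s, w_r, conj(e_o)>)."""
    def __init__(self, num_entities, num_relations, dim):
        super().__init__()
        self.ent_re = nn.Embedding(num_entities, dim)  # real parts
        self.ent_im = nn.Embedding(num_entities, dim)  # imaginary parts
        self.rel_re = nn.Embedding(num_relations, dim)
        self.rel_im = nn.Embedding(num_relations, dim)

    def score_all_objects(self, s, r):
        # Score (s, r, ?) against *every* entity at once; one matrix
        # product per part is what makes training with all available
        # negatives computationally feasible.
        s_re, s_im = self.ent_re(s), self.ent_im(s)
        r_re, r_im = self.rel_re(r), self.rel_im(r)
        all_re, all_im = self.ent_re.weight, self.ent_im.weight
        # Re((e_s * w_r) * conj(e_o)), expanded into real arithmetic.
        return ((s_re * r_re - s_im * r_im) @ all_re.t()
                + (s_re * r_im + s_im * r_re) @ all_im.t())

# One training step: every entity except the gold object is a negative.
# 14,541 entities / 237 relations are the FB15k-237 sizes.
model = ComplEx(num_entities=14541, num_relations=237, dim=200)
opt = torch.optim.Adagrad(model.parameters(), lr=0.1)
s, r, o = torch.tensor([0]), torch.tensor([5]), torch.tensor([42])
opt.zero_grad()
loss = nn.functional.cross_entropy(model.score_all_objects(s, r), o)
loss.backward()
opt.step()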