Paper Title
Multi-modal Contrastive Representation Learning for Entity Alignment
Paper Authors
Paper Abstract
Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs, which consist of structural triples and images associated with entities. Most previous works focus on how to utilize and encode information from different modalities, yet leveraging multi-modal knowledge in entity alignment remains non-trivial because of modality heterogeneity. In this paper, we propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model, to obtain effective joint representations for multi-modal entity alignment. Unlike previous works, MCLEA considers task-oriented modalities and models the inter-modal relationships for each entity representation. In particular, MCLEA first learns separate representations from multiple modalities, and then performs contrastive learning to jointly model intra-modal and inter-modal interactions. Extensive experimental results show that MCLEA outperforms state-of-the-art baselines on public datasets under both supervised and unsupervised settings.
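As a rough illustration of the intra-modal and inter-modal contrastive objective the abstract describes (a minimal sketch, not the authors' implementation; the InfoNCE formulation, tensor names, and temperature value below are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss: each row of `anchor` is pulled toward
    the same-index row of `positive` and pushed away from all other rows.
    Both inputs have shape (batch, dim)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

# Hypothetical usage: per-modality embeddings for a batch of aligned
# entity pairs drawn from two knowledge graphs (KG1, KG2).
batch, dim = 32, 128
struct_kg1, struct_kg2 = torch.randn(batch, dim), torch.randn(batch, dim)
image_kg1, image_kg2 = torch.randn(batch, dim), torch.randn(batch, dim)

# Intra-modal term: aligned entities should match within each modality.
intra = info_nce(struct_kg1, struct_kg2) + info_nce(image_kg1, image_kg2)
# Inter-modal term: an entity's modalities should agree with each other.
inter = info_nce(struct_kg1, image_kg1) + info_nce(struct_kg2, image_kg2)
loss = intra + inter
```

The intra-modal term pulls embeddings of aligned entity pairs together within each modality, while the inter-modal term encourages agreement across an entity's own modalities, yielding the kind of joint representation the paper targets.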