Paper Title
Knowledge Graph Completion with Pre-trained Multimodal Transformer and Twins Negative Sampling
Paper Authors
Paper Abstract
Knowledge graphs (KGs), which model world knowledge as structural triples, are inevitably incomplete. This problem also exists for multimodal knowledge graphs (MMKGs). Knowledge graph completion (KGC) is therefore important for predicting the missing triples in existing KGs. Among existing KGC methods, embedding-based methods rely on manual design to leverage multimodal information, while fine-tuning-based approaches do not outperform embedding-based methods on link prediction. To address these problems, we propose a VisualBERT-enhanced Knowledge Graph Completion model (VBKGC for short). VBKGC can capture deeply fused multimodal information for entities and integrate it into the KGC model. Besides, we achieve the co-design of the KGC model and negative sampling by devising a new negative sampling strategy called twins negative sampling. Twins negative sampling is suitable for multimodal scenarios and can align the different embeddings of an entity. We conduct extensive experiments to show the outstanding performance of VBKGC on the link prediction task, and we further explore VBKGC.
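The abstract does not detail the mechanics of twins negative sampling. As a purely illustrative reading, the sketch below assumes each corrupted triple is scored under both an entity's structural embedding and its multimodal embedding, so the two embedding spaces train against shared ("twin") negatives and are thereby aligned. The TransE-style scorer, the margin loss, and all names here (transe_score, twins_negative_loss, the embedding tables) are hypothetical, not the paper's actual method.

# Hypothetical sketch of "twins" negative sampling for multimodal KGC.
# Assumption: the same randomly corrupted tails are scored in both the
# structural and the multimodal embedding space, so both spaces see
# shared negatives and are encouraged to align.
import torch
import torch.nn.functional as F

num_entities, num_relations, dim = 1000, 50, 128
struct_emb = torch.nn.Embedding(num_entities, dim)  # structural entity embeddings
mm_emb = torch.nn.Embedding(num_entities, dim)      # multimodal entity embeddings (e.g., projected from VisualBERT)
rel_emb = torch.nn.Embedding(num_relations, dim)    # relation embeddings

def transe_score(h, r, t):
    # TransE-style plausibility: higher (less negative) is more plausible.
    return -torch.norm(h + r - t, p=1, dim=-1)

def twins_negative_loss(heads, rels, tails, k=8, margin=4.0):
    """Margin-based loss where the same k corrupted tails ("twins") are
    scored in both embedding spaces, tying their training signals together."""
    r = rel_emb(rels)
    neg_tails = torch.randint(0, num_entities, (heads.size(0), k))
    loss = 0.0
    for emb in (struct_emb, mm_emb):  # twin spaces share the same negatives
        pos = transe_score(emb(heads), r, emb(tails))                              # (batch,)
        neg = transe_score(emb(heads).unsqueeze(1), r.unsqueeze(1), emb(neg_tails))  # (batch, k)
        loss = loss + F.relu(margin - pos.unsqueeze(1) + neg).mean()
    return loss

# Toy usage on a random batch of (head, relation, tail) triples.
h = torch.randint(0, num_entities, (32,))
r = torch.randint(0, num_relations, (32,))
t = torch.randint(0, num_entities, (32,))
print(twins_negative_loss(h, r, t))

Sharing negatives across the two spaces is one simple way to realize the "co-design of the KGC model and negative sampling" the abstract mentions; the paper itself may combine the spaces differently.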