Paper Title
Deep Reinforcement Learning for Entity Alignment
Paper Authors
Paper Abstract
Embedding-based methods have attracted increasing attention in recent entity alignment (EA) studies. Despite the great promise they offer, several limitations remain. Most notably, they identify aligned entities based on cosine similarity alone, ignoring the semantics underlying the embeddings themselves. Furthermore, these methods are shortsighted: they heuristically select the closest entity as the target and allow multiple entities to match the same candidate. To address these limitations, we model entity alignment as a sequential decision-making task, in which an agent sequentially decides whether two entities are matched or mismatched based on their representation vectors. The proposed reinforcement learning (RL)-based entity alignment framework can be flexibly adapted to most embedding-based EA methods. The experimental results demonstrate that it consistently advances the performance of several state-of-the-art methods, with a maximum improvement of 31.1% on Hits@1.
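As a rough illustration of the contrast the abstract draws, the sketch below compares a greedy nearest-neighbor baseline (which can map several source entities to the same candidate) with a sequential decision loop that excludes already-matched candidates. The threshold-based match/mismatch rule here is purely an assumption for illustration; in the paper this decision is made by a learned RL policy, not a fixed threshold.

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two 1-D embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def greedy_baseline(src, cand):
    # Greedy baseline: each source entity independently picks its
    # closest candidate, so several sources may collide on one candidate.
    return [int(np.argmax([cosine_sim(s, c) for c in cand])) for s in src]

def sequential_alignment(src, cand, threshold=0.5):
    # Sequential decision-making sketch: visit source entities in order;
    # a stand-in "agent" (here a similarity threshold, NOT the paper's
    # learned policy) decides match/mismatch, and matched candidates are
    # excluded from later decisions, enforcing one-to-one alignment.
    used, result = set(), []
    for s in src:
        sims = [cosine_sim(s, c) if j not in used else -np.inf
                for j, c in enumerate(cand)]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            used.add(j)
            result.append(j)       # matched
        else:
            result.append(None)    # mismatched / left unaligned
    return result
```

On embeddings where two source entities are both closest to the same candidate, the greedy baseline produces a many-to-one collision, while the sequential loop assigns the second entity to its next-best free candidate.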