Title
ReInform: Selecting paths with reinforcement learning for contextualized link prediction
Authors
Abstract
We propose to use reinforcement learning to inform transformer-based contextualized link prediction models by providing the paths that are most useful for predicting the correct answer. This is in contrast to previous approaches, which either use reinforcement learning (RL) to search directly for the answer, or base their prediction on limited or randomly selected context. Our experiments on WN18RR and FB15k-237 show that contextualized link prediction models consistently outperform RL-based answer search, and that additional improvements (of up to 13.5% MRR) can be gained by combining RL with a link prediction model. The PyTorch implementation of the RL agent is available at https://github.com/marina-sp/reinform
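The pipeline the abstract describes can be sketched at a high level: a policy selects context paths through the knowledge graph, and a contextualized model then predicts the answer entity from those paths. The sketch below is purely illustrative; the graph, the `policy_score` stub (standing in for the trained RL agent), and the `predict_tail` stub (standing in for the transformer-based predictor) are all assumptions, not the paper's actual implementation.

```python
# Toy sketch of RL-selected paths feeding a contextualized link predictor.
# Every function and the tiny graph below are illustrative assumptions.

# Tiny knowledge graph: (head, relation) -> list of tail entities
KG = {
    ("paris", "capital_of"): ["france"],
    ("france", "borders"): ["spain", "germany"],
    ("germany", "borders"): ["france"],
}

def enumerate_paths(head, max_hops=2):
    """Enumerate relation paths starting at `head`, up to `max_hops` edges."""
    paths = []
    frontier = [((head,), ())]
    for _ in range(max_hops):
        next_frontier = []
        for entities, relations in frontier:
            cur = entities[-1]
            for (h, r), tails in KG.items():
                if h != cur:
                    continue
                for t in tails:
                    p = (entities + (t,), relations + (r,))
                    paths.append(p)
                    next_frontier.append(p)
        frontier = next_frontier
    return paths

def policy_score(path):
    """Stand-in for the learned RL policy; here it just prefers longer paths."""
    _entities, relations = path
    return len(relations)

def select_paths(head, k=2):
    """Pick the k highest-scoring paths as context for the predictor."""
    return sorted(enumerate_paths(head), key=policy_score, reverse=True)[:k]

def predict_tail(head, relation, context_paths):
    """Stub 'contextualized' predictor: rank entities seen in the context."""
    counts = {}
    for entities, _relations in context_paths:
        for e in entities[1:]:
            counts[e] = counts.get(e, 0) + 1
    return max(counts, key=counts.get) if counts else None

context = select_paths("paris", k=2)
answer = predict_tail("paris", "capital_of", context)
```

In the real system the policy is trained with RL so that the selected paths maximize the downstream predictor's accuracy, rather than being scored by a fixed heuristic as above.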