Paper Title
Revocable Deep Reinforcement Learning with Affinity Regularization for Outlier-Robust Graph Matching
Paper Authors
Paper Abstract
Graph matching (GM) has been a building block in various areas, including computer vision and pattern recognition. Despite impressive recent progress, existing deep GM methods often struggle to handle outliers, which are ubiquitous in practice. We propose RGM, a deep reinforcement learning-based approach whose sequential node-matching scheme naturally fits the strategy of selectively matching inliers while rejecting outliers. A revocable action framework is devised to improve the agent's flexibility on the complex, constrained GM problem. Moreover, we propose a quadratic approximation technique to regularize the affinity score in the presence of outliers. As such, the agent can terminate inlier matching in a timely manner once the affinity score stops growing; otherwise, an additional parameter, namely the number of inliers, would be needed to avoid matching outliers. In this paper, we focus on learning a back-end solver under the most general form of GM, Lawler's QAP, whose input is the affinity matrix. In particular, our approach can also boost existing GM methods that use such input. Experiments on multiple real-world datasets demonstrate its performance in terms of both accuracy and robustness.
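To make the objective behind Lawler's QAP concrete, here is a minimal sketch (not the authors' code) of the affinity score that the solver maximizes: score(X) = vec(X)ᵀ K vec(X), where K is the (n₁n₂ × n₁n₂) affinity matrix taken as input and X is a binary node-assignment matrix. The function name, toy sizes, and the identity-matrix K are illustrative assumptions.

```python
import numpy as np

def affinity_score(K: np.ndarray, X: np.ndarray) -> float:
    """Affinity score of assignment X under affinity matrix K
    (Lawler's QAP objective): vec(X)^T K vec(X)."""
    x = X.reshape(-1)        # vectorize the assignment (row-major order,
                             # matching how K is indexed in this sketch)
    return float(x @ K @ x)  # quadratic form = total matched affinity

# Toy example: 2 nodes per graph, identity affinity matrix, so each
# matched node pair contributes 1 and there are no edge affinities.
K = np.eye(4)
X = np.eye(2)                # match node i in graph 1 to node i in graph 2
score = affinity_score(K, X)  # -> 2.0
```

This quadratic form is why outliers are harmful: spurious matches can still add positive affinity terms, so without the proposed regularization the score alone does not signal when to stop matching.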