Paper Title
Supervised Metric Learning to Rank for Retrieval via Contextual Similarity Optimization
Paper Authors
Paper Abstract
There is extensive interest in metric learning methods for image retrieval. Many metric learning loss functions focus on learning a correct ranking of training samples, but strongly overfit semantically inconsistent labels and require a large amount of data. To address these shortcomings, we propose a new metric learning method, called contextual loss, which optimizes contextual similarity in addition to cosine similarity. Our contextual loss implicitly enforces semantic consistency among neighbors while converging to the correct ranking. We empirically show that the proposed loss is more robust to label noise, and is less prone to overfitting even when a large portion of the training data is withheld. Extensive experiments demonstrate that our method achieves a new state-of-the-art across four image retrieval benchmarks and multiple evaluation settings. Code is available at: https://github.com/Chris210634/metric-learning-using-contextual-similarity
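To make the idea of combining contextual and cosine similarity concrete, below is a minimal PyTorch sketch, not the paper's implementation. It assumes contextual similarity is measured as a soft Jaccard overlap of k-nearest-neighbor sets (one common notion of contextual similarity); the names soft_contextual_similarity and combined_loss, the temperature tau, and the weight lam are illustrative assumptions, and the MSE surrogate objective is a stand-in for the authors' actual loss.

```python
import torch
import torch.nn.functional as F

def soft_contextual_similarity(emb: torch.Tensor, k: int = 4, tau: float = 0.1) -> torch.Tensor:
    """Soft Jaccard overlap of k-nearest-neighbor sets.

    Illustrative stand-in for contextual similarity, not the paper's exact
    formulation. The hard k-NN indicator is relaxed with a sigmoid so that
    gradients can flow through the neighbor assignment.
    """
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.T                                    # pairwise cosine similarity
    thresh = sim.topk(k, dim=1).values[:, -1:].detach()  # k-th largest value per row
    nbr = torch.sigmoid((sim - thresh) / tau)            # soft k-NN membership in [0, 1]
    inter = nbr @ nbr.T                                  # soft |A ∩ B|
    union = nbr.sum(1, keepdim=True) + nbr.sum(1) - inter
    return inter / union.clamp(min=1e-8)

def combined_loss(emb: torch.Tensor, labels: torch.Tensor,
                  lam: float = 0.5, k: int = 4) -> torch.Tensor:
    """Push both contextual and cosine similarity toward the same-class
    indicator matrix (a hypothetical surrogate objective)."""
    target = (labels[:, None] == labels[None, :]).float()
    ctx = soft_contextual_similarity(emb, k=k)
    cos = F.normalize(emb, dim=1) @ F.normalize(emb, dim=1).T
    return F.mse_loss(ctx, target) + lam * F.mse_loss(cos, target)

# Usage: 32 random 128-d embeddings drawn from 8 classes.
emb = torch.randn(32, 128, requires_grad=True)
labels = torch.randint(0, 8, (32,))
loss = combined_loss(emb, labels)
loss.backward()
```

Because the contextual term depends on each sample's neighborhood within the batch rather than on a single pairwise distance, it rewards embeddings whose neighbor sets agree, which is one plausible reading of the abstract's claim that the loss enforces semantic consistency among neighbors.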