Paper Title
Learning Deep Optimal Embeddings with Sinkhorn Divergences
Paper Authors
Paper Abstract
Deep Metric Learning algorithms aim to learn an efficient embedding space that preserves the similarity relationships among the input data. Whilst these algorithms have achieved significant performance gains across a plethora of tasks, they often fail to impose comprehensive similarity constraints, and thus learn a sub-optimal metric in the embedding space. Moreover, few studies to date have examined their performance in the presence of noisy labels. Here, we address the problem of learning a discriminative deep embedding space by designing a novel yet effective Deep Class-wise Discrepancy Loss (DCDL) function that segregates the underlying similarity distributions of the embedding points between every pair of classes, thereby introducing class-wise discrepancy. Our empirical results on three standard image classification datasets and two fine-grained image recognition datasets, both with and without label noise, clearly demonstrate the need to incorporate such class-wise similarity relationships alongside traditional algorithms when learning a discriminative embedding space.
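The abstract does not give the loss formula itself, but its description (separating the class-conditional distributions of embedding points via Sinkhorn divergences) suggests the following shape. This is a minimal PyTorch sketch, not the authors' implementation: the function names `entropic_ot`, `sinkhorn_divergence`, and `dcdl_loss` are hypothetical, and the squared-Euclidean cost, uniform sample weights, and pairwise negated-divergence aggregation are all assumptions.

```python
import math
import torch

def entropic_ot(x, y, eps=0.1, n_iters=100):
    """Entropic-regularised OT cost between two point clouds with
    uniform weights, via log-domain Sinkhorn iterations (a standard
    construction; cost and weighting choices are assumptions)."""
    C = torch.cdist(x, y).pow(2)                      # (n, m) squared-distance cost
    n, m = C.shape
    log_mu = torch.full((n,), -math.log(n), dtype=C.dtype, device=C.device)
    log_nu = torch.full((m,), -math.log(m), dtype=C.dtype, device=C.device)
    f = torch.zeros_like(log_mu)                      # dual potentials
    g = torch.zeros_like(log_nu)
    for _ in range(n_iters):                          # log-domain updates for stability
        f = -eps * torch.logsumexp(log_nu[None, :] + (g[None, :] - C) / eps, dim=1)
        g = -eps * torch.logsumexp(log_mu[:, None] + (f[:, None] - C) / eps, dim=0)
    # Dual objective <f, mu> + <g, nu> at (approximate) convergence.
    return (f * log_mu.exp()).sum() + (g * log_nu.exp()).sum()

def sinkhorn_divergence(x, y, eps=0.1, n_iters=100):
    """Debiased Sinkhorn divergence: non-negative and approximately
    zero when x and y coincide."""
    return (entropic_ot(x, y, eps, n_iters)
            - 0.5 * entropic_ot(x, x, eps, n_iters)
            - 0.5 * entropic_ot(y, y, eps, n_iters))

def dcdl_loss(embeddings, labels, eps=0.1):
    """Hypothetical class-wise discrepancy term: push apart the
    embedding distributions of every pair of classes in the batch by
    maximising their Sinkhorn divergence (negated for minimisation)."""
    classes = labels.unique()
    loss, n_pairs = embeddings.new_zeros(()), 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            xi = embeddings[labels == classes[i]]
            xj = embeddings[labels == classes[j]]
            loss = loss - sinkhorn_divergence(xi, xj, eps)
            n_pairs += 1
    return loss / max(n_pairs, 1)
```

Consistent with the abstract's point about combining class-wise constraints with traditional algorithms, such a term would typically be added to a conventional metric-learning objective, e.g. `total = base_loss + lam * dcdl_loss(z, y)`, where `lam` is a hypothetical weighting hyper-parameter.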