Paper Title

S2SD: Simultaneous Similarity-based Self-Distillation for Deep Metric Learning

Paper Authors

Karsten Roth, Timo Milbich, Björn Ommer, Joseph Paul Cohen, Marzyeh Ghassemi

Paper Abstract

Deep Metric Learning (DML) provides a crucial tool for visual similarity and zero-shot applications by learning generalizing embedding spaces, although recent work in DML has shown strong performance saturation across training objectives. However, generalization capacity is known to scale with the embedding space dimensionality. Unfortunately, high-dimensional embeddings also create higher retrieval cost for downstream applications. To remedy this, we propose \emph{Simultaneous Similarity-based Self-distillation (S2SD)}. S2SD extends DML with knowledge distillation from auxiliary, high-dimensional embedding and feature spaces to leverage complementary context during training, while retaining test-time cost and with negligible changes to the training time. Experiments and ablations across different objectives and standard benchmarks show that S2SD offers notable improvements of up to 7% in Recall@1, while also setting a new state-of-the-art. Code is available at https://github.com/MLforHealth/S2SD.
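The core idea — distilling the batch similarity structure of an auxiliary high-dimensional embedding into the low-dimensional test-time embedding — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the function names, the cosine-similarity choice, and the temperature parameter here are illustrative assumptions.

```python
import numpy as np

def row_softmax(x, temperature=1.0):
    # Numerically stable softmax over each row of a similarity matrix.
    z = x / temperature
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def s2sd_distill_loss(emb_low, emb_high, temperature=1.0):
    """Hypothetical sketch of a similarity-based distillation loss:
    mean KL divergence between the batch similarity distributions of the
    high-dim auxiliary embedding (teacher) and the low-dim embedding (student).
    emb_low: (batch, d_low), emb_high: (batch, d_high)."""
    # L2-normalize so that dot products are cosine similarities.
    a = emb_low / np.linalg.norm(emb_low, axis=1, keepdims=True)
    b = emb_high / np.linalg.norm(emb_high, axis=1, keepdims=True)
    p = row_softmax(b @ b.T, temperature)  # teacher similarity distribution
    q = row_softmax(a @ a.T, temperature)  # student similarity distribution
    # KL(p || q), averaged over the batch; softmax outputs are strictly positive.
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))
```

In practice the teacher branch would be detached from the gradient and this term added to the base DML objective for both spaces; the sketch only shows the similarity-matching step that lets the compact embedding inherit structure from the higher-dimensional one at zero extra retrieval cost.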
