Paper Title
Sharing Matters for Generalization in Deep Metric Learning
Authors
Abstract
Learning the similarity between images constitutes the foundation for numerous vision tasks. The common paradigm is discriminative metric learning, which seeks an embedding that separates different training classes. However, the main challenge is to learn a metric that not only generalizes from training to novel, but related, test samples; it should also transfer to different object classes. So what complementary information is missed by the discriminative paradigm? Besides finding characteristics that separate classes, we also need characteristics likely to occur in novel categories, which is indicated if they are shared across training classes. This work investigates how to learn such characteristics without the need for extra annotations or training data. Because we formulate our approach as a novel triplet sampling strategy, it can be easily applied on top of recent ranking loss frameworks. Experiments show that, independent of the underlying network architecture and the specific ranking loss, our approach significantly improves performance in deep metric learning, leading to new state-of-the-art results on various standard benchmark datasets. A preliminary early-access version can be found here: https://ieeexplore.ieee.org/document/9141449
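The abstract describes the method as a triplet sampling strategy plugged into existing ranking losses. As background, the underlying framework it builds on is the standard triplet margin loss; the minimal sketch below illustrates that generic loss only (it is not the paper's shared-characteristics sampling, and the function name, margin value, and toy embeddings are illustrative assumptions):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet ranking loss: the positive should lie closer to
    the anchor than the negative does, by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to same-class sample
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to other-class sample
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: the positive is much nearer the anchor than the
# negative, so the margin is already satisfied and the loss is zero.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class as the anchor
n = np.array([1.0, 0.0])   # different class
print(triplet_loss(a, p, n))  # -> 0.0, since 0.01 - 1.0 + 0.2 < 0
```

Which triplets (anchor, positive, negative) are fed into this loss is exactly what a sampling strategy controls, which is why such a strategy can be swapped in without changing the loss or the network.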