Paper Title
Refining Self-Supervised Learning in Imaging: Beyond Linear Metric
Paper Authors
Paper Abstract
We introduce in this paper a new statistical perspective that exploits the Jaccard similarity metric as a measure-based metric to effectively induce non-linear features in the loss of self-supervised contrastive learning. Specifically, our proposed metric may be interpreted as a dependence measure between two adapted projections learned from the so-called latent representations. This is in contrast to the cosine similarity measure in the conventional contrastive learning model, which accounts only for correlation information. To the best of our knowledge, this effective non-linear fusion of information embedded in the Jaccard similarity is novel to self-supervised learning, and it yields promising results. The proposed approach is compared to two state-of-the-art self-supervised contrastive learning methods on three image datasets. We demonstrate not only its applicability to current ML problems, but also its improved performance and training efficiency.
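To make the contrast concrete, the sketch below compares the standard cosine similarity used in conventional contrastive losses against a weighted (soft) Jaccard similarity on non-negative feature vectors. This is a minimal illustration only; the paper's exact formulation of the measure-based metric and its use inside the loss are not specified in the abstract, so the `soft_jaccard_similarity` helper here is an assumed Ruzicka-style variant (sum of element-wise minima over sum of element-wise maxima), not the authors' definition.

```python
import numpy as np

def cosine_similarity(x, y):
    # Standard similarity in conventional contrastive learning:
    # captures linear correlation between the two projections.
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def soft_jaccard_similarity(x, y, eps=1e-8):
    # Hypothetical weighted (soft) Jaccard / Ruzicka similarity:
    # sum(min(x, y)) / sum(max(x, y)) on non-negative vectors.
    # A non-linear, measure-based overlap score in [0, 1].
    x = np.clip(x, 0.0, None)
    y = np.clip(y, 0.0, None)
    return float(np.sum(np.minimum(x, y)) / (np.sum(np.maximum(x, y)) + eps))

# Two projections of augmented views of the same image (illustrative values).
x = np.array([0.9, 0.1, 0.0, 0.4])
y = np.array([0.8, 0.2, 0.1, 0.3])

print(cosine_similarity(x, y))
print(soft_jaccard_similarity(x, y))
```

Unlike the cosine score, the soft Jaccard score compares element-wise overlap rather than an inner product, which is one way a measure-based metric can respond non-linearly to how the two projections jointly distribute their mass.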