Paper Title
The NT-Xent loss upper bound
Paper Authors
Paper Abstract
Self-supervised learning is a growing paradigm in deep representation learning, showing strong generalization capabilities and competitive performance in low-labeled data regimes. The SimCLR framework proposes the NT-Xent loss for contrastive representation learning. The objective of the loss function is to maximize agreement, i.e. similarity, between sampled positive pairs. This short paper derives and proposes an upper bound on the loss and on the average similarity. An analysis of the implications is not provided here, but we strongly encourage anyone in the field to conduct one.
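For reference, the NT-Xent (normalized temperature-scaled cross-entropy) loss that the abstract refers to is the one defined in the SimCLR paper (Chen et al., 2020); the exact bound derived in this paper is not reproduced in the abstract. For a positive pair of augmented views $(i, j)$ in a batch of $N$ examples ($2N$ views in total), the loss reads

$$
\ell_{i,j} = -\log \frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)},
$$

where $z_i$ denotes the projected embedding of view $i$, $\mathrm{sim}(u, v) = u^\top v / (\lVert u \rVert \, \lVert v \rVert)$ is cosine similarity, and $\tau > 0$ is a temperature hyperparameter. This restates the standard SimCLR definition for context only.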