Paper Title
A Contrastive Objective for Learning Disentangled Representations
Paper Authors
Paper Abstract
Learning representations of images that are invariant to sensitive or unwanted attributes is important for many tasks, including bias removal and cross-domain retrieval. Here, our objective is to learn representations that are invariant to the domain (sensitive attribute) for which labels are provided, while being informative over all other image attributes, which are unlabeled. We present a new approach based on a domain-wise contrastive objective for ensuring invariant representations. This objective crucially restricts negative image pairs to be drawn from the same domain, which enforces domain invariance, whereas the standard contrastive objective does not. The domain-wise objective is insufficient on its own, as it suffers from shortcut solutions that result in feature suppression. We overcome this issue through a combination of a reconstruction constraint, image augmentations, and initialization with pre-trained weights. Our analysis shows that the choice of augmentations is important, and that a misguided choice of augmentations can harm both the invariance and informativeness objectives. In an extensive evaluation, our method convincingly outperforms the state-of-the-art in terms of representation invariance, representation informativeness, and training speed. Furthermore, we find that in some cases our method can achieve excellent results even without the reconstruction constraint, leading to much faster and more resource-efficient training.
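To make the domain-wise objective concrete, below is a minimal PyTorch sketch of an InfoNCE-style contrastive loss in which each anchor's negatives are restricted to samples from the same domain, as the abstract describes. The function name domain_wise_contrastive_loss, its arguments, and the specific InfoNCE formulation are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def domain_wise_contrastive_loss(z1: torch.Tensor,
                                 z2: torch.Tensor,
                                 domains: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss where negatives for each anchor come only from
    the *same* domain, so the encoder cannot use domain identity to
    separate negatives (hypothetical sketch, not the authors' code).

    z1, z2:  (N, D) embeddings of two augmented views of the same images.
    domains: (N,)   integer domain labels (the sensitive attribute).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Cosine-similarity logits between every anchor (view 1) and every
    # candidate (view 2).
    logits = z1 @ z2.t() / temperature                          # (N, N)

    # Mask out candidates from a different domain. The standard
    # contrastive objective would keep them, letting the encoder solve
    # the task via domain cues alone.
    same_domain = domains.unsqueeze(0) == domains.unsqueeze(1)  # (N, N)
    logits = logits.masked_fill(~same_domain, float('-inf'))

    # The positive for anchor i is its own second view, at index i
    # (always kept by the mask, since it shares the anchor's domain).
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```

In this sketch, the masking step is what distinguishes the domain-wise objective from standard contrastive learning; the reconstruction constraint, augmentations, and pre-trained initialization that the abstract pairs with it to avoid shortcut solutions are not shown.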