Paper Title

Unsupervised Learning of Unbiased Visual Representations

Authors

Barbano, Carlo Alberto, Tartaglione, Enzo, Grangetto, Marco

Abstract

Deep neural networks often struggle to learn robust representations in the presence of dataset biases, leading to suboptimal generalization on unbiased datasets. This limitation arises because the models heavily depend on peripheral and confounding factors, inadvertently acquired during training. Existing approaches to this problem typically involve explicit supervision of bias attributes or reliance on prior knowledge about the biases. In this study, we address the challenging scenario where no explicit bias annotations are available and there is no prior knowledge about the nature of the bias. We present a fully unsupervised debiasing framework with three key steps: first, leveraging the inherent tendency of networks to learn malignant biases in order to obtain a bias-capturing model; next, employing a pseudo-labeling process to derive bias labels; and finally, applying state-of-the-art supervised debiasing techniques to achieve an unbiased model. Additionally, we introduce a theoretical framework for evaluating model biasedness and conduct a detailed analysis of how biases impact neural network training. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of our method, showing state-of-the-art performance in various settings and occasionally surpassing fully supervised debiasing approaches.
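The three-step pipeline in the abstract (bias-capturing model → pseudo-labeling → supervised debiasing) can be sketched in simplified form. The code below is an illustrative assumption, not the authors' exact method: it assumes per-sample losses from a bias-capturing model are already available, pseudo-labels samples by a loss quantile (low loss ≈ bias-aligned, high loss ≈ bias-conflicting), and then computes reweighting factors that a downstream supervised debiasing objective could consume. The function names, the quantile threshold, and the inverse-frequency weighting are all hypothetical choices.

```python
# Hypothetical sketch of the unsupervised debiasing pipeline described in
# the abstract. Step 1 (training the bias-capturing model) is assumed done;
# its per-sample losses are the input here.

def pseudo_label_bias(per_sample_losses, quantile=0.8):
    """Step 2: samples the bias-capturing model fits easily (low loss) are
    assumed bias-aligned; high-loss samples are assumed bias-conflicting.
    The 0.8 quantile threshold is an illustrative assumption."""
    sorted_losses = sorted(per_sample_losses)
    threshold = sorted_losses[int(quantile * (len(sorted_losses) - 1))]
    return ["aligned" if loss <= threshold else "conflicting"
            for loss in per_sample_losses]

def debias_weights(bias_labels):
    """Step 3 (simplified): inverse-frequency weights that upweight
    bias-conflicting samples, so a supervised debiasing objective can
    counteract the bias. A stand-in for the actual debiasing technique."""
    n_conflicting = sum(1 for b in bias_labels if b == "conflicting")
    n_aligned = len(bias_labels) - n_conflicting
    return [1.0 / max(n_conflicting, 1) if b == "conflicting"
            else 1.0 / max(n_aligned, 1)
            for b in bias_labels]

# Example: losses from a (hypothetical) bias-capturing model; the last
# sample conflicts with the bias and gets a high loss.
losses = [0.1, 0.2, 0.15, 0.12, 2.5]
labels = pseudo_label_bias(losses)
weights = debias_weights(labels)
```

In practice the bias-capturing model would be a network trained with an objective that amplifies shortcut learning, and the weights (or pseudo-labels directly) would feed a supervised debiasing loss; this sketch only shows how pseudo-labels can be extracted without any bias annotations.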
