Paper Title
How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts
Paper Authors
Paper Abstract
Increasing concerns have been raised on deep learning fairness in recent years. Existing fairness-aware machine learning methods mainly focus on the fairness of in-distribution data. However, in real-world applications, it is common to have distribution shift between the training and test data. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To solve this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which can achieve robust fairness generalizable to unseen domains with unknown distributional shifts. Specifically, CUMA enforces the model to have similar generalization ability on the majority and minority groups, by matching the loss curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts, without sacrificing either the overall accuracy or the in-distribution fairness.
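The abstract describes CUMA's core mechanism as matching the loss curvature distributions of the majority and minority groups. Below is a minimal PyTorch sketch of that idea: compute a per-sample sharpness proxy for each group and penalize the discrepancy between the two distributions with a Gaussian-kernel MMD. The function names (per_sample_sharpness, mmd, fairness_regularized_loss) and the hyperparameter lam are illustrative, and the input-gradient sharpness proxy is an assumption made here for runnability, not the paper's exact weight-space curvature statistic.

```python
# Sketch: match per-group loss-sharpness distributions with an MMD penalty,
# in the spirit of CUMA. The sharpness proxy and kernel are illustrative.
import torch
import torch.nn.functional as F

def per_sample_sharpness(model, x, y):
    """Differentiable per-sample proxy for local loss sharpness:
    the norm of the loss gradient with respect to the input."""
    x = x.clone().requires_grad_(True)
    per_sample_loss = F.cross_entropy(model(x), y, reduction="none")
    grads = torch.autograd.grad(per_sample_loss.sum(), x, create_graph=True)[0]
    return grads.flatten(1).norm(dim=1)

def mmd(a, b, bandwidth=1.0):
    """Gaussian-kernel MMD between two 1-D samples (here: sharpness values)."""
    def k(u, v):
        return torch.exp(-((u[:, None] - v[None, :]) ** 2) / (2 * bandwidth ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def fairness_regularized_loss(model, x_maj, y_maj, x_min, y_min, lam=1.0):
    """Classification loss plus a penalty that pulls the majority- and
    minority-group sharpness distributions toward each other."""
    cls_loss = F.cross_entropy(model(x_maj), y_maj) + F.cross_entropy(model(x_min), y_min)
    s_maj = per_sample_sharpness(model, x_maj, y_maj)
    s_min = per_sample_sharpness(model, x_min, y_min)
    return cls_loss + lam * mmd(s_maj, s_min)
```

The distribution-level penalty (MMD) rather than a simple difference of means reflects the abstract's emphasis on matching curvature distributions, so that the two groups end up with similarly flat loss regions and hence similar generalization behavior under shift.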