Paper Title
Does enforcing fairness mitigate biases caused by subpopulation shift?
Paper Authors
Paper Abstract
Many instances of algorithmic bias are caused by subpopulation shifts. For example, ML models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we study whether enforcing algorithmic fairness during training improves the performance of the trained model in the \emph{target domain}. On one hand, we conceive scenarios in which enforcing fairness does not improve performance in the target domain. In fact, it may even harm performance. On the other hand, we derive necessary and sufficient conditions under which enforcing algorithmic fairness leads to the Bayes model in the target domain. We also illustrate the practical implications of our theoretical results in simulations and on real data.
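The kind of subpopulation shift described in the abstract can be illustrated with a small, self-contained simulation. The sketch below is not the paper's experimental setup; the group centers, sample sizes, and the logistic-regression model are assumptions chosen only to show that a classifier fit on data where one group is underrepresented can have markedly lower accuracy on that group at test time.

```python
# Minimal sketch of a subpopulation shift (illustrative assumptions, not the paper's experiments):
# group A dominates the training data, group B has a shifted feature distribution and decision rule,
# so a model fit on the pooled training set performs worse on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_group(n, shift):
    """Gaussian features centered at `shift`; the true label is 1 iff x0 exceeds the group's center."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] > shift).astype(int)
    return X, y

# Training set: group A heavily overrepresented relative to group B.
Xa, ya = sample_group(950, shift=0.0)
Xb, yb = sample_group(50, shift=2.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group: accuracy is reported per group.
for name, shift in [("group A (majority)", 0.0), ("group B (minority)", 2.0)]:
    Xt, yt = sample_group(2000, shift)
    print(name, "accuracy:", clf.score(Xt, yt))
```

Running this typically shows near-perfect accuracy on group A and substantially lower accuracy on group B, the gap that fairness constraints during training are meant to address; whether enforcing such constraints helps in the target domain is exactly the question the paper analyzes.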