Title
Distributionally Robust Fair Principal Components via Geodesic Descents
Authors
Abstract
Principal component analysis is a simple yet useful dimensionality reduction technique in modern machine learning pipelines. In consequential domains such as college admissions, healthcare, and credit approval, it is imperative to take into account emerging criteria such as the fairness and robustness of the learned projection. In this paper, we propose a distributionally robust optimization problem for principal component analysis which internalizes a fairness criterion in the objective function. The learned projection thus balances the trade-off between the total reconstruction error and the reconstruction error gap between subgroups, taken in the min-max sense over all distributions in a moment-based ambiguity set. The resulting optimization problem over the Stiefel manifold can be efficiently solved by a Riemannian subgradient descent algorithm with a sublinear convergence rate. Our experimental results on real-world datasets show the merits of our proposed method over state-of-the-art baselines.
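To make the optimization concrete, the following is a minimal sketch of Riemannian (sub)gradient descent on the Stiefel manifold with a QR retraction, applied here to the plain PCA objective (maximize the captured variance tr(VᵀSV)) rather than the paper's distributionally robust fair objective, whose worst-case gradient is not reproduced here. The function name, step-size schedule, and iteration count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def riemannian_subgradient_pca(X, k, steps=500, lr=0.1, seed=0):
    """Illustrative Riemannian descent on the Stiefel manifold for PCA.

    Minimizes -tr(V^T S V) over orthonormal d-by-k matrices V, where S is
    the second-moment matrix of the (assumed centered) data X. The paper's
    DRO fair objective would replace the Euclidean gradient below.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    S = X.T @ X / n                                   # sample second-moment matrix
    V, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random point on the Stiefel manifold
    for t in range(steps):
        G = -2.0 * S @ V                              # Euclidean gradient of -tr(V^T S V)
        # Project onto the tangent space of the Stiefel manifold at V
        sym = (V.T @ G + G.T @ V) / 2.0
        rgrad = G - V @ sym
        # Diminishing step size (sublinear-rate schedule) + QR retraction,
        # which restores the constraint V^T V = I after the step
        V, _ = np.linalg.qr(V - (lr / np.sqrt(t + 1.0)) * rgrad)
    return V
```

The QR retraction is one standard choice for mapping a tangent-space step back onto the manifold; the diminishing step size mirrors the schedule typically used to obtain sublinear convergence guarantees for subgradient methods.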