Paper Title

ABCinML: Anticipatory Bias Correction in Machine Learning Applications

Paper Authors

Almuzaini, Abdulaziz A., Bhatt, Chidansh A., Pennock, David M., Singh, Vivek K.

Paper Abstract

The idealization of a static machine-learned model, trained once and deployed forever, is not practical. As input distributions change over time, the model will not only lose accuracy, but any constraints to reduce bias against a protected class may also fail to work as intended. Thus, researchers have begun to explore ways to maintain algorithmic fairness over time. One line of work focuses on dynamic learning: retraining after each batch; another focuses on robust learning, which tries to make algorithms robust against all possible future changes. Dynamic learning seeks to reduce biases soon after they have occurred, while robust learning often yields (overly) conservative models. We propose an anticipatory dynamic learning approach that corrects the algorithm to mitigate bias before it occurs. Specifically, we make use of anticipations regarding the relative distributions of population subgroups (e.g., the relative ratio of male and female applicants) in the next cycle to identify the right parameters for an importance-weighting fairness approach. Results from experiments on multiple real-world datasets suggest that this approach has promise for anticipatory bias correction.
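The core importance-weighting idea from the abstract can be illustrated with a minimal sketch: samples from each subgroup are reweighted so that the current batch's group mix matches the anticipated next-cycle mix, i.e. w(g) = p_anticipated(g) / p_observed(g). The function name, the 50/50 anticipation, and the toy batch below are illustrative assumptions, not details from the paper.

```python
from collections import Counter

def anticipatory_weights(groups, anticipated):
    """Per-sample importance weights that shift the observed subgroup
    distribution toward the anticipated next-cycle distribution.

    groups      -- list of subgroup labels, one per training sample
    anticipated -- dict mapping each subgroup label to its anticipated
                   proportion in the next cycle (proportions sum to 1)
    """
    n = len(groups)
    # Observed proportion of each subgroup in the current batch.
    observed = {g: count / n for g, count in Counter(groups).items()}
    # Importance weight: anticipated share divided by observed share.
    return [anticipated[g] / observed[g] for g in groups]

# Toy batch: 75% "M", 25% "F"; anticipate a 50/50 split next cycle.
batch = ["M", "M", "M", "F"]
weights = anticipatory_weights(batch, {"M": 0.5, "F": 0.5})
# "M" samples are down-weighted (0.5/0.75), "F" up-weighted (0.5/0.25).
```

In practice such weights would be passed to a fairness-constrained learner (e.g. via a `sample_weight` argument) before the next deployment cycle; the weighted batch then behaves, in expectation, like a draw from the anticipated distribution.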
