Paper Title

FairPrune: Achieving Fairness Through Pruning for Dermatological Disease Diagnosis

Paper Authors

Yawen Wu, Dewen Zeng, Xiaowei Xu, Yiyu Shi, Jingtong Hu

Paper Abstract

Many works have shown that deep learning-based medical image classification models can exhibit bias toward demographic attributes such as race, gender, and age. Existing bias mitigation methods primarily focus on learning debiased models, which does not necessarily guarantee that all sensitive information is removed and usually comes with considerable accuracy degradation on both the privileged and unprivileged groups. To tackle this issue, we propose a method, FairPrune, that achieves fairness by pruning. Conventionally, pruning is used to reduce the model size for efficient inference. However, we show that pruning can also be a powerful tool for achieving fairness. Our observation is that, during pruning, each parameter in the model has a different importance for each group's accuracy. By pruning parameters based on this importance difference, we can reduce the accuracy gap between the privileged group and the unprivileged group to improve fairness without a large accuracy drop. To this end, we use the second derivative of the parameters of a pre-trained model to quantify the importance of each parameter with respect to the model accuracy for each group. Experiments on two skin lesion diagnosis datasets over multiple sensitive attributes demonstrate that our method can greatly improve fairness while keeping the average accuracy of both groups as high as possible.
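To make the abstract's recipe concrete, below is a minimal PyTorch sketch of the idea: score each parameter's importance per demographic group with an OBD-style saliency (0.5 · h_ii · θ_i²), then prune the parameters whose importance skews most toward the privileged group. The paper uses the exact second derivative; here the Hessian diagonal is approximated with the empirical Fisher (mean squared gradient), a common stand-in. The function names, the hyperparameters beta and prune_ratio, and the scoring direction are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of FairPrune-style saliency pruning (assumptions noted above).
import torch


def group_saliency(model, loader, criterion, device="cpu"):
    """OBD-style saliency s_i ~= 0.5 * h_ii * theta_i**2 for one group's data,
    with h_ii approximated by the mean squared gradient (empirical Fisher)."""
    model.to(device).eval()
    params = dict(model.named_parameters())
    fisher = {n: torch.zeros_like(p) for n, p in params.items()}
    n_batches = 0
    for x, y in loader:  # loader yields only this demographic group's samples
        model.zero_grad()
        criterion(model(x.to(device)), y.to(device)).backward()
        for n, p in params.items():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2  # accumulate diagonal Fisher
        n_batches += 1
    return {n: 0.5 * (fisher[n] / max(n_batches, 1)) * params[n].detach() ** 2
            for n in params}


def fair_prune(model, sal_priv, sal_unpriv, beta=0.5, prune_ratio=0.1):
    """Zero out the parameters whose importance skews most toward the
    privileged group (score = s_priv - beta * s_unpriv); removing them costs
    the privileged group more accuracy, which narrows the group gap."""
    scores = torch.cat([(sal_priv[n] - beta * sal_unpriv[n]).flatten()
                        for n in sal_priv])
    k = max(1, int(prune_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    with torch.no_grad():
        for n, p in model.named_parameters():
            mask = (sal_priv[n] - beta * sal_unpriv[n]) >= threshold
            p[mask] = 0.0  # prune by zeroing, keeping the architecture intact
    return model
```

In a typical run, group_saliency would be called twice on the same pre-trained classifier, once with each group's data loader, before calling fair_prune; beta controls how strongly a parameter's importance to the unprivileged group protects it from being pruned.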
