Paper Title

FairLens: Auditing Black-box Clinical Decision Support Systems

Paper Authors

Panigutti, Cecilia; Perotti, Alan; Panisson, André; Bajardi, Paolo; Pedreschi, Dino

Paper Abstract

The pervasive application of algorithmic decision-making is raising concerns about the risk of unintended bias in AI systems deployed in critical settings such as healthcare. The detection and mitigation of biased models is a delicate task that should be tackled with care, keeping domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system. In this scenario, healthcare facility experts can use FairLens on their own historical data to discover the model's biases before incorporating it into the clinical decision flow. FairLens first stratifies the available patient data according to attributes such as age, ethnicity, gender, and insurance; it then assesses the model's performance on these subgroups, identifying those in need of expert evaluation. Finally, building on recent state-of-the-art XAI (eXplainable Artificial Intelligence) techniques, FairLens explains which elements of the patients' clinical histories drive the model error in the selected subgroup. Therefore, FairLens allows experts to investigate whether to trust the model and to spotlight group-specific biases that might constitute potential fairness issues.
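
The audit loop the abstract outlines (stratify patients by demographic attributes, score the black box on each subgroup, and surface the subgroups with anomalous error) can be illustrated with a short sketch. The code below is a minimal illustration under assumptions of our own, not the authors' implementation: the DataFrame layout, the column names (`age_group`, `ethnicity`, `gender`, `insurance`, `diagnosis`), the `black_box_predict` callable, and the `min_size` cutoff are all hypothetical.

```python
# Minimal sketch of the stratify-and-assess steps described in the
# abstract; NOT the authors' implementation. All names below
# (`patients`, `black_box_predict`, column names, `min_size`) are
# hypothetical assumptions for illustration.
import pandas as pd

def audit_subgroups(patients: pd.DataFrame,
                    black_box_predict,
                    attributes=("age_group", "ethnicity", "gender", "insurance"),
                    min_size=30):
    """Stratify patients by each attribute value and measure the
    black-box model's error rate on every resulting subgroup."""
    # Query the black box once; align predictions with the patient index.
    preds = pd.Series(black_box_predict(patients), index=patients.index)
    errors = (preds != patients["diagnosis"]).astype(float)  # 0/1 per patient
    overall = errors.mean()

    rows = []
    for attr in attributes:
        # groupby(...).groups maps each attribute value to its row index.
        for value, idx in patients.groupby(attr).groups.items():
            if len(idx) < min_size:  # skip tiny, statistically noisy subgroups
                continue
            rate = errors.loc[idx].mean()
            rows.append({"attribute": attr, "value": value, "size": len(idx),
                         "error_rate": rate, "delta_vs_overall": rate - overall})

    # Rank subgroups so the worst-served ones reach expert review first.
    return pd.DataFrame(rows).sort_values("delta_vs_overall", ascending=False)
```

The third step in the abstract, explaining which elements of a subgroup's clinical histories drive the error, builds on XAI techniques and is beyond this sketch; a ranking like the one above only tells experts where to look, not why.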
