Paper Title
A survey of Identification and mitigation of Machine Learning algorithmic biases in Image Analysis
Paper Authors
Paper Abstract
The problem of algorithmic bias in machine learning has gained a lot of attention in recent years due to its concrete and potentially hazardous implications in society. In much the same manner, biases can also alter modern industrial and safety-critical applications where machine learning models are based on high-dimensional inputs such as images. This issue has, however, been mostly left out of the spotlight in the machine learning literature. In contrast to societal applications, where a set of proxy variables can be provided by common sense or by regulations to draw attention to potential risks, industrial and safety-critical applications are most of the time sailing blind. The variables related to undesired biases can indeed be indirectly represented in the input data, or can be unknown, thus making them harder to tackle. This raises serious and well-founded concerns about the commercial deployment of AI-based solutions, especially in a context where new regulations clearly address the issues raised by undesired biases in AI. Consequently, we propose here to provide an overview of recent advances in this area, first by presenting how such biases can manifest themselves, then by exploring different ways to bring them to light, and finally by probing different possibilities to mitigate them. We conclude with a practical remote sensing use case of industrial fairness.