Paper Title
Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness
Paper Authors
Paper Abstract
Much of machine learning research focuses on predictive accuracy: given a task, create a machine learning model (or algorithm) that maximizes accuracy. In many settings, however, the final prediction or decision of a system is under the control of a human, who uses an algorithm's output along with their own personal expertise to produce a combined prediction. One ultimate goal of such collaborative systems is "complementarity": that is, to produce lower loss (equivalently, greater payoff or utility) than either the human or the algorithm alone. However, experimental results have shown that even in carefully designed systems, complementary performance can be elusive. Our work provides three key contributions. First, we provide a theoretical framework for modeling simple human-algorithm systems and demonstrate that multiple prior analyses can be expressed within it. Next, we use this model to prove conditions under which complementarity is impossible, and give constructive examples of settings where complementarity is achievable. Finally, we discuss the implications of our findings, especially with respect to the fairness of a classifier. In sum, these results deepen our understanding of key factors influencing the combined performance of human-algorithm systems, giving insight into how algorithmic tools can best be designed for collaborative environments.
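To make the complementarity condition concrete, here is a minimal formalization; the notation ($L$, $H$, $A$, $\oplus$) is introduced here for illustration and is not drawn from the paper itself. Writing $L(H)$ for the expected loss of the human acting alone, $L(A)$ for the algorithm acting alone, and $L(H \oplus A)$ for the combined human-algorithm system, complementarity requires

\[
  L(H \oplus A) \;<\; \min\{\, L(H),\ L(A) \,\}.
\]

For example, if the human alone misclassifies 10% of cases and the algorithm alone misclassifies 8%, the collaboration achieves complementarity only if its error rate falls strictly below 8%.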