Paper Title

Fair Algorithm Design: Fair and Efficacious Machine Scheduling

Paper Authors

April Niu, Agnes Totschnig, Adrian Vetta

Paper Abstract

Motivated by a plethora of practical examples where bias is induced by automated decision-making algorithms, there has been strong recent interest in the design of fair algorithms. However, there is often a dichotomy between fairness and efficacy: fair algorithms may proffer low social welfare solutions whereas welfare-optimizing algorithms may be very unfair. This issue is exemplified in the machine scheduling problem where, for $n$ jobs, the social welfare of any fair solution may be a factor $\Omega(n)$ worse than the optimal welfare. In this paper, we prove that this dichotomy between fairness and efficacy can be overcome if we allow for a negligible amount of bias: there exist algorithms that are both "almost perfectly fair" and have a constant-factor efficacy ratio, that is, are guaranteed to output solutions with social welfare within a constant factor of optimal welfare. Specifically, for any $\epsilon > 0$, there exist mechanisms with efficacy ratio $\Theta(\frac{1}{\epsilon})$ and where no agent is more than an $\epsilon$ fraction worse off than they are in the fairest possible solution (given by an algorithm that does not use personal or type data). Moreover, these bicriteria guarantees are tight and apply to both the single machine case and the multiple machine case. The key to our results is the use of Pareto scheduling mechanisms. These mechanisms, by the judicious use of personal or type data, are able to exploit Pareto improvements that benefit every individual; such Pareto improvements would typically be forbidden by fair scheduling algorithms designed to satisfy standard statistical measures of group fairness. We anticipate that this paradigm, the judicious use of personal data by a fair algorithm to greatly improve performance at the cost of negligible bias, has wider application.
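To make the bicriteria idea concrete, here is a minimal single-machine sketch in Python. It is our own illustration under simplifying assumptions, not the Pareto scheduling mechanism from the paper: we take a uniformly random job order as the "fairest" data-free baseline, the shortest-processing-time (SPT) schedule as the welfare optimum for total completion time, and a hypothetical toy mechanism that randomizes between the two, searching for the largest SPT probability under which no job's expected completion time exceeds $(1+\epsilon)$ times its baseline value. The function names, the mixing scheme, and the sample instance are all illustrative.

```python
# Illustrative sketch only: a toy "mix SPT with a random order" mechanism,
# not the mechanism analyzed in the paper.

def expected_completion_random_order(p):
    # Fairest data-free baseline: under a uniformly random order, each other
    # job precedes job i with probability 1/2, so
    #   E[C_i] = p_i + (1/2) * sum_{j != i} p_j.
    total = sum(p)
    return [p_i + 0.5 * (total - p_i) for p_i in p]

def completion_times(order, p):
    # Deterministic completion times when jobs run in the given order.
    c, t = [0.0] * len(p), 0.0
    for i in order:
        t += p[i]
        c[i] = t
    return c

def is_eps_fair(candidate, baseline, eps):
    # "Almost perfectly fair": no job's expected completion time is more than
    # an eps fraction worse than under the fairest baseline.
    return all(c <= (1.0 + eps) * b for c, b in zip(candidate, baseline))

def demo(p, eps):
    n = len(p)
    baseline = expected_completion_random_order(p)
    # SPT (shortest processing time first) minimizes total completion time
    # on a single machine.
    spt = completion_times(sorted(range(n), key=lambda i: p[i]), p)
    # Toy mechanism: run SPT with probability q, the random order otherwise.
    # Search for the largest q that keeps every job eps-fair in expectation.
    best_q = 0.0
    for k in range(1001):
        q = k / 1000.0
        mixed = [q * s + (1.0 - q) * b for s, b in zip(spt, baseline)]
        if is_eps_fair(mixed, baseline, eps):
            best_q = q
    mixed = [best_q * s + (1.0 - best_q) * b for s, b in zip(spt, baseline)]
    print(f"eps={eps}: SPT weight {best_q:.3f}; welfare {sum(mixed):.1f} "
          f"vs fair baseline {sum(baseline):.1f} (optimum {sum(spt):.1f})")

demo([1, 1, 1, 1, 10], eps=0.1)
```

On the sample instance, the longest job is the one hurt by SPT, so it pins down the admissible mixing probability: larger $\epsilon$ permits more weight on the welfare-optimal schedule, while every job stays within an $\epsilon$ fraction of its fair-baseline cost. This mirrors the fairness-versus-efficacy tradeoff in the abstract, although this toy scheme carries none of the paper's tightness guarantees.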
