Paper Title


A Novel Approach to Fairness in Automated Decision-Making using Affective Normalization

Authors

Jesse Hoey, Gabrielle Chan

Abstract


Any decision, such as one about who to hire, involves two components. First, a rational component, i.e., they have a good education, they speak clearly. Second, an affective component, based on observables such as visual features of race and gender, and possibly biased by stereotypes. Here we propose a method for measuring the affective, socially biased, component, thus enabling its removal. That is, given a decision-making process, these affective measurements remove the affective bias in the decision, rendering it fair across a set of categories defined by the method itself. We thus propose that this may solve three key problems in intersectional fairness: (1) the definition of categories over which fairness is a consideration; (2) an infinite regress into smaller and smaller groups; and (3) ensuring a fair distribution based on basic human rights or other prior information. The primary idea in this paper is that fairness biases can be measured using affective coherence, and that this can be used to normalize outcome mappings. We aim for this conceptual work to expose a novel method for handling fairness problems that uses emotional coherence as an independent measure of bias that goes beyond statistical parity.
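The abstract is conceptual and does not specify an implementation, so the following is only a minimal sketch of the normalization idea it describes: assume a decision score decomposes additively into a rational component and an affective (socially biased) component, estimate the affective offset per group, and subtract it to normalize the outcome mapping. The per-group mean deviation used here is a stand-in for the paper's affective-coherence measure, and all identifiers (`candidates`, `affective_bias_estimate`, group labels) are hypothetical.

```python
# Illustrative sketch only; not the authors' method. Assumes scores
# decompose additively into rational + affective components.
from statistics import mean

# Hypothetical candidates: an observed decision score plus a group label
# that the affective component is assumed to correlate with.
candidates = [
    {"name": "A", "group": "g1", "score": 0.82},
    {"name": "B", "group": "g1", "score": 0.78},
    {"name": "C", "group": "g2", "score": 0.70},
    {"name": "D", "group": "g2", "score": 0.66},
]

def affective_bias_estimate(candidates):
    """Estimate a per-group affective-bias offset as the deviation of the
    group's mean score from the overall mean (a placeholder for the
    affective-coherence measurement proposed in the paper)."""
    overall = mean(c["score"] for c in candidates)
    groups = {c["group"] for c in candidates}
    return {
        g: mean(c["score"] for c in candidates if c["group"] == g) - overall
        for g in groups
    }

def normalize_scores(candidates):
    """Subtract the estimated affective-bias offset from each score,
    leaving (in this toy model) only the rational component."""
    bias = affective_bias_estimate(candidates)
    return [
        {**c, "normalized_score": c["score"] - bias[c["group"]]}
        for c in candidates
    ]

for c in normalize_scores(candidates):
    print(c["name"], c["group"], round(c["normalized_score"], 3))
```

In this toy model the normalized scores for the two groups share the same mean, which is the sense in which the affective component has been "removed"; the paper's actual proposal is to derive that correction from affective coherence rather than from group statistics.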
