Paper Title
How Different Groups Prioritize Ethical Values for Responsible AI
Paper Authors
Abstract
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible artificial intelligence technologies. While their recommendations converge on a set of central values, little is known about the values a more representative public would find important for the AI technologies they interact with and might be affected by. We conducted a survey examining how individuals perceive and prioritize responsible AI values across three groups: a representative sample of the US population (N=743), a sample of crowdworkers (N=755), and a sample of AI practitioners (N=175). Our results empirically confirm a common concern: AI practitioners' value priorities differ from those of the general public. Compared to the US-representative sample, AI practitioners appear to consider responsible AI values as less important and emphasize a different set of values. In contrast, self-identified women and Black respondents found responsible AI values more important than other groups did. Surprisingly, liberal-leaning participants, rather than participants reporting experiences of discrimination, were more likely to prioritize fairness. Our findings highlight the importance of paying attention to who gets to define responsible AI.