Paper Title
AI, Opacity, and Personal Autonomy
Paper Authors
Paper Abstract
Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings (Feller et al. 2016), medical diagnoses (Rajkomar et al. 2018; Esteva et al. 2019), and recruitment (Heilweil 2019; Van Esch et al. 2019). Academic articles (Floridi et al. 2018), policy texts (HLEG 2019), and popularizing books (O'Neill 2016; Eubanks 2018) alike warn that such algorithms tend to be _opaque_: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation (Lombrozo 2011; Hitchcock 2012), I formulate a moral concern for opaque algorithms that is yet to receive a systematic treatment in the literature: when such algorithms are used in life-changing decisions, they can obstruct us from effectively shaping our lives according to our goals and preferences, thus undermining our autonomy. I argue that this concern deserves closer attention, as it furnishes the call for transparency in algorithmic decision-making with both new tools and new challenges.