Paper Title


The Guilty (Silicon) Mind: Blameworthiness and Liability in Human-Machine Teaming

Paper Authors

Dr Brendan Walker-Munro, Dr Zena Assaad

Paper Abstract


As human science pushes the boundaries towards the development of artificial intelligence (AI), the sweep of progress has caused scholars and policymakers alike to question the legality of applying or utilising AI in various human endeavours. For example, debate has raged in international scholarship about the legitimacy of applying AI to weapon systems to form lethal autonomous weapon systems (LAWS). Yet the argument holds true even when AI is applied to a military autonomous system that is not weaponised: how does one hold a machine accountable for a crime? What about a tort? Can an artificial agent understand the moral and ethical content of its instructions? These are thorny questions, and in many cases these questions have been answered in the negative, as artificial entities lack any contingent moral agency. So what if the AI is not alone, but linked with or overseen by a human being, with their own moral and ethical understandings and obligations? Who is responsible for any malfeasance that may be committed? Does the human bear the legal risks of unethical or immoral decisions by an AI? These are some of the questions this manuscript seeks to engage with.
