Title
The Human Factor in AI Safety
Authors
Abstract
AI-based systems are widely used across industries for decisions ranging from operational to tactical and strategic ones, in both low- and high-stakes contexts. The weaknesses and issues of these systems have gradually been reported publicly, including ethical issues, biased decisions, unsafe outcomes, and unfair decisions, to name a few. Research has tended to focus on optimizing AI; far less has examined its risks and unexpected negative consequences. Acknowledging these serious potential risks and the scarcity of research on them, I focus on the unsafe outcomes of AI. Specifically, I explore this issue through the lens of human-AI interaction during AI deployment. I discuss how the interaction of individuals with AI during its deployment raises new concerns that call for a solid and holistic mitigation plan, and argue that the safety of AI algorithms alone is not enough to make their operation safe. The end-users of AI-based systems, and their decision-making archetypes while collaborating with these systems, should be taken into account in AI risk management. Using several real-world scenarios, I highlight that users' decision-making archetypes should be treated as a design principle in AI-based systems.