Paper Title

Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)

Authors

Stephanie Galaitsi, Benjamin D. Trump, Jeffrey M. Keisler, Igor Linkov, and Alexander Kott

Abstract

To benefit from AI advances, users and operators of AI systems must have reason to trust it. Trust arises from multiple interactions, where predictable and desirable behavior is reinforced over time. Providing the system's users with some understanding of AI operations can support predictability, but forcing AI to explain itself risks constraining AI capabilities to only those reconcilable with human cognition. We argue that AI systems should be designed with features that build trust by bringing decision-analytic perspectives and formal tools into AI. Instead of trying to achieve explainable AI, we should develop interpretable and actionable AI. Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations. In doing so, it will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making and ensure broad benefits from deploying and advancing its computational capabilities.
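The abstract's call for "explicit quantifications and visualizations of user confidence in AI recommendations" can be illustrated with a minimal sketch. The snippet below is an illustration only, not the AI2 framework proposed in the paper: it assumes a generic scikit-learn classifier, calibrates its predicted probabilities so they can stand in for a quantified confidence, and prints a crude text-based confidence bar alongside each recommendation. The dataset, model, and decision threshold are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): pairing an AI recommendation with an
# explicit, quantified confidence estimate that a user could inspect and test.
# All model, dataset, and threshold choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrate predicted probabilities so they can serve as a defensible
# quantification of confidence, rather than raw model scores.
model = CalibratedClassifierCV(RandomForestClassifier(random_state=0), cv=5)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
for p in probs[:5]:
    recommendation = "act" if p >= 0.5 else "hold"   # hypothetical decision rule
    bar = "#" * int(p * 20)                          # crude text "visualization"
    print(f"recommendation={recommendation:<4}  confidence={p:0.2f}  [{bar:<20}]")
```

A fuller treatment in the paper's spirit would expose such confidence estimates to users over repeated interactions, so that predictions can be examined and tested as a basis for trust.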
