Paper Title

Explainability in Mechanism Design: Recent Advances and the Road Ahead

Paper Authors

Sharadhi Alape Suryanarayana, David Sarne, Sarit Kraus

Abstract

Designing and implementing explainable systems is seen as the next step towards increasing user trust in, acceptance of, and reliance on Artificial Intelligence (AI) systems. While explaining choices made by black-box algorithms such as machine learning and deep learning has occupied most of the limelight, systems that attempt to explain decisions (even simple ones) in the context of social choice are steadily catching up. In this paper, we provide a comprehensive survey of explainability in mechanism design, a domain characterized by economically motivated agents and often having no single choice that maximizes all individual utility functions. We discuss the main properties and goals of explainability in mechanism design, distinguishing them from those of Explainable AI in general. This discussion is followed by a thorough review of the challenges one may face when working on Explainable Mechanism Design, and we propose a few solution concepts to address them.
