Paper Title
Feature Necessity & Relevancy in ML Classifier Explanations
Paper Authors
Paper Abstract
Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction. In some applications, and besides asking for an explanation, it is also critical to understand whether sensitive features can occur in some explanation, or whether a non-interesting feature must occur in all explanations. This paper starts by relating such queries respectively with the problems of relevancy and necessity in logic-based abduction. The paper then proves membership and hardness results for several families of ML classifiers. Afterwards the paper proposes concrete algorithms for two classes of classifiers. The experimental results confirm the scalability of the proposed algorithms.
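The abstract's notions can be illustrated with a brute-force sketch (the paper itself proposes logic-based algorithms; this toy enumeration is only for intuition). For a hypothetical boolean classifier, an explanation is a subset-minimal set of features that, when fixed to their values in the instance, forces the prediction. A feature is then relevant if it occurs in some explanation and necessary if it occurs in all of them. All names below (`clf`, `is_sufficient`, `explanations`) are illustrative, not from the paper.

```python
from itertools import combinations

# Hypothetical toy classifier: predicts 1 iff (x0 AND x1) OR x2.
def clf(x):
    return int((x[0] and x[1]) or x[2])

def is_sufficient(instance, subset):
    """True if fixing the features in `subset` to their values in
    `instance` forces the prediction, however the rest vary."""
    free = [i for i in range(len(instance)) if i not in subset]
    target = clf(instance)
    for bits in range(2 ** len(free)):
        x = list(instance)
        for k, i in enumerate(free):
            x[i] = (bits >> k) & 1
        if clf(x) != target:
            return False
    return True

def explanations(instance):
    """All subset-minimal sufficient feature sets (abductive explanations)."""
    n = len(instance)
    suff = [set(s) for r in range(n + 1)
            for s in combinations(range(n), r)
            if is_sufficient(instance, set(s))]
    return [s for s in suff if not any(t < s for t in suff)]

instance = (1, 1, 1)
expls = explanations(instance)                      # [{0, 1}, {2}]
relevant  = lambda f: any(f in e for e in expls)    # occurs in SOME explanation
necessary = lambda f: all(f in e for e in expls)    # occurs in ALL explanations
```

Here feature 2 is relevant (it forms an explanation by itself) but not necessary, since {0, 1} is an explanation that omits it. The paper shows that deciding these queries efficiently, without such exponential enumeration, depends on the classifier family.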