Paper Title
Explanation-by-Example Based on Item Response Theory
Authors
Abstract
Intelligent systems that use Machine Learning classification algorithms are increasingly common in everyday society. However, many of these systems rely on black-box models that have no built-in means of explaining their own predictions. This situation leads researchers in the field, and society at large, to the following question: how can I trust the prediction of a model I cannot understand? In this sense, Explainable Artificial Intelligence (XAI) emerges as a field of AI that aims to create techniques capable of explaining a classifier's decisions to the end user. As a result, several techniques have appeared, such as Explanation-by-Example, which still has few initiatives consolidated by the community currently working on XAI. This research explores Item Response Theory (IRT) as a tool for explaining models and for measuring the level of reliability of the Explanation-by-Example approach. To this end, four datasets with different levels of complexity were used, and a Random Forest model was used as the hypothesis under test. In the test set, 83.8% of the errors came from instances for which IRT indicated the model to be unreliable.
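For readers unfamiliar with IRT, the following is a minimal sketch of the standard three-parameter logistic (3PL) item characteristic curve; the abstract does not state which IRT variant the paper adopts, so this formulation is given only as the common reference point. In ML applications of IRT, test instances typically play the role of items and classifiers the role of respondents:

\[
P(X_{ij} = 1 \mid \theta_j) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta_j - b_i)}}
\]

where \(\theta_j\) is the ability of respondent \(j\), and \(a_i\), \(b_i\), and \(c_i\) are the discrimination, difficulty, and guessing parameters of item \(i\). Under this reading, an instance whose difficulty \(b_i\) is high relative to the classifier's ability \(\theta_j\) yields a low probability of a correct response, which is the kind of signal an IRT-based reliability measure can exploit.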