Paper title
Visualisation and knowledge discovery from interpretable models
Paper authors
Paper abstract
An increasing number of sectors that affect human lives are using Machine Learning (ML) tools. Hence, the need to understand their working mechanisms and to evaluate their fairness in decision-making has become paramount, ushering in the era of Explainable AI (XAI). In this contribution we introduce a few intrinsically interpretable models which, in addition to extracting knowledge from the dataset and about the problem, are also capable of dealing with missing values. These models also support visualisation of the classifier and its decision boundaries: they are angle-based variants of Learning Vector Quantization. We demonstrate the algorithms on a synthetic dataset and a real-world one (the heart disease dataset from the UCI repository). The newly developed classifiers helped in investigating the complexities of the UCI dataset as a multiclass problem. When the dataset was treated as a binary-class problem, the performance of the developed classifiers was comparable to that reported in the literature for this dataset, with the additional value of interpretability.
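To give a rough sense of the family of models the abstract refers to, the sketch below implements an LVQ1-style learner that selects and updates prototypes using an angular (cosine-based) dissimilarity instead of the usual Euclidean distance. This is a minimal illustration under stated assumptions (one unit-norm prototype per class, a simple winner-take-all update), not the paper's actual formulation; all function names here are hypothetical.

```python
import numpy as np

def angular_dissimilarity(x, w):
    """1 - cos(x, w): 0 for perfectly aligned vectors, 2 for opposite ones."""
    return 1.0 - np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w))

def train_angle_lvq1(X, y, n_epochs=30, lr=0.05, seed=0):
    """LVQ1-style training with an angular dissimilarity.
    Assumption for this sketch: one unit-norm prototype per class,
    initialised at the (normalised) class mean."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    W = np.array([X[y == c].mean(axis=0) for c in classes], dtype=float)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            x, t = X[i], y[i]
            d = np.array([angular_dissimilarity(x, w) for w in W])
            j = int(np.argmin(d))                      # winning prototype
            sign = 1.0 if classes[j] == t else -1.0    # attract if correct, repel if not
            W[j] += sign * lr * (x / np.linalg.norm(x) - W[j])
            W[j] /= np.linalg.norm(W[j])               # keep prototypes on the unit sphere
    return classes, W

def predict_angle_lvq(classes, W, X):
    """Assign each sample the class of its angularly nearest prototype."""
    D = np.array([[angular_dissimilarity(x, w) for w in W] for x in X])
    return classes[np.argmin(D, axis=1)]
```

Because the dissimilarity depends only on direction, the learned prototypes themselves can be inspected and plotted, which is one route to the visualisation and interpretability the abstract emphasises.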