Paper Title


Evaluating Local Model-Agnostic Explanations of Learning to Rank Models with Decision Paths

Authors

Amir Hossein Akhavan Rahnama, Judith Butepage

Abstract


Local explanations of learning-to-rank (LTR) models are thought to extract the most important features that contribute to the ranking predicted by the LTR model for a single data point. Evaluating the accuracy of such explanations is challenging since the ground truth feature importance scores are not available for most modern LTR models. In this work, we propose a systematic evaluation technique for explanations of LTR models. Instead of using black-box models, such as neural networks, we propose to focus on tree-based LTR models, from which we can extract the ground truth feature importance scores using decision paths. Once extracted, we can directly compare the ground truth feature importance scores to the feature importance scores generated with explanation techniques. We compare two recently proposed explanation techniques for LTR models when using decision trees and gradient boosting models on the MQ2008 dataset. We show that the explanation accuracy in these techniques can largely vary depending on the explained model and even which data point is explained.
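The core idea of the evaluation, extracting per-instance ground-truth feature importance scores from the decision path of a tree-based model, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's `DecisionTreeRegressor` as a stand-in for a tree-based LTR model and synthetic data in place of MQ2008, and attributes to each feature the impurity decrease at the nodes on the data point's decision path that split on it.

```python
# Minimal sketch: per-instance "ground truth" feature importance from a
# decision path. Assumptions (not from the paper): scikit-learn regression
# tree as the tree-based model, synthetic data instead of MQ2008.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                # 200 documents, 5 features
y = 3 * X[:, 0] + X[:, 2] + rng.normal(scale=0.1, size=200)  # relevance scores

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

def path_importance(tree, x):
    """Attribute to each feature the weighted impurity decrease of the
    splits on x's decision path; normalize to sum to 1."""
    t = tree.tree_
    node_ids = tree.decision_path(x.reshape(1, -1)).indices  # nodes x visits
    imp = np.zeros(x.shape[0])
    for n in node_ids:
        if t.children_left[n] == -1:                         # leaf: no split
            continue
        left, right = t.children_left[n], t.children_right[n]
        decrease = (t.weighted_n_node_samples[n] * t.impurity[n]
                    - t.weighted_n_node_samples[left] * t.impurity[left]
                    - t.weighted_n_node_samples[right] * t.impurity[right])
        imp[t.feature[n]] += decrease
    total = imp.sum()
    return imp / total if total > 0 else imp

gt = path_importance(tree, X[0])
print(np.argsort(gt)[::-1])  # features ranked by ground-truth importance
```

These ground-truth scores can then be compared directly against the feature importance scores an explanation technique produces for the same data point, e.g. via rank correlation over the feature ordering.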
