Paper Title
Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter
Paper Authors
Paper Abstract
Understanding predictions made by Machine Learning models is critical in many applications. In this work, we investigate the performance of two methods for explaining tree-based models: Tree Interpreter (TI) and SHapley Additive exPlanations TreeExplainer (SHAP-TE). Using a case study on detecting anomalies in the job runtimes of applications that utilize cloud-computing platforms, we compare these approaches using a variety of metrics, including computation time, significance of attribution values, and explanation accuracy. We find that, although SHAP-TE offers consistency guarantees over TI at the cost of increased computation, consistency does not necessarily improve explanation performance in our case study.
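The following is a minimal sketch (not the authors' experimental code) of how per-feature attributions can be obtained from both methods for a tree-based model using the shap and treeinterpreter Python packages; the toy data and random-forest model are illustrative placeholders, not the job-runtime dataset studied in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
import shap
from treeinterpreter import treeinterpreter as ti

# Toy regression data standing in for job-runtime features (illustrative only).
rng = np.random.RandomState(0)
X = rng.rand(200, 4)
y = 3.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# SHAP TreeExplainer: attributions sum to the prediction minus the expected value.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# TreeInterpreter: prediction = bias + sum of per-feature contributions.
pred, bias, contributions = ti.predict(model, X[:5])

# Both methods yield one attribution per feature per instance, e.g. shape (5, 4) here.
print(shap_values.shape, contributions.shape)
```

Comparing the two attribution matrices (e.g., their ranking of features per instance) is one way to operationalize the kind of comparison the abstract describes.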