Paper Title

XAI in the context of Predictive Process Monitoring: Too much to Reveal

Authors

Ghada Elkhawaga, Mervat Abuelkheir, Manfred Reichert

Abstract

Predictive Process Monitoring (PPM) has been integrated into process mining tools as a value-adding task. PPM provides useful predictions on the further execution of running business processes. To this end, machine learning (ML)-based techniques are widely employed in the context of PPM. To gain stakeholders' trust in and advocacy of PPM predictions, eXplainable Artificial Intelligence (XAI) methods are employed to compensate for the lack of transparency of the most efficient predictive models. Even when employed under the same settings regarding data, preprocessing techniques, and ML models, explanations generated by multiple XAI methods differ profoundly. A comparison is missing that distinguishes the XAI characteristics, or the underlying conditions, that are deterministic for an explanation. To address this gap, we provide a framework that enables studying the effect of different PPM-related settings and ML model-related choices on the characteristics and expressiveness of the resulting explanations. In addition, we compare how the characteristics of different explainability methods shape the resulting explanations and enable them to reflect the underlying model's reasoning process.
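The abstract's core observation is that different XAI methods applied to the same model can yield different feature attributions. The following is a minimal, illustrative sketch (not the paper's framework; all names and the toy model are assumptions) contrasting a model-specific attribution, i.e. coefficient magnitudes, with a model-agnostic one, i.e. permutation importance, for one fixed linear model:

```python
import random

random.seed(0)

# Toy "predictive model": score = 2*x0 + 1*x1 + 0*x2 (weights chosen for illustration)
WEIGHTS = [2.0, 1.0, 0.0]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

# Small synthetic feature matrix standing in for encoded event-log features
data = [[random.random() for _ in range(3)] for _ in range(200)]

# Method 1: model-specific attribution -- absolute coefficient magnitude
coef_importance = [abs(w) for w in WEIGHTS]

# Method 2: model-agnostic permutation importance -- mean absolute shift in
# the prediction when one feature column is shuffled
def permutation_importance(feature_idx):
    base = [model(x) for x in data]
    shuffled_col = [x[feature_idx] for x in data]
    random.shuffle(shuffled_col)
    perturbed = []
    for x, s in zip(data, shuffled_col):
        x2 = list(x)
        x2[feature_idx] = s
        perturbed.append(model(x2))
    return sum(abs(a - b) for a, b in zip(base, perturbed)) / len(data)

perm_importance = [permutation_importance(i) for i in range(3)]
print("coefficient attribution:", coef_importance)
print("permutation importance :", [round(v, 3) for v in perm_importance])
```

On this toy model the two rankings agree, but the scores are on different scales and, once correlated features or preprocessing enter the picture, the rankings themselves can diverge, which is the kind of discrepancy the proposed framework is meant to study systematically.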
