Paper Title
Explaining black-box text classifiers for disease-treatment information extraction
Paper Authors
Paper Abstract
Deep neural networks and other intricate Artificial Intelligence (AI) models have reached high levels of accuracy on many biomedical natural language processing tasks. However, their applicability in real-world use cases may be limited by their opaque inner workings and decision logic. A post-hoc explanation method can approximate the behavior of a black-box AI model by extracting relationships between feature values and outcomes. In this paper, we introduce a post-hoc explanation method that utilizes confident itemsets to approximate the behavior of black-box classifiers for medical information extraction. By incorporating medical concepts and semantics into the explanation process, our explanator finds semantic relations between inputs and outputs in different parts of a black-box classifier's decision space. The experimental results show that our explanation method can outperform perturbation-based and decision-set-based explanators in terms of the fidelity and interpretability of explanations produced for predictions on a disease-treatment information extraction task.
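To make the notion of a confident itemset concrete, the sketch below shows one simple way such itemsets could be mined from a black-box classifier's predictions: combinations of medical concepts that co-occur in inputs and whose association with a predicted class exceeds a confidence threshold are kept as candidate explanation rules. This is a minimal illustration under assumed inputs, not the paper's actual algorithm; the concept names, class labels, thresholds, and the function `mine_confident_itemsets` are all hypothetical.

```python
from itertools import combinations
from collections import defaultdict


def mine_confident_itemsets(concept_sets, labels, max_size=2, min_conf=0.8, min_supp=2):
    """Return (itemset, label, confidence) triples whose confidence for a
    black-box-predicted class meets `min_conf` and whose support meets `min_supp`."""
    itemset_counts = defaultdict(int)        # support of each concept itemset
    itemset_label_counts = defaultdict(int)  # joint support of (itemset, predicted label)

    for concepts, label in zip(concept_sets, labels):
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(concepts), size):
                itemset_counts[itemset] += 1
                itemset_label_counts[(itemset, label)] += 1

    confident = []
    for (itemset, label), joint in itemset_label_counts.items():
        support = itemset_counts[itemset]
        confidence = joint / support  # P(label | itemset) over the observed predictions
        if support >= min_supp and confidence >= min_conf:
            confident.append((itemset, label, confidence))
    return sorted(confident, key=lambda t: -t[2])


if __name__ == "__main__":
    # Toy data: sets of medical concepts detected in input sentences, paired with
    # labels standing in for the black-box classifier's predictions (illustrative only).
    concept_sets = [
        {"aspirin", "headache"}, {"aspirin", "headache"}, {"aspirin", "fatigue"},
        {"chemotherapy", "tumor"}, {"chemotherapy", "tumor"}, {"chemotherapy", "hair loss"},
    ]
    labels = ["CURE", "CURE", "NO_RELATION", "CURE", "CURE", "SIDE_EFFECT"]

    for itemset, label, conf in mine_confident_itemsets(concept_sets, labels):
        print(f"{set(itemset)} -> {label} (confidence={conf:.2f})")
```

In an actual pipeline, the concept sets would come from a biomedical concept recognizer applied to the input text and the labels from the black-box model's own predictions; the toy sets above merely stand in for those inputs.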