Paper Title
Actionable Interpretation of Machine Learning Models for Sequential Data: Dementia-related Agitation Use Case
Paper Authors
Paper Abstract
Machine learning has shown success on complex learning problems in which data and parameters can be multidimensional and too complex for a first-principles analysis. Some applications that utilize machine learning require human interpretability, not just to understand a particular result (classification, detection, etc.) but also for humans to take action based on that result. Black-box machine learning model interpretation has been studied, but recent work has focused on validating and improving model performance. In this work, an actionable interpretation of black-box machine learning models is presented. The proposed technique focuses on extracting actionable measures that help users make a decision or take an action. Actionable interpretation can be implemented on most traditional black-box machine learning models. It uses the already-trained model, its training data, and data processing techniques to extract actionable items from the model outcome and its time-series inputs. An implementation of actionable interpretation is demonstrated with a use case: dementia-related agitation prediction from the ambient environment. It is shown that actionable items can be extracted, such as a decrease in in-home light level that triggers an agitation episode. This use case of actionable interpretation can help dementia caregivers take action to intervene and prevent agitation.
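To make the abstract's light-level example concrete, below is a minimal, hypothetical Python sketch of the general idea: given an already-trained black-box classifier and its time-series input windows, surface per-feature trends in windows predicted as agitation as candidate actionable items. The function names, window shapes, label convention, and slope threshold are all assumptions for illustration, not the paper's actual implementation.

import numpy as np

def feature_slopes(window):
    """Least-squares slope of each feature over a (time, n_features) window."""
    t = np.arange(window.shape[0])
    return np.polyfit(t, window, deg=1)[0]  # slope coefficients, shape (n_features,)

def extract_actionable_items(model, windows, feature_names, threshold=0.1):
    """Report strongly trending input features for windows the model
    classifies as agitation (label 1, assumed).

    windows: array of shape (n_windows, time, n_features)
    model:   any trained black-box classifier exposing predict()
             on flattened windows (assumption for this sketch)
    """
    items = []
    preds = model.predict(windows.reshape(len(windows), -1))
    for window, pred in zip(windows, preds):
        if pred != 1:  # only explain predicted agitation episodes
            continue
        for name, slope in zip(feature_names, feature_slopes(window)):
            if abs(slope) >= threshold:  # strong trend -> candidate action
                direction = "decreasing" if slope < 0 else "increasing"
                items.append(f"{name} is {direction} before the predicted episode")
    return items

For example, with a scikit-learn classifier fitted on flattened windows, extract_actionable_items(clf, windows, ["light_level", "sound_level", "temperature"]) would return messages such as "light_level is decreasing before the predicted episode", which a caregiver-facing interface could turn into an intervention prompt (e.g., raise the lights). The slope threshold here stands in for the paper's data processing step; any trend or change-point detector could be substituted.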