Title

The Role of Individual User Differences in Interpretable and Explainable Machine Learning Systems

Authors

Lydia P. Gleaves, Reva Schwartz, David A. Broniatowski

Abstract

There is increased interest in assisting non-expert audiences to effectively interact with machine learning (ML) tools and understand the complex output such systems produce. Here, we describe user experiments designed to study how individual skills and personality traits predict interpretability, explainability, and knowledge discovery from ML-generated model output. Our work relies on Fuzzy Trace Theory, a leading theory of how humans process numerical stimuli, to examine how different end users will interpret the output they receive while interacting with the ML system. While our sample was small, we found that interpretability -- being able to make sense of system output -- and explainability -- understanding how that output was generated -- were distinct aspects of user experience. Additionally, subjects were better able to interpret model output if they possessed individual traits that promote metacognitive monitoring and editing, which are associated with more detailed, verbatim processing of ML output. Finally, subjects who were more familiar with ML systems felt better supported by them and more able to discover new patterns in data; however, this did not necessarily translate into meaningful insights. Our work motivates the design of systems that explicitly take users' mental representations into account during the design process to more effectively support end-user requirements.
