Paper Title
Monitoring Trust in Human-Machine Interactions for Public Sector Applications
Paper Authors
Paper Abstract
The work reported here addresses the capacity of psychophysiological sensors and measures, specifically Electroencephalogram (EEG) and Galvanic Skin Response (GSR), to detect levels of trust in humans during AI-supported Human-Machine Interaction (HMI). Improvements to the analysis of EEG and GSR data may produce models that perform as well as, or better than, traditional tools. A challenge in analyzing EEG and GSR data is the large amount of training data required, owing to the large number of variables in the measurements. Researchers have routinely used standard machine-learning classifiers such as artificial neural networks (ANN), support vector machines (SVM), and K-nearest neighbors (KNN). Traditionally, these have provided few insights into which features of the EEG and GSR data drive the most and least accurate predictions, making it harder to improve the HMI and the human-machine trust relationship. A key ingredient in applying trust-sensor research results to practical situations, and in monitoring trust in work environments, is understanding which features contribute to trust and then reducing the amount of data needed for practical applications. We used the Local Interpretable Model-agnostic Explanations (LIME) model as a process to reduce the volume of data required to monitor and enhance trust in HMI systems, a technique that could be valuable for governmental and public sector applications. Explainable AI can make HMI systems transparent and promote trust. From customer service in government agencies and community-level non-profit public service organizations to national military and cybersecurity institutions, many public sector organizations are increasingly concerned with having effective and ethical HMI and services that are trustworthy, unbiased, and free of unintended negative consequences.
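For illustration, the following is a minimal sketch, not the authors' pipeline, of how LIME can rank feature importance for a standard classifier and support retraining on a reduced feature set. The synthetic data, feature names, number of explained instances, and top-k cutoff are all illustrative assumptions standing in for EEG/GSR measurements and trust labels.

```python
# Minimal sketch: LIME-based feature ranking for data reduction (assumed setup,
# not the paper's actual data or classifier configuration).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
n_samples, n_features = 500, 40  # stand-in for EEG band powers + GSR statistics
X = rng.normal(size=(n_samples, n_features))
# Hypothetical binary trust label driven by a few features plus noise.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)  # a standard classifier, as in the abstract

feature_names = [f"feat_{i}" for i in range(n_features)]  # hypothetical names
explainer = LimeTabularExplainer(
    X_tr, feature_names=feature_names,
    class_names=["no_trust", "trust"], discretize_continuous=False,
)

# Aggregate absolute LIME weights over a sample of test instances to rank features.
importance = np.zeros(n_features)
for x in X_te[:50]:
    exp = explainer.explain_instance(x, clf.predict_proba, num_features=n_features)
    for idx, weight in exp.as_map()[1]:
        importance[idx] += abs(weight)

# Keep only the most influential features (cutoff of 8 is an assumed choice)
# and retrain on the reduced measurement set.
top_k = np.argsort(importance)[::-1][:8]
clf_small = SVC(probability=True).fit(X_tr[:, top_k], y_tr)
print("Accuracy, full vs. reduced feature set:",
      clf.score(X_te, y_te), clf_small.score(X_te[:, top_k], y_te))
```

In this sketch, the per-instance LIME explanations are aggregated into a global ranking; retraining on the top-ranked features shows how the volume of sensor data needed in deployment could be reduced while keeping the model's behavior interpretable.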