Paper Title
Explainable Model-Agnostic Similarity and Confidence in Face Verification
Paper Authors
Paper Abstract
Recently, face recognition systems have demonstrated remarkable performance and thus gained a vital role in our daily life. They already surpass human face verification capability in many scenarios. However, they lack explanations for their predictions. Compared to human operators, typical face recognition systems generate only binary decisions, without further explanation of or insight into those decisions. This work focuses on explanations for face recognition systems, which are vital for developers and operators. First, we introduce a confidence score for those systems based on the facial feature distance between two input images and the distribution of distances across a dataset. Second, we establish a novel visualization approach to obtain more meaningful predictions from a face recognition system, which maps the distance deviation caused by a systematic occlusion of the images. The result is blended with the original images and highlights similar and dissimilar facial regions. Lastly, we compute confidence scores and explanation maps for several state-of-the-art face verification datasets and release the results on a web platform. We optimize the platform for user-friendly interaction and hope to further improve the understanding of machine learning decisions. The source code is available on GitHub, and the web platform is publicly available at http://explainable-face-verification.ey.r.appspot.com.
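The two ideas in the abstract, a distribution-based confidence score and an occlusion-based distance-deviation map, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `embed` is a toy stand-in for a real face recognition network, and the confidence heuristic and decision `threshold` are assumptions chosen for clarity.

```python
import numpy as np


def embed(image):
    """Toy stand-in for a face recognition network: maps an image to a
    unit-norm feature vector (here, just the per-channel means)."""
    v = image.mean(axis=(0, 1))
    return v / (np.linalg.norm(v) + 1e-8)


def confidence(dist, genuine_dists, impostor_dists, threshold):
    """Heuristic confidence for the binary verification decision, based on
    where `dist` falls within reference distance distributions collected
    across a dataset (genuine pairs vs. impostor pairs)."""
    if dist < threshold:  # predicted "same person"
        # fraction of impostor pairs that are farther apart than this pair
        return float(np.mean(impostor_dists > dist))
    # predicted "different person": fraction of genuine pairs that are closer
    return float(np.mean(genuine_dists < dist))


def occlusion_map(img_a, img_b, patch=8):
    """Distance-deviation map: occlude img_a patch by patch and record how
    much the feature distance to img_b changes. Positive values mark regions
    that supported the match; negative values mark regions that opposed it.
    The map can then be blended with the original image as a heatmap."""
    base = np.linalg.norm(embed(img_a) - embed(img_b))
    h, w = img_a.shape[:2]
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = img_a.copy()
            occluded[y:y + patch, x:x + patch] = 0.0  # black occluder
            d = np.linalg.norm(embed(occluded) - embed(img_b))
            heat[y:y + patch, x:x + patch] = d - base
    return heat
```

With identical input images the base distance is zero, so every occlusion can only increase the distance and the map is non-negative; for genuinely different images the sign of each patch indicates whether that region made the faces look more or less alike.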