Paper Title

Fairness on Synthetic Visual and Thermal Mask Images

Paper Authors

Kenneth Lai, Vlad Shmerko, Svetlana Yanushkevich

Paper Abstract

In this paper, we study performance and fairness on visual and thermal images and expand the assessment to masked synthetic images. Using the SpeakingFace and Thermal-Mask datasets, we propose a process to assess fairness on real images and show how the same process can be applied to synthetic images. The resulting process shows a demographic parity difference of 1.59 for random guessing, which increases to 5.0 when the recognition performance reaches a precision and recall rate of 99.99%. We show that inherently biased datasets can deeply impact the fairness of any biometric system. A primary cause of a biased dataset is class imbalance introduced by the data collection process. To address imbalanced datasets, the classes with fewer samples can be augmented with synthetic images to generate a more balanced dataset, resulting in less bias when training a machine learning system. For biometric-enabled systems, fairness is of critical importance, and the related concept of Equity, Diversity, and Inclusion (EDI) is well suited to generalizing fairness in biometrics; in this paper, we focus on the three most common demographic groups: age, gender, and ethnicity.
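
As an illustration of the fairness metric referenced in the abstract, the sketch below computes a demographic parity difference as the gap between the highest and lowest positive-prediction rates across demographic groups. This is a minimal, assumed formulation for illustration only; the helper name and toy data are hypothetical, and the paper's exact definition and scaling (which yields values such as 1.59 and 5.0) may differ.

```python
# Minimal sketch: demographic parity difference across demographic groups.
# Assumed formulation (max-min gap in positive-prediction rates); not
# necessarily the exact metric or scaling used in the paper.
from collections import defaultdict

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rate per group.

    y_pred : iterable of 0/1 predictions (e.g., "match" decisions)
    groups : iterable of demographic labels (e.g., age, gender, or ethnicity bins)
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, g in zip(y_pred, groups):
        counts[g][0] += pred
        counts[g][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Toy usage: predictions for two hypothetical demographic groups
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```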
