Paper Title
Adversarial Learning for Debiasing Knowledge Graph Embeddings
Paper Authors
Paper Abstract
Knowledge Graphs (KG) are gaining increasing attention in both academia and industry. Despite their diverse benefits, recent research has identified social and cultural biases embedded in the representations learned from KGs. Such biases can have detrimental consequences for different population and minority groups as applications of KGs begin to intersect and interact with social spheres. This paper aims at identifying and mitigating such biases in Knowledge Graph (KG) embeddings. As a first step, we explore popularity bias -- the relationship between node popularity and link prediction accuracy. In the case of node2vec graph embeddings, we find that the prediction accuracy of the embedding is negatively correlated with the degree of the node. However, in the case of knowledge-graph embeddings (KGE), we observe the opposite trend. As a second step, we explore gender bias in KGE; a careful examination of popular KGE algorithms suggests that sensitive attributes, such as a person's gender, can be predicted from the embedding. This implies that such biases in popular KGs are captured by the structural properties of the embedding. As a preliminary solution to debiasing KGs, we introduce FAN (Filtering Adversarial Network), a novel framework to filter out sensitive-attribute information from KG embeddings. We also suggest the applicability of FAN for debiasing other network embeddings, which could be explored in future work.
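The abstract's claim that a sensitive attribute can be predicted from learned embeddings is typically tested with a probing classifier. The sketch below illustrates that methodology on hypothetical synthetic data (the paper's actual embeddings and probe are not specified here): a binary attribute is made to "leak" into two dimensions of otherwise random vectors, and a plain logistic-regression probe trained from scratch recovers it well above chance. All names and parameters are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(0)

DIM, N = 8, 200  # embedding dimension and dataset size (illustrative)

def make_dataset(n):
    # Hypothetical stand-in for learned KG embeddings: a binary
    # "sensitive attribute" g leaks into the first two dimensions.
    data = []
    for _ in range(n):
        g = random.randint(0, 1)
        x = [random.gauss(0.0, 1.0) for _ in range(DIM)]
        x[0] += 1.5 * g
        x[1] += 1.5 * g
        data.append((x, g))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_probe(train, epochs=200, lr=0.1):
    # Logistic-regression probe fit with batch gradient descent.
    w, b = [0.0] * DIM, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * DIM, 0.0
        for x, g in train:
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - g
            for i in range(DIM):
                gw[i] += err * x[i]
            gb += err
        w = [wi - lr * gi / len(train) for wi, gi in zip(w, gw)]
        b -= lr * gb / len(train)
    return w, b

def accuracy(model, data):
    w, b = model
    hits = sum(
        1 for x, g in data
        if (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5) == (g == 1)
    )
    return hits / len(data)

data = make_dataset(N)
probe = train_probe(data[:150])          # train on 150 embeddings
probe_acc = accuracy(probe, data[150:])  # evaluate on 50 held-out ones
print(f"probe accuracy on held-out embeddings: {probe_acc:.2f}")
```

If the probe scores near 0.5 (chance), the embeddings carry little attribute information; accuracy well above chance is the kind of leakage the abstract reports, and the goal of an adversarial filter such as FAN would be to drive this probe back toward chance while preserving link-prediction utility.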