Paper Title

Mitigating Relational Bias on Knowledge Graphs

Paper Authors

Yu-Neng Chuang, Kwei-Herng Lai, Ruixiang Tang, Mengnan Du, Chia-Yuan Chang, Na Zou, Xia Hu

Abstract

Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning. Although KGNNs effectively model the structural information of knowledge graphs, these frameworks amplify the underlying data bias, leading to discrimination against certain groups or individuals in downstream applications. Additionally, since existing debiasing approaches mainly focus on entity-wise bias, eliminating the multi-hop relational bias that pervasively exists in knowledge graphs remains an open question. Eliminating relational bias is very challenging, however, due to the sparsity of the bias-generating paths and the non-linear proximity structure of knowledge graphs. To tackle these challenges, we propose Fair-KGNN, a KGNN framework that simultaneously alleviates multi-hop bias and preserves the entity-to-relation proximity information in knowledge graphs. The proposed framework generalizes to mitigate relational bias for all types of KGNNs. We develop two instances of Fair-KGNN by incorporating it into two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias. Experiments on three benchmark knowledge graph datasets demonstrate that Fair-KGNN can effectively mitigate unfairness during representation learning while preserving the predictive performance of the KGNN models.
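For readers unfamiliar with the relational message passing that the abstract refers to: the sketch below is a minimal numpy version of the standard RGCN propagation rule (h_i' = ReLU(W_0 h_i + Σ_r Σ_{j∈N_r(i)} (1/c_{i,r}) W_r h_j)) that Fair-KGNN builds on. It illustrates only the base KGNN layer, not the paper's debiasing mechanism, which the abstract does not detail; all names here are illustrative.

```python
import numpy as np

def rgcn_layer(h, edges, rel_weights, w_self):
    """One relational graph convolution (RGCN) layer.

    h:           (num_nodes, d_in) node features
    edges:       list of (src, rel, dst) triples
    rel_weights: dict mapping rel id -> (d_in, d_out) weight matrix W_r
    w_self:      (d_in, d_out) self-loop weight matrix W_0
    """
    # Self-loop term: W_0 h_i for every node.
    out = h @ w_self

    # Normalization constant c_{i,r}: number of relation-r neighbors of node i.
    counts = {}
    for _, rel, dst in edges:
        counts[(dst, rel)] = counts.get((dst, rel), 0) + 1

    # Relation-specific aggregation: (1 / c_{i,r}) * W_r h_j per incoming edge.
    for src, rel, dst in edges:
        out[dst] += (h[src] @ rel_weights[rel]) / counts[(dst, rel)]

    return np.maximum(out, 0.0)  # ReLU nonlinearity
```

In a Fair-KGNN-style setup, the debiasing objective would act on the representations this layer produces; the layer itself is the structural component shared by RGCN-based models.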
