Paper Title

FairRec: Fairness-aware News Recommendation with Decomposed Adversarial Learning

Authors

Chuhan Wu, Fangzhao Wu, Xiting Wang, Yongfeng Huang, Xing Xie

Abstract

News recommendation is important for online news services. Existing news recommendation models are usually learned from users' news click behaviors. Usually, the behaviors of users with the same sensitive attribute (e.g., gender) have similar patterns, and news recommendation models can easily capture these patterns. This may lead to biases related to sensitive user attributes in the recommendation results, e.g., always recommending sports news to male users, which is unfair since users may not receive diverse news information. In this paper, we propose a fairness-aware news recommendation approach with decomposed adversarial learning and orthogonality regularization, which can alleviate unfairness in news recommendation brought by the biases of sensitive user attributes. In our approach, we propose to decompose the user interest model into two components. One component aims to learn a bias-aware user embedding that captures the bias information on sensitive user attributes, and the other aims to learn a bias-free user embedding that only encodes attribute-independent user interest information for fairness-aware news recommendation. In addition, we propose to apply an attribute prediction task to the bias-aware user embedding to enhance its ability in bias modeling, and we apply adversarial learning to the bias-free user embedding to remove the bias information from it. Moreover, we propose an orthogonality regularization method that encourages the bias-free user embedding to be orthogonal to the bias-aware one, better distinguishing the two. For fairness-aware news ranking, we only use the bias-free user embedding. Extensive experiments on a benchmark dataset show that our approach can effectively improve fairness in news recommendation with minor performance loss.
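The orthogonality regularization described in the abstract can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: `bias_free` and `bias_aware` are hypothetical user embedding vectors, and the penalty shown here is the squared cosine similarity between them (the paper's exact regularizer may differ); minimizing it drives the two embeddings toward orthogonality.

```python
import math

def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    # Euclidean (L2) norm of a vector.
    return math.sqrt(dot(u, u))

def orthogonality_penalty(bias_free, bias_aware):
    """Squared cosine similarity between the two user embeddings.

    A hypothetical stand-in for the paper's orthogonality regularizer:
    the penalty is 0 when the embeddings are orthogonal and grows as
    the bias-free embedding aligns with the bias-aware direction.
    """
    cos = dot(bias_free, bias_aware) / (norm(bias_free) * norm(bias_aware))
    return cos ** 2

# Hypothetical 4-dimensional user embeddings for illustration.
bias_aware = [1.0, 0.0, 0.0, 0.0]
aligned    = [2.0, 0.0, 0.0, 0.0]   # fully aligned with the bias direction
orthogonal = [0.0, 1.0, 0.0, 0.0]   # carries no bias-direction component

print(orthogonality_penalty(aligned, bias_aware))     # 1.0 (maximal penalty)
print(orthogonality_penalty(orthogonal, bias_aware))  # 0.0 (no penalty)
```

In training, such a term would be added to the recommendation loss alongside the attribute-prediction and adversarial losses, so that the bias-free embedding used for ranking stays separated from the bias-aware one.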
