Dual constraints and adversarial learning for fair recommenders

Knowledge-Based Systems (2022)

Abstract
Recommender systems, which build on common artificial intelligence technologies, have a profound impact on people's lifestyles. However, recent studies have demonstrated that recommender systems suffer from fairness problems: users with certain attributes are treated unfairly. A fair recommender is one in which users with different attributes achieve the same recommendation accuracy. In particular, recommender systems rely entirely on users' behavior data for preference learning, which makes unfairness highly likely because the behavior data usually contains users' sensitive information. Unfortunately, few studies have explored the unfairness problem in recommender systems. To alleviate this problem, we present a novel fairness-aware recommender with dual fairness constraints (FRFC) to improve fairness in recommendations and protect users' sensitive information from being exposed. The model has two advantages: first, an adversarial-based graph neural network (GNN) is proposed to prevent the target user from being affected by the sensitive features of neighbor users; second, two fairness constraints are proposed to address the failure of the adversarial classifier on the whole dataset and unfair ranking losses. With this design, the FRFC model can effectively filter out users' sensitive information and give users with different attributes the same training opportunities, which helps produce fair recommendations. Finally, extensive experiments demonstrate that the proposed model significantly improves the fairness of recommendation results.
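The adversarial idea in the abstract can be sketched with a toy NumPy example. This is not the paper's FRFC model: a plain logistic-regression adversary stands in for the adversarial classifier, a fixed embedding table stands in for the GNN, and all data and hyperparameters below are illustrative assumptions. The embeddings are updated against the adversary's gradient (gradient-reversal style) so they become uninformative about a binary sensitive attribute:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, dim = 200, 8
sensitive = rng.integers(0, 2, n_users)          # binary sensitive attribute
emb = rng.normal(0.0, 0.1, (n_users, dim))       # toy user embeddings
emb[:, 0] += 2.0 * sensitive                     # dim 0 initially leaks it

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = np.zeros(dim)                 # adversary: logistic regression weights
lr_adv, lr_emb, lam = 0.5, 0.05, 1.0
for _ in range(500):
    # adversary step: ascend the log-likelihood of predicting `sensitive`
    p = sigmoid(emb @ w)
    w += lr_adv * emb.T @ (sensitive - p) / n_users
    # embedding step: move *against* the adversary's gradient (reversal),
    # scaled by the fairness trade-off weight lam
    p = sigmoid(emb @ w)
    emb -= lam * lr_emb * np.outer(sensitive - p, w)

# After training, the leaking dimension's group gap should have shrunk
# (it starts near 2.0) and the adversary should be far from perfect.
gap = abs(emb[sensitive == 1, 0].mean() - emb[sensitive == 0, 0].mean())
acc = ((sigmoid(emb @ w) > 0.5) == (sensitive == 1)).mean()
```

In the paper's full design, the dual fairness constraints additionally guard against the adversarial classifier failing on the whole dataset and against unfair ranking losses; this sketch covers only the adversarial filtering intuition.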
Keywords
Fair recommendation, Graph neural network, Recommender systems, Adversarial learning