Contextual Bandits for Multi-objective Recommender Systems

BRACIS (2015)

Cited 4 | Viewed 56
Abstract
The contextual bandit framework has become a popular solution for online interactive recommender systems. Traditionally, the literature on interactive recommender systems has focused on recommendation accuracy. However, it is increasingly recognized that accuracy alone is not a sufficient quality criterion. Other concepts, such as diversity and novelty, have therefore been suggested to improve recommendation evaluation. Simultaneously considering multiple criteria in payoff functions leads to multi-objective recommendation. In this paper, we model the payoff function of contextual bandits to consider accuracy, diversity and novelty simultaneously. We evaluated the proposed algorithm on the Yahoo! Front Page Module dataset, which contains over 33 million events. Results showed that: (a) we are able to improve recommendation quality when considering all objectives equally, and (b) we allow the compromise between accuracy, diversity and novelty to be adjusted, so that recommendation emphasis can be tuned to the needs of different users.
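The abstract describes combining accuracy, diversity and novelty in a single bandit payoff with adjustable emphasis. A minimal sketch of one common way to do this, linear scalarization with tunable weights, is shown below; the function name, weight defaults, and the assumption that each objective signal is normalized to [0, 1] are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: linear scalarization of three objective signals
# into one bandit payoff. Weights and normalization are assumptions,
# not the formulation used in the paper.

def multi_objective_payoff(accuracy: float, diversity: float, novelty: float,
                           w_acc: float = 1 / 3, w_div: float = 1 / 3,
                           w_nov: float = 1 / 3) -> float:
    """Weighted sum of normalized objective signals, each assumed in [0, 1]."""
    return w_acc * accuracy + w_div * diversity + w_nov * novelty

# Equal weights treat all objectives alike; shifting weight toward
# accuracy emphasizes relevance at the cost of diversity and novelty.
balanced = multi_objective_payoff(0.9, 0.3, 0.6)
accuracy_heavy = multi_objective_payoff(0.9, 0.3, 0.6,
                                        w_acc=0.8, w_div=0.1, w_nov=0.1)
```

A bandit policy such as LinUCB would then optimize this scalarized payoff instead of click-through rate alone, which is how adjusting the weights shifts the recommendation emphasis.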
Keywords
Online Recommender Systems, Multi-armed Bandits, Multi-objective