Adversarial Bandits With Multi-User Delayed Feedback: Theory and Application

Yandi Li, Jianxiong Guo, Yupeng Li, Tian Wang, Weijia Jia

IEEE Transactions on Mobile Computing (2024)

Abstract
Multi-armed bandit (MAB) models have attracted significant research attention due to their applicability and effectiveness in real-world scenarios such as resource allocation in uncertain environments, online advertising, and dynamic pricing. As an important branch, adversarial multi-armed bandit problems with delayed feedback have recently been proposed and studied by many researchers: a conceptual adversary strategically selects the reward distributions associated with each arm to challenge the learning algorithm, and the agent experiences delays in receiving the corresponding reward feedback after taking an action. However, existing models restrict the feedback to being generated by a single user, which makes them inapplicable to the prevalent scenarios involving multiple users (e.g., ad recommendation for a group of users). In this paper, we consider delayed feedback that comes from multiple users with no restriction on its internal distribution, while the feedback delays are arbitrary and unknown to the player in advance. Moreover, for different users in a round, we make no assumption of latent correlation among the feedback delays. We thus formulate an adversarial multi-armed bandit problem with multi-user delayed feedback and design a modified EXP3 algorithm named MUD-EXP3, which makes a decision at each round based on importance-weighted estimators of the feedback received from different users. Assuming a known terminal round index $T$ , number of users $M$ , number of arms $N$ , and delay upper bound $d_{max}$ , we prove a regret of $\mathcal {O}(\sqrt{TM^{2}\ln {N}(N\mathrm{e}+4d_{max})})$ . Furthermore, for the more common case of unknown $T$ , we propose an adaptive algorithm named AMUD-EXP3 with sublinear regret in $T$ . Finally, extensive experiments demonstrate the correctness and effectiveness of our algorithms in dynamic environments.
Keywords
Adversarial bandit, applications, EXP3, multi-user delayed feedback, online learning, regret analysis
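The abstract describes MUD-EXP3 as an EXP3 variant that updates exponential weights using importance-weighted estimates of rewards that arrive late from multiple users. The sketch below illustrates that core idea only; the class and method names are our own, and the paper's exact estimator, delay handling, and learning-rate tuning (from $T$, $M$, $N$, $d_{max}$) differ.

```python
import math
import random

class MUDEXP3Sketch:
    """Illustrative EXP3-style learner with multi-user delayed feedback.

    Hypothetical simplification for exposition, not the paper's MUD-EXP3:
    rewards from several users for a past play arrive together, and each
    batch is turned into one importance-weighted estimate.
    """

    def __init__(self, n_arms, eta):
        self.n_arms = n_arms
        self.eta = eta  # learning rate; the paper tunes this from T, M, N, d_max
        self.log_weights = [0.0] * n_arms  # log-space weights for stability

    def probabilities(self):
        # Softmax over log-weights, shifted by the max to avoid overflow.
        m = max(self.log_weights)
        w = [math.exp(lw - m) for lw in self.log_weights]
        s = sum(w)
        return [x / s for x in w]

    def select_arm(self):
        # Sample an arm and remember its play-time probability, which the
        # delayed-feedback update needs later.
        p = self.probabilities()
        arm = random.choices(range(self.n_arms), weights=p)[0]
        return arm, p[arm]

    def feed_delayed(self, arm, prob_at_play, rewards):
        """Incorporate feedback from several users for a past play of `arm`.

        `prob_at_play` is the probability the arm had when it was played,
        `rewards` are the per-user rewards in [0, 1] arriving this round.
        The importance-weighted estimator divides the mean user reward by
        the play-time probability, keeping the estimate unbiased.
        """
        est = sum(rewards) / (len(rewards) * prob_at_play)
        self.log_weights[arm] += self.eta * est
```

A round of use: call `select_arm()`, buffer the choice and its probability until the users' rewards arrive (after an arbitrary delay), then call `feed_delayed` with the batch. Keeping weights in log space is a standard EXP3 implementation choice that prevents overflow over long horizons.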