Exploiting the Natural Exploration in Contextual Bandits

arXiv (Cornell University), 2017

Citations: 0 | Views: 28
Abstract
The contextual bandit literature has traditionally focused on algorithms that address the exploration-exploitation trade-off. In particular, greedy policies that exploit current estimates without any exploration may be sub-optimal in general. However, exploration-free greedy policies are desirable in many practical settings where exploration may be prohibitively costly or unethical (e.g. clinical trials). We prove that, for a general class of context distributions, the greedy policy benefits from a natural exploration obtained from the varying contexts and becomes asymptotically rate-optimal for the two-armed contextual bandit. Through simulations, we also demonstrate that these results generalize to more than two arms if the dimension of contexts is large enough. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only observed contexts and rewards to determine whether to follow a greedy policy or to explore. We prove that this algorithm is asymptotically optimal without any additional assumptions on the distribution of contexts or the number of arms. Extensive simulations demonstrate that Greedy-First successfully reduces experimentation and outperforms existing (exploration-based) contextual bandit algorithms such as Thompson sampling, UCB, or $\epsilon$-greedy.
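To make the exploration-free greedy policy described above concrete, here is a minimal sketch of a greedy contextual bandit under an assumed linear reward model with per-arm ridge-regression estimates. The setup (number of arms K, context dimension d, horizon T, noise scale, and true_beta) is illustrative only and not taken from the paper; it shows how varying contexts alone can drive parameter learning without an explicit exploration step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: K arms, d-dimensional contexts, and an assumed
# linear reward model r = x @ beta_k + noise for arm k.
K, d, T = 2, 5, 2000
true_beta = rng.normal(size=(K, d))

# Per-arm ridge-regression statistics used for the greedy estimates.
A = np.stack([np.eye(d) for _ in range(K)])  # X^T X + I (regularized Gram matrix)
b = np.zeros((K, d))                         # X^T r

for t in range(T):
    x = rng.normal(size=d)  # observed context at round t

    # Current parameter estimates for each arm.
    beta_hat = np.stack([np.linalg.solve(A[k], b[k]) for k in range(K)])

    # Greedy policy: pull the arm with the highest estimated reward,
    # relying only on context variation for "natural" exploration.
    arm = int(np.argmax(beta_hat @ x))

    # Observe reward and update that arm's statistics.
    reward = x @ true_beta[arm] + rng.normal(scale=0.1)
    A[arm] += np.outer(x, x)
    b[arm] += reward * x

# Report how close the greedy estimates ended up to the true parameters.
errors = [float(np.linalg.norm(np.linalg.solve(A[k], b[k]) - true_beta[k]))
          for k in range(K)]
print("estimation error per arm:", [round(e, 3) for e in errors])
```

A Greedy-First-style wrapper would additionally monitor statistics computed from the observed contexts and rewards to decide whether this purely greedy loop is still learning adequately or whether to hand control to an exploration-based algorithm; that decision rule is not shown here.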
Keywords
contextual bandits, natural exploration