Weak Signal Asymptotics for Sequentially Randomized Experiments

Kuang Xu, Stefan Wager

Management Science (2023)

Abstract
We use the lens of weak signal asymptotics to study a class of sequentially randomized experiments, including those that arise in solving multiarmed bandit problems. In an experiment with n time steps, we let the mean reward gaps between actions scale to the order $1/\sqrt{n}$ so as to preserve the difficulty of the learning task as n grows. In this regime, we show that the sample paths of a class of sequentially randomized experiments, adapted to this scaling regime and with arm selection probabilities that vary continuously with state, converge weakly to a diffusion limit, given as the solution to a stochastic differential equation. The diffusion limit enables us to derive refined, instance-specific characterizations of stochastic dynamics and to obtain several insights on the regret and belief evolution of a number of sequential experiments, including Thompson sampling (but not the upper confidence bound algorithm, which does not satisfy our continuity assumption). We show that all sequential experiments whose randomization probabilities have a Lipschitz-continuous dependence on the observed data suffer from suboptimal regret performance when the reward gaps are relatively large. Conversely, we find that a version of Thompson sampling with an asymptotically uninformative prior variance achieves near-optimal instance-specific regret scaling, including with large reward gaps, but these good regret properties come at the cost of highly unstable posterior beliefs.
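As a rough illustration of the scaling regime described in the abstract, the Python sketch below (not the authors' code; the function name thompson_weak_signal and the parameters Delta, noise_sd, prior_var, and seed are illustrative assumptions) simulates a two-armed Gaussian Thompson sampling experiment in which the mean reward gap is set to Delta/sqrt(n), so the difficulty of the learning task is preserved as the horizon n grows.

```python
import numpy as np


def thompson_weak_signal(n=10_000, Delta=2.0, noise_sd=1.0, prior_var=1.0, seed=0):
    """Two-armed Gaussian Thompson sampling with a reward gap of
    Delta / sqrt(n), i.e., the weak-signal scaling (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    gap = Delta / np.sqrt(n)            # mean reward gap of order 1/sqrt(n)
    means = np.array([gap, 0.0])        # arm 0 is better by `gap`

    # Gaussian posterior over each arm's mean reward (noise variance known).
    post_mean = np.zeros(2)
    post_prec = np.full(2, 1.0 / prior_var)   # posterior precision = 1 / variance

    pulls = np.zeros(2, dtype=int)
    regret = 0.0
    for _ in range(n):
        # Randomized arm selection: draw from each posterior, play the argmax.
        # The selection probability varies continuously with the posterior state.
        draws = rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
        arm = int(np.argmax(draws))

        reward = rng.normal(means[arm], noise_sd)
        regret += means.max() - means[arm]
        pulls[arm] += 1

        # Conjugate Gaussian update for the pulled arm.
        new_prec = post_prec[arm] + 1.0 / noise_sd**2
        post_mean[arm] = (post_prec[arm] * post_mean[arm]
                          + reward / noise_sd**2) / new_prec
        post_prec[arm] = new_prec
    return regret, pulls


if __name__ == "__main__":
    for n in (1_000, 10_000, 100_000):
        regret, pulls = thompson_weak_signal(n=n)
        # Per-step regret is at most Delta / sqrt(n), so regret / sqrt(n) stays
        # bounded here; it is printed only as an illustrative normalization.
        print(f"n={n:>7}  regret={regret:8.3f}  "
              f"regret/sqrt(n)={regret / np.sqrt(n):.3f}  pulls={pulls}")
```

Letting prior_var shrink with n would mimic the asymptotically uninformative prior variance discussed in the abstract; the sketch keeps it fixed for simplicity.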
Keywords
diffusion approximation,multiarmed bandit,Thompson sampling