The Sample Complexity of Approximate Rejection Sampling with Applications to Smoothed Online Learning

arXiv (2023)

Abstract
Suppose we are given access to n independent samples from a distribution μ and we wish to output one of them, with the goal of making the output distributed as close as possible to a target distribution ν. In this work we show that the optimal total variation distance, as a function of n, is given by Θ̃(D/f'(n)) over the class of all pairs ν, μ with a bounded f-divergence D_f(ν‖μ) ≤ D. Previously, this question was studied only in the case where the Radon-Nikodym derivative of ν with respect to μ is uniformly bounded. We then consider an application in the seemingly very different field of smoothed online learning, where we show that recent results on the minimax regret and the regret of oracle-efficient algorithms still hold even under relaxed constraints on the adversary (bounded f-divergence, as opposed to a bounded Radon-Nikodym derivative). Finally, we also study the efficacy of importance sampling for mean estimates uniform over a function class and compare importance sampling with rejection sampling.
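For intuition, the following is a minimal sketch (not taken from the paper) of the baseline setting the abstract generalizes: selecting one of n i.i.d. samples from μ via rejection sampling, assuming the Radon-Nikodym derivative dν/dμ is computable and uniformly bounded by a known constant M. The function name and the Uniform/Beta example pair below are illustrative choices, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def approximate_rejection_sample(samples, density_ratio, M):
    """Accept candidate x with probability density_ratio(x) / M.

    An accepted candidate is exactly nu-distributed; the residual
    total-variation error comes from the event that all n candidates
    are rejected, in which case we fall back to the last one.
    """
    for x in samples:
        if rng.random() < density_ratio(x) / M:
            return x
    return samples[-1]  # every candidate rejected: this fallback causes the TV gap

# Illustrative pair with a bounded ratio: mu = Uniform(0, 1), nu = Beta(2, 2),
# so dnu/dmu(x) = 6 x (1 - x) <= 1.5 =: M.
mu_samples = rng.uniform(0.0, 1.0, size=100)
output = approximate_rejection_sample(mu_samples, lambda x: 6.0 * x * (1.0 - x), M=1.5)
print(output)
```

The paper's contribution is to replace the uniform bound M on dν/dμ with the weaker assumption of a bounded f-divergence D_f(ν‖μ) ≤ D and to characterize the resulting optimal total variation distance as a function of n.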
Keywords
approximate rejection sampling, sample complexity