Provably Efficient Reinforcement Learning with General Value Function Approximation

arXiv (2020)

Cited by 62 | Views 182
Abstract
Value function approximation has demonstrated phenomenal empirical success in reinforcement learning (RL). Nevertheless, despite recent progress on developing theory for RL with linear function approximation, the understanding of general function approximation schemes remains largely missing. In this paper, we establish the first provably efficient RL algorithm with general value function approximation. In particular, we show that if the value functions admit an approximation with a function class $\mathcal{F}$, our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$, where $d$ is a complexity measure of $\mathcal{F}$, $H$ is the planning horizon, and $T$ is the number of interactions with the environment. Our theory strictly generalizes recent progress on RL with linear function approximation and does not make explicit assumptions on the model of the environment. Moreover, our algorithm is model-free and provides a framework to justify algorithms used in practice.
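
To make the setting concrete, the following is a minimal, hypothetical sketch of optimistic value estimation with a function class $\mathcal{F}$, in the spirit of the abstract. It is not the authors' algorithm: the linear class `LinearFunctionClass`, the ridge-regression oracle, and the bonus parameter `beta` are illustrative assumptions standing in for a general class, its regression oracle, and a confidence width governed by the complexity measure $d$.

```python
import numpy as np

# Hypothetical sketch (not the paper's pseudocode): fit a value estimate over a
# function class F with a regression oracle, then act optimistically by adding
# an exploration bonus. Here F is a toy linear class and the oracle is ridge
# regression; both are assumptions for illustration.

class LinearFunctionClass:
    """Toy stand-in for a general value function class F over features phi(s, a)."""

    def __init__(self, dim: int, reg: float = 1.0):
        self.dim = dim
        self.reg = reg

    def fit(self, features: np.ndarray, targets: np.ndarray):
        """Least-squares regression oracle over F (ridge regression here)."""
        gram = features.T @ features + self.reg * np.eye(self.dim)
        theta = np.linalg.solve(gram, features.T @ targets)
        return theta, gram


def optimistic_q(theta: np.ndarray, gram: np.ndarray, phi: np.ndarray, beta: float) -> float:
    """Point estimate plus an exploration bonus (width of a confidence set).

    In the paper the bonus is controlled by a complexity measure d of F;
    here beta is simply a free parameter of the sketch.
    """
    mean = float(phi @ theta)
    width = beta * float(np.sqrt(phi @ np.linalg.solve(gram, phi)))
    return mean + width


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    # Fake regression data: features of past (state, action) pairs and
    # bootstrapped targets of the form r + max_a' Q(s', a').
    features = rng.normal(size=(50, d))
    targets = features @ rng.normal(size=d) + 0.1 * rng.normal(size=50)

    f_class = LinearFunctionClass(dim=d)
    theta, gram = f_class.fit(features, targets)

    phi_new = rng.normal(size=d)
    print("optimistic Q estimate:", optimistic_q(theta, gram, phi_new, beta=1.0))
```

With a linear class this reduces to an LSVI-UCB-style update; the point of the paper is that the same optimism-plus-regression template can be analyzed for a general class $\mathcal{F}$, with the regret scaling in a complexity measure $d$ of that class.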