Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation
arXiv (2024)
Abstract
We study infinite-horizon average-reward Markov decision processes (AMDPs) in
the context of general function approximation. Specifically, we propose a novel
algorithmic framework named Local-fitted Optimization with OPtimism (LOOP),
which incorporates both model-based and value-based incarnations. In
particular, LOOP features a novel construction of confidence sets and a
low-switching policy updating scheme, which are tailored to the average-reward
and function approximation setting. Moreover, for AMDPs, we propose a novel
complexity measure – average-reward generalized eluder coefficient (AGEC) –
which captures the challenge of exploration in AMDPs with general function
approximation. Such a complexity measure encompasses almost all previously
known tractable AMDP models, such as linear AMDPs and linear mixture AMDPs, and
also includes newly identified cases such as kernel AMDPs and AMDPs with
Bellman eluder dimensions. Using AGEC, we prove that LOOP achieves a sublinear
𝒪̃(poly(d, sp(V^*)) · √(Tβ)) regret, where d and β correspond to the AGEC and
the log-covering number of the hypothesis class respectively, sp(V^*) is the
span of the optimal state bias function, T denotes the number of steps, and
𝒪̃(·) hides logarithmic factors. When specialized to concrete AMDP models,
our regret bounds are comparable to those established by the existing
algorithms designed specifically for these special cases. To the best of our
knowledge, this paper presents the first comprehensive theoretical framework
capable of handling nearly all AMDPs.