Provably Efficient Partially Observable Risk-Sensitive Reinforcement Learning with Hindsight Observation
CoRR (2024)
Abstract
This work pioneers the regret analysis of risk-sensitive reinforcement
learning in partially observable environments with hindsight observation,
addressing a gap in the theoretical literature. We introduce a novel formulation that
integrates hindsight observations into a Partially Observable Markov Decision
Process (POMDP) framework, where the goal is to optimize accumulated reward
under the entropic risk measure. We develop the first provably efficient RL
algorithm tailored for this setting. Through rigorous analysis, we further prove that
our algorithm achieves polynomial regret
$\tilde{O}\!\left(\frac{e^{|\gamma|H}-1}{|\gamma|H}\, H^2 \sqrt{KHS^2OA}\right)$,
which outperforms or matches existing upper bounds when the model degenerates
to risk-neutral or fully observable settings. We adopt the change-of-measure
method and develop a novel analytical tool, beta vectors, to streamline the
mathematical derivations. These techniques are of independent interest for
the theoretical study of reinforcement learning.
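For concreteness, the entropic risk objective referenced above takes the standard form below (notation is ours and may differ from the paper's); the agent maximizes

```latex
% Entropic risk over an episode of horizon H with per-step rewards r_h.
% gamma != 0 is the risk parameter: gamma < 0 is risk-averse, gamma > 0 risk-seeking.
J_\gamma(\pi) = \frac{1}{\gamma}\,\log \mathbb{E}_\pi\!\left[\exp\Big(\gamma \sum_{h=1}^{H} r_h\Big)\right],
\qquad
\lim_{\gamma \to 0} J_\gamma(\pi) = \mathbb{E}_\pi\!\left[\sum_{h=1}^{H} r_h\right],
```

so the risk-neutral expected-return objective is recovered in the limit γ → 0.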
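As a quick numerical sanity check on that degenerate case (a minimal sketch of ours, not code from the paper), the risk prefactor (e^{|γ|H} − 1)/(|γ|H) in the regret bound tends to 1 as γ → 0, so the bound reduces to the risk-neutral rate Õ(H²√(KHS²OA)):

```python
import math

def risk_prefactor(gamma: float, H: int) -> float:
    """Prefactor (e^{|gamma| H} - 1) / (|gamma| H) from the regret bound."""
    x = abs(gamma) * H
    if x == 0.0:
        return 1.0  # limit as gamma -> 0: the risk-neutral case
    # math.expm1 computes e^x - 1 accurately for small x
    return math.expm1(x) / x

H = 10  # hypothetical horizon, for illustration only
for gamma in (0.0, 1e-6, 0.01, 0.1, 0.5):
    print(f"gamma = {gamma:<6}: prefactor = {risk_prefactor(gamma, H):.6f}")
```

For γ = 0 the prefactor is exactly 1, and it grows exponentially in |γ|H, which is the price of risk sensitivity in the bound.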