Recurrent Natural Policy Gradient for POMDPs
CoRR (2024)
Abstract
In this paper, we study a natural policy gradient method based on recurrent
neural networks (RNNs) for partially observable Markov decision processes,
whereby RNNs are used for policy parameterization and policy evaluation to
address the curse of dimensionality in non-Markovian reinforcement learning. We
present finite-time and finite-width analyses for both the critic (recurrent
temporal difference learning) and the corresponding recurrent natural policy
gradient method (the actor) in the near-initialization regime. Our analysis
demonstrates the efficiency of RNNs for problems with short-term memory, giving
explicit bounds on the required network widths and sample complexity, and
points out the challenges posed by long-term dependencies.
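
To make the setup concrete, below is a minimal, illustrative sketch of the policy-evaluation (critic) step the abstract describes: a recurrent network summarizes the observation history of a POMDP and is trained with a TD(0)-style update. This is not the paper's exact algorithm; the GRU architecture, layer sizes, learning rate, and the dummy trajectory are all assumptions made purely for illustration.

```python
# Sketch of a recurrent TD critic for a POMDP (illustrative, not the paper's method).
import torch
import torch.nn as nn

class RecurrentCritic(nn.Module):
    """RNN value function over observation histories (assumed architecture)."""
    def __init__(self, obs_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, obs_seq, h=None):
        out, h = self.rnn(obs_seq, h)            # hidden state summarizes the history
        return self.head(out).squeeze(-1), h     # value estimate at each time step

critic = RecurrentCritic(obs_dim=4)
opt = torch.optim.SGD(critic.parameters(), lr=1e-2)
gamma = 0.99

# One TD(0) step on a dummy trajectory of observations and rewards.
obs = torch.randn(1, 10, 4)   # (batch, time, obs_dim)
rew = torch.randn(1, 9)       # reward after each of the first 9 steps
values, _ = critic(obs)
td_target = rew + gamma * values[:, 1:].detach()  # bootstrap from next-step value
loss = (td_target - values[:, :-1]).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

In the actor-critic scheme the abstract outlines, a second RNN would parameterize the policy over histories, with its parameters updated by a natural policy gradient step preconditioned by the Fisher information; the critic above supplies the value estimates for that update.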