Optimizing for the Future in Non-Stationary MDPs

Yash Chandak, Georgios Theocharous, Shiv Shankar

Abstract:

Most reinforcement learning methods are based on the key assumption that the transition dynamics and reward functions are fixed, that is, the underlying Markov decision process (MDP) is stationary. However, in many real-world applications, this assumption is violated. We discuss how current methods can have inherent limitations...
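To make the setting concrete, the sketch below implements a toy non-stationary MDP whose reward function drifts across episodes while its transition dynamics stay fixed. This is a minimal illustration of the problem the abstract describes, not the paper's method: the class name, interface, and sinusoidal drift schedule are all assumptions chosen for demonstration.

```python
import numpy as np

class NonStationaryMDP:
    """Toy 2-state MDP whose reward function drifts over time.

    Hypothetical example: the setting only requires that rewards
    and/or dynamics change across episodes; the sinusoidal drift
    used here is an illustrative assumption, not from the paper.
    """

    def __init__(self, drift_rate=0.01):
        self.drift_rate = drift_rate
        self.episode = 0   # time index driving the non-stationarity
        self.state = 0

    def reset(self):
        self.episode += 1  # each episode, the reward function shifts
        self.state = 0
        return self.state

    def step(self, action):
        # Stationary transition dynamics: action 1 moves to state 1.
        self.state = 1 if action == 1 else 0
        # Non-stationary reward: the better action slowly flips,
        # so a policy tuned to past data lags behind the present.
        phase = np.sin(self.drift_rate * self.episode)
        reward = phase if self.state == 1 else -phase
        done = True        # one-step episodes, for simplicity
        return self.state, reward, done


env = NonStationaryMDP()
for _ in range(3):
    env.reset()
    _, r, _ = env.step(action=1)
    print(f"episode {env.episode}: reward for action 1 = {r:.3f}")
```

Under this drift, an agent that optimizes only for performance estimated from past episodes will eventually prefer the wrong action, which is the failure mode that motivates optimizing a forecast of future performance instead.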
