Contextual Policy Transfer in Reinforcement Learning Domains via Deep Mixtures-of-Experts

UAI (2021)

Abstract
In reinforcement learning, agents that consider the context, or current state, when selecting source policies for transfer have been shown to outperform context-free approaches. However, none of the existing approaches transfer knowledge contextually from model-based learners to a model-free learner. This could be useful, for instance, when source policies are intentionally learned on diverse simulations with plentiful data but transferred to a real-world setting with limited data. In this paper, we assume knowledge of estimated source task dynamics and policies, and that source and target tasks share common sub-goals but differ in their dynamics. We introduce a novel deep mixture-of-experts formulation for learning state-dependent beliefs over source task dynamics that match the target dynamics, using state trajectories collected from the target task. The mixture model is easy to interpret, is robust to estimation errors in the dynamics, and is compatible with most learning algorithms. We then show how this model can be incorporated into standard policy reuse frameworks, and demonstrate its effectiveness on benchmarks from OpenAI Gym.
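To make the formulation concrete, below is a minimal sketch of the kind of deep mixture-of-experts described in the abstract: a state-dependent gating network outputs a belief over K estimated source dynamics models, and the gate is trained by maximizing the likelihood of target-task transitions under the mixture. This is an illustrative reconstruction, not the paper's implementation; PyTorch, the Gaussian observation model with fixed noise, and the names GatingNetwork, moe_nll, and the source models f_k are all assumptions.

```python
import torch
import torch.nn as nn

class GatingNetwork(nn.Module):
    """State-dependent gate: maps a state to a belief over K source tasks.

    Hypothetical architecture; the paper only specifies that beliefs are
    state-dependent and learned from target-task state trajectories.
    """
    def __init__(self, state_dim, n_sources, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_sources),
        )

    def forward(self, state):
        # Belief b(k | s) over the K source task dynamics.
        return torch.softmax(self.net(state), dim=-1)

def moe_nll(gate, source_models, batch, sigma=0.1):
    """Negative log-likelihood of target transitions under the mixture.

    source_models: list of K callables f_k(s, a) -> predicted next state,
    standing in for the estimated source task dynamics. The Gaussian
    observation model with fixed sigma is an illustrative assumption.
    """
    s, a, s_next = batch
    beliefs = gate(s)                      # (B, K)
    log_liks = []
    for f in source_models:
        pred = f(s, a)
        # log N(s' | f_k(s, a), sigma^2 I), additive constants dropped.
        log_liks.append(-((s_next - pred) ** 2).sum(-1) / (2 * sigma ** 2))
    log_liks = torch.stack(log_liks, dim=-1)  # (B, K)
    # -log sum_k b(k | s) p_k(s' | s, a), computed stably via logsumexp.
    return -torch.logsumexp(beliefs.log() + log_liks, dim=-1).mean()
```

Under this reading, the learned belief b(k | s) would then drive policy reuse: at state s, the agent can weight or select the source policy associated with the dynamics model the gate currently favors, which is one way to realize the state-dependent transfer the abstract describes.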
Keywords
Reinforcement learning, Robustness (computer science), Mixture model, Reuse, Machine learning, Policy transfer, Computer science, Artificial intelligence