Meta Exploration for Model-Agnostic Reinforcement Learning

Swaminathan Gurumurthy, Sumit Kumar

Semantic Scholar (2019)

Abstract
Meta-reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks given only a few examples. In such settings, efficient exploration strategies capable of finding the most useful samples become critical. Existing approaches add auxiliary objectives that promote exploration by the pre-update policy; however, this makes adaptation within a few gradient steps difficult, because the pre-update (exploration) and post-update (exploitation) policies end up being quite different. Instead of forcing a single policy to serve both roles, we propose to explicitly model a separate exploration policy, with task-specific variables z, for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. We also use the DiCE operator to ensure that the gradients with respect to z can be properly back-propagated. We show that using self-supervised or supervised learning objectives for adaptation stabilizes the training process, and we demonstrate the superior performance of our model compared to prior work in this domain.
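For readers unfamiliar with the DiCE operator mentioned above, the following is a minimal sketch of how it is commonly implemented (Foerster et al.'s "magic box") and how it can weight per-timestep returns so that score-function gradients flow back through sampled actions. This is an illustrative PyTorch snippet, not code from the paper; the function names `magic_box` and `dice_objective`, and the discounting details, are generic choices for exposition.

```python
import torch

def magic_box(log_probs_sum):
    # DiCE "magic box": evaluates to 1 in the forward pass, but its gradient
    # reproduces the score-function (REINFORCE) gradient of any cost it multiplies.
    return torch.exp(log_probs_sum - log_probs_sum.detach())

def dice_objective(log_probs, rewards, gamma=0.99):
    # log_probs, rewards: 1-D tensors of length T for a single trajectory.
    # Each discounted reward r_t is weighted by the magic box of the cumulative
    # log-probability of the actions taken up to time t, so differentiating this
    # objective yields gradients for the policy (and any variables, such as z,
    # that the action log-probabilities depend on).
    T = rewards.shape[0]
    discounts = gamma ** torch.arange(T, device=rewards.device, dtype=rewards.dtype)
    cum_log_probs = torch.cumsum(log_probs, dim=0)
    return (magic_box(cum_log_probs) * discounts * rewards).sum()
```

In this sketch, maximizing `dice_objective` with a standard autograd backward pass produces the same gradient as a hand-derived policy-gradient estimator, which is what allows gradients of the task-specific variables z to be back-propagated through the exploration policy's sampled trajectories.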