Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices

International Conference on Machine Learning (ICML), Vol. 139, 2021

Cited by 60 | 41,192 views
Abstract
The goal of meta-reinforcement learning (meta-RL) is to build agents that can quickly learn new tasks by leveraging prior experience on related tasks. Learning a new task often requires both exploring to gather task-relevant information and exploiting this information to solve the task. In principle, optimal exploration and exploitation can be learned end-to-end by simply maximizing task performance. However, such meta-RL approaches struggle with local optima due to a chicken-and-egg problem: learning to explore requires good exploitation to gauge the exploration's utility, but learning to exploit requires information gathered via exploration. Optimizing separate objectives for exploration and exploitation can avoid this problem, but prior meta-RL exploration objectives yield suboptimal policies that gather information irrelevant to the task. We alleviate both concerns by constructing an exploitation objective that automatically identifies task-relevant information and an exploration objective to recover only this information. This avoids local optima in end-to-end training, without sacrificing optimal exploration. Empirically, our method, DREAM, substantially outperforms existing approaches on complex meta-RL problems, such as sparse-reward 3D visual navigation. Videos of DREAM: https://eiliu.github.io/dream/
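As a rough illustration of the decoupling described in the abstract, the sketch below is not the paper's implementation; the module names, tensor shapes, and the value-regression stand-in for the RL objective are assumptions. It trains a task embedding only through an exploitation-style loss, and trains a separate trajectory encoder to recover that embedding from exploration data, so exploration is credited only for gathering task-relevant information.

import torch
import torch.nn as nn

EMBED_DIM, NUM_TASKS, TRAJ_FEAT = 16, 8, 32   # hypothetical sizes

# Exploitation side: a task embedding z and a value head trained from task reward.
task_embedding = nn.Embedding(NUM_TASKS, EMBED_DIM)
value_head = nn.Linear(EMBED_DIM, 1)

# Exploration side: an encoder of exploration trajectories trained to recover z.
traj_encoder = nn.Sequential(nn.Linear(TRAJ_FEAT, 64), nn.ReLU(),
                             nn.Linear(64, EMBED_DIM))

def exploitation_loss(returns, z):
    # Stand-in for an RL objective (e.g. Q-learning with a z-conditioned policy):
    # gradients reach the task embedding only through task performance.
    return ((value_head(z).squeeze(-1) - returns) ** 2).mean()

def exploration_loss(traj_features, z):
    # Exploration trajectories are scored by how well their encoding matches the
    # (detached) task embedding, i.e. by the task-relevant information they recover.
    z_hat = traj_encoder(traj_features)
    return ((z_hat - z.detach()) ** 2).sum(dim=-1).mean()

# Dummy batch standing in for environment rollouts.
task_ids = torch.randint(0, NUM_TASKS, (4,))
returns = torch.randn(4)                 # returns of the z-conditioned exploitation policy
traj_feats = torch.randn(4, TRAJ_FEAT)   # features of exploration episodes

z = task_embedding(task_ids)
loss = exploitation_loss(returns, z) + exploration_loss(traj_feats, z)
loss.backward()
print(float(loss))

Detaching the embedding in the exploration loss mirrors the decoupling: the exploitation objective alone decides what counts as task-relevant, and exploration only learns to recover it, which avoids the chicken-and-egg coupling of end-to-end training.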
Keywords
Reinforcement learning, Exploitation, Local optimum, Machine learning, Computer science, Decoupling, Artificial intelligence, Visual navigation