Adversarially Guided Actor-Critic

ICLR (2021)

Abstract
Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments. These methods consider a policy (the actor) and a value (the critic) whose respective losses are obtained using different motivations and approaches. We introduce a third protagonist, the adversary. While this adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor maximizes the log-probability difference between its action and that of the adversary in combination with maximizing expected rewards. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a set of various hard-exploration and procedurally-generated tasks.
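The interplay between the actor and the adversary described above can be illustrated with a short sketch. The following PyTorch snippet is only a minimal, illustrative rendering of the two losses implied by the abstract, not the authors' implementation; the function name `agac_losses`, the coefficient `c_adv`, and the tensor layout are assumptions made for the example.

```python
# Minimal sketch of the AGAC-style objectives (illustrative, not the
# paper's reference implementation). Assumes discrete actions and
# batched logits from separate actor and adversary networks.
import torch
import torch.nn.functional as F

def agac_losses(actor_logits, adversary_logits, actions, advantages,
                c_adv=0.5):
    """Compute actor and adversary losses for one batch of transitions.

    actor_logits:     (B, A) action logits from the actor
    adversary_logits: (B, A) action logits from the adversary
    actions:          (B,)   actions taken by the actor (LongTensor)
    advantages:       (B,)   advantage estimates (e.g. from GAE)
    c_adv:            weight of the adversarial bonus (assumed name)
    """
    actor_log_probs = F.log_softmax(actor_logits, dim=-1)
    adversary_log_probs = F.log_softmax(adversary_logits, dim=-1)

    # Log-probability difference for the actions actually taken:
    # the actor is encouraged to act in ways the adversary fails to predict.
    lp_actor = actor_log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    lp_adversary = adversary_log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    action_novelty = lp_actor - lp_adversary

    # Actor: policy-gradient term on the advantage augmented with the
    # adversarial bonus (bonus detached so it acts as a reward signal).
    modified_advantage = advantages + c_adv * action_novelty.detach()
    actor_loss = -(lp_actor * modified_advantage).mean()

    # Adversary: minimize KL(actor || adversary) so it mimics the actor.
    adversary_loss = F.kl_div(adversary_log_probs,
                              actor_log_probs.detach(),
                              log_target=True,
                              reduction='batchmean')

    return actor_loss, adversary_loss
```

In this reading, the adversary is trained purely to imitate the actor, while the actor's return-maximizing update is biased toward actions whose probability under its own policy exceeds the adversary's prediction, which is what drives the exploratory behavior the abstract describes.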
Keywords
actor-critic