Experimental Evidence that Empowerment May Drive Exploration in Sparse-Reward Environments

2021 IEEE International Conference on Development and Learning (ICDL), 2021

Abstract
Reinforcement Learning (RL) is known to be often unsuccessful in environments with sparse extrinsic rewards. A possible countermeasure is to endow RL agents with an intrinsic reward function, or ‘intrinsic motivation’, which rewards the agent based on certain features of the current sensor state. An intrinsic reward function based on the principle of empowerment assigns rewards proportional to the amount of control the agent has over its own sensors. We implemented a variation on a recently proposed intrinsically motivated agent, which we refer to as the ‘curious’ agent, and an empowerment-inspired agent. The former leverages sensor state encoding with a variational autoencoder, while the latter predicts the next sensor state via a variational information bottleneck. We compared the performance of both agents to that of an advantage actor-critic baseline in four sparse reward grid worlds. Both the empowerment agent and its curious competitor seem to benefit to similar extents from their intrinsic rewards. This provides some experimental support to the conjecture that empowerment can be used to drive exploration.
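The abstract describes augmenting a sparse extrinsic reward with an intrinsic bonus computed from the current sensor state. As an illustration only, the sketch below combines the two rewards using a simple count-based novelty bonus as a stand-in for the intrinsic term; the paper's actual agents instead derive the bonus from a variational autoencoder (the 'curious' agent) or a variational information bottleneck (the empowerment agent), and the `beta` weight and `combined_reward` helper here are hypothetical names, not from the paper.

```python
from collections import defaultdict

def combined_reward(r_ext, state, counts, beta=0.1):
    """Return the extrinsic reward plus a weighted intrinsic bonus.

    Count-based novelty is used here purely as a placeholder intrinsic
    reward: rarely visited states earn a larger bonus, which decays as
    the state is revisited. The paper's agents compute this term from
    learned models (VAE / information bottleneck) instead.
    """
    counts[state] += 1
    r_int = 1.0 / (counts[state] ** 0.5)  # diminishes with visit count
    return r_ext + beta * r_int

# Example in a grid world with zero extrinsic reward (the sparse case):
counts = defaultdict(int)
r_first = combined_reward(0.0, (0, 0), counts)   # first visit: full bonus
r_second = combined_reward(0.0, (0, 0), counts)  # revisit: smaller bonus
```

In the sparse-reward setting the extrinsic term is almost always zero, so the intrinsic bonus is what shapes the actor-critic updates early in training, which is the mechanism the paper's experiments probe.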
Keywords
Advantage Actor-Critic, Empowerment, Information Bottleneck, Information Gain, Intrinsic Motivation, Reinforcement Learning, Variational Autoencoder