Action Spaces in Deep Reinforcement Learning to Mimic Human Input Devices.

CoG (2019)

Cited by 3 | Viewed 12
Abstract
Enabling agents to play video games in general requires implementing a common action space that mimics human input devices such as a gamepad. Such an action space has to support concurrent discrete and continuous actions. To address this problem, this work investigates three approaches to applying concurrent discrete and continuous actions in Deep Reinforcement Learning (DRL). One approach applies a threshold to discretize a continuous action, another divides a continuous action into multiple discrete actions (buckets), and the third creates a multiagent that combines both kinds of action. These approaches are benchmarked on two novel environments. In the first environment (Shooting Birds), the agent's goal is to accurately shoot birds by controlling a cross-hair. The second environment is a simplification of the game Beastly Rivals Onslaught, where the agent is in charge of its controlled character's survival. Across multiple experiments, the bucket approach is recommended because it trains faster than the multiagent and is more stable than the threshold approach. Building on the contributions of this paper, follow-up work can begin training agents using visual observations.
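As a rough sketch (not taken from the paper), the threshold and bucket approaches described in the abstract could look as follows; the threshold value, bucket count, and value range are illustrative assumptions:

```python
def threshold_action(value, threshold=0.5):
    # Threshold approach: a continuous network output (e.g. in [-1, 1])
    # is discretized to a binary action such as fire / don't fire.
    # The threshold of 0.5 is an assumed hyperparameter.
    return 1 if value > threshold else 0

def bucket_action(index, num_buckets=11, low=-1.0, high=1.0):
    # Bucket approach: the agent chooses one of `num_buckets` discrete
    # actions; each index maps back to an evenly spaced continuous value
    # in [low, high], e.g. a cross-hair movement axis.
    return low + index * (high - low) / (num_buckets - 1)
```

With 11 buckets over [-1, 1], index 0 maps to -1.0, index 5 to 0.0, and index 10 to 1.0, so the discrete policy head can stand in for a continuous stick axis.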
Keywords
common action space, concurrent discrete actions, continuous actions, multiple discrete actions, bucket approach, threshold approach, deep reinforcement learning, human input devices, video games, multiagent, Beastly Rivals Onslaught game, Shooting Birds game