Combining Monte Carlo tree search and apprenticeship learning for capture the flag

IEEE Conference on Computational Intelligence and Games (2015)

Abstract
In this paper we introduce a novel approach to agent control in competitive video games which combines Monte Carlo Tree Search (MCTS) and Apprenticeship Learning (AL). More specifically, an opponent model created through AL is used during the expansion phase of the Upper Confidence Bounds for Trees (UCT) variant of MCTS. We show how this approach can be applied to a game of Capture the Flag (CTF), an environment which is both non-deterministic and partially observable. The performance gain of a controller that uses an opponent model learned via AL, compared to a controller using plain UCT, is demonstrated with both win/loss ratios and TrueSkill rankings. Additionally, we build on previous findings by providing evidence of a bias towards a particular style of play in the AI Sandbox CTF environment. We believe that the approach highlighted here can be extended to a wider range of games beyond CTF.
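
As a rough illustration of the approach the abstract describes (not the authors' implementation, which is not detailed here), the sketch below shows one way a learned opponent model could be consulted during the expansion phase of UCT: after a node is expanded with one of the agent's untried actions, the AL-derived model predicts the opponent's likely reply rather than sampling it uniformly. All names and interfaces (Node, the GameState methods legal_actions/apply/rollout, and opponent_model.predict_action) are hypothetical placeholders.

```python
import math

UCT_C = 1.41  # exploration constant for UCB1

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state            # assumed game-state object
        self.parent = parent
        self.action = action          # action that led to this node
        self.children = []
        self.visits = 0
        self.value = 0.0
        self.untried = list(state.legal_actions())

    def uct_select(self):
        # Standard UCB1 selection over already-expanded children.
        return max(
            self.children,
            key=lambda c: c.value / c.visits
            + UCT_C * math.sqrt(math.log(self.visits) / c.visits),
        )

    def expand(self, opponent_model):
        # Expansion phase: apply one of our untried actions, then let the
        # learned opponent model (from apprenticeship learning) choose the
        # opponent's reply instead of picking it at random.
        action = self.untried.pop()
        next_state = self.state.apply(action)
        opp_action = opponent_model.predict_action(next_state)
        next_state = next_state.apply(opp_action)
        child = Node(next_state, parent=self, action=action)
        self.children.append(child)
        return child

def uct_search(root_state, opponent_model, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = node.uct_select()
        # Expansion, guided by the opponent model.
        if node.untried:
            node = node.expand(opponent_model)
        # Simulation: assumed rollout() plays out to a terminal/cutoff reward.
        reward = node.state.rollout()
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited root action.
    return max(root.children, key=lambda c: c.visits).action
```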
Keywords
Monte Carlo tree search, MCTS, apprenticeship learning, AL, agent control, competitive video games, upper confidence bounds for trees, UCT, capture the flag game, AI Sandbox CTF environment