Monte Carlo Continual Resolving for Online Strategy Computation in Imperfect Information Games

AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (2019)

Abstract
Online game playing algorithms produce high-quality strategies with a fraction of the memory and computation required by their offline alternatives. Continual Resolving (CR) is a recent, theoretically sound approach to online game playing that has been used to outperform human professionals in poker. However, parts of the algorithm were specific to poker, which enjoys many properties not shared by other imperfect information games. We present a domain-independent formulation of CR, applicable to any two-player zero-sum extensive-form game, that works with an abstract resolving algorithm. We further describe and implement its Monte Carlo variant (MCCR), which uses Monte Carlo Counterfactual Regret Minimization (MCCFR) as a resolver. We prove the correctness of CR and show an $O(T^{-1/2})$-dependence of MCCR's exploitability on the computation time. Furthermore, we present an empirical comparison of MCCR with incremental tree building to Online Outcome Sampling and Information-set MCTS on several domains.
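The abstract names MCCFR as the resolving algorithm inside CR. As background only, the sketch below illustrates the regret-matching rule that CFR-family methods (including MCCFR) use to turn accumulated regrets into a strategy, plus the action sampling step used by outcome-sampling variants. This is a minimal, self-contained illustration under assumed names (`regret_matching`, `sample_action`); it is not the paper's implementation of MCCR.

```python
# Minimal sketch of regret matching, the per-information-set update used by
# CFR-style algorithms such as MCCFR. Illustration only; names are hypothetical.
from typing import List
import random

def regret_matching(cumulative_regret: List[float]) -> List[float]:
    """Return a strategy proportional to the positive parts of the regrets."""
    positive = [max(r, 0.0) for r in cumulative_regret]
    total = sum(positive)
    if total > 0.0:
        return [p / total for p in positive]
    # No positive regret: fall back to the uniform strategy.
    n = len(cumulative_regret)
    return [1.0 / n] * n

def sample_action(strategy: List[float]) -> int:
    """Sample one action index from a strategy, as in outcome sampling."""
    return random.choices(range(len(strategy)), weights=strategy, k=1)[0]

# Example: three actions whose accumulated regrets favor the first two.
regrets = [4.0, 2.0, -1.0]
strategy = regret_matching(regrets)
print(strategy)                 # -> [0.666..., 0.333..., 0.0]
print(sample_action(strategy))  # -> 0 or 1
```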
Keywords
counterfactual regret minimization,resolving,imperfect information,Monte Carlo,online play,extensive-form games,Nash equilibrium