Evaluating state-space abstractions in extensive-form games

AAMAS (2013)

Abstract
Efficient algorithms exist for finding optimal policies in extensive-form games. However, human-scale problems are typically so large that this computation remains infeasible with modern computing resources. State-space abstraction techniques allow for the derivation of a smaller and strategically similar abstract domain, in which an optimal strategy can be computed and then used as a suboptimal strategy in the real domain. In this paper, we consider the task of evaluating the quality of an abstraction, independent of a specific abstract strategy. In particular, we use a recent metric for abstraction quality and examine imperfect recall abstractions, in which agents "forget" previously observed information to focus the abstraction effort on more recent and relevant state information. We present experimental results in the domain of Texas hold'em poker that validate the use of distribution-aware abstractions over expectation-based approaches, demonstrate that the new metric better predicts tournament performance, and show that abstractions built using imperfect recall outperform those built using perfect recall in terms of both exploitability and one-on-one play.
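To make the distinction between expectation-based and distribution-aware abstractions concrete, here is a minimal, illustrative sketch (not the paper's actual algorithm or code). It assumes a hypothetical array `strength_histograms`, where row i is hand i's probability distribution over final hand strength. The expectation-based approach collapses each hand to its expected strength and buckets by percentile, so hands with equal means but very different variances (e.g., a made hand vs. a draw) land in the same bucket; the distribution-aware approach clusters the full histograms under earth mover's distance, letting such hands separate. The k-means-style loop and the arithmetic-mean centroid update are simplifications chosen for brevity.

```python
import numpy as np
from scipy.stats import wasserstein_distance


def expectation_based_buckets(strength_histograms, n_buckets):
    """Expectation-based: reduce each hand to E[strength], bucket by percentile."""
    support = np.linspace(0.0, 1.0, strength_histograms.shape[1])
    means = strength_histograms @ support          # expected final strength per hand
    edges = np.quantile(means, np.linspace(0, 1, n_buckets + 1)[1:-1])
    return np.digitize(means, edges)               # bucket index per hand


def distribution_aware_buckets(strength_histograms, n_buckets, iters=20, seed=0):
    """Distribution-aware: k-means-style clustering of whole histograms
    under earth mover's distance, so same-mean, different-shape hands
    can fall into different buckets. Hypothetical sketch, not the paper's code."""
    rng = np.random.default_rng(seed)
    support = np.linspace(0.0, 1.0, strength_histograms.shape[1])
    # Initialize cluster centers from random hands (fancy indexing copies).
    centers = strength_histograms[
        rng.choice(len(strength_histograms), n_buckets, replace=False)
    ].astype(float)
    for _ in range(iters):
        # 1D EMD between each hand's histogram and each center.
        dists = np.array([[wasserstein_distance(support, support, h, c)
                           for c in centers] for h in strength_histograms])
        labels = dists.argmin(axis=1)
        for k in range(n_buckets):
            members = strength_histograms[labels == k]
            if len(members):
                # Arithmetic mean as a cheap stand-in for the EMD barycenter.
                centers[k] = members.mean(axis=0)
    return labels
```

Under these assumptions, the two functions produce the same number of buckets but partition hands differently: the second preserves information about the shape of each hand's future-strength distribution, which is the property the abstract credits for the improved tournament performance and exploitability results.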
Keywords
similar abstract domain,real domain,perfect recall,abstraction quality,state-space abstraction,state-space abstraction technique,imperfect recall abstraction,abstraction effort,distribution-aware abstraction,extensive-form game,optimal strategy,imperfect recall