Performance Evaluation Gaps in a Real-Time Strategy Game Between Human and Artificial Intelligence Players

IEEE Access (2018)

Abstract
Since 2010, annual StarCraft artificial intelligence (AI) competitions have promoted the development of successful AI players for complex real-time strategy games. In these competitions, AI players are ranked by their win ratio over thousands of head-to-head matches. Although simple and easy to implement, this evaluation scheme may be less effective at fostering human-competitive AI players. In this paper, we recruited 45 human StarCraft players at different expertise levels (expert/medium/novice) and asked them to play against the 18 top AI players selected from five years of competitions (2011-2015). The results show that human evaluations of AI players differ substantially from the current standard evaluation and ranking method. In fact, from a human standpoint, there has been little progress in the quality of StarCraft AI players over the years. AI-only tournaments may even produce AI players that humans find unacceptable as competitors. This paper is the first to systematically explore the human evaluation of AI players, the evolution of AI players, and the differences between human perception and tournament-based evaluations. The findings can help AI developers in game companies and AI tournament organizers better incorporate the perspective of human users into their AI systems.
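For readers unfamiliar with the tournament scoring the abstract refers to, the following is a minimal Python sketch of ranking players by win ratio over head-to-head match records. It is illustrative only: the bot names, the match data, and the rank_by_win_ratio helper are hypothetical and are not taken from the competition software or the paper.

```python
from collections import defaultdict

def rank_by_win_ratio(matches):
    """Rank players by win ratio, given (winner, loser) match records."""
    wins = defaultdict(int)
    games = defaultdict(int)
    for winner, loser in matches:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    ratios = {player: wins[player] / games[player] for player in games}
    # Highest win ratio first, as in the tournament-style ranking described above.
    return sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical results from a small round-robin between three bots.
    matches = [
        ("BotA", "BotB"), ("BotA", "BotC"),
        ("BotB", "BotC"), ("BotC", "BotB"),
        ("BotA", "BotB"),
    ]
    for player, ratio in rank_by_win_ratio(matches):
        print(f"{player}: {ratio:.2f}")
```

The paper's point is that a leaderboard produced this way need not reflect how acceptable or competitive the same bots feel to human opponents.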
Keywords
Video game, StarCraft, game, artificial intelligence, game AI competition, human factors, human-computer interaction