Dagstuhl Seminar 19511

Artificial and Computational Intelligence in Games: Revolutions in Computational Game AI

(December 15 – 20, 2019)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/19511

Organizers
  • Jialin Liu (Southern University of Science and Technology - Shenzhen, CN)
  • Tom Schaul (Google DeepMind - London, GB)
  • Pieter Spronck (Tilburg University, NL)
  • Julian Togelius (New York University, US)

Motivation

The 2016 success of Google DeepMind's AlphaGo, which defeated the Go world champion, and of its follow-up program AlphaZero has rekindled the general public's interest in computational game playing. Moreover, game AI researchers are building upon these results to construct stronger game AI implementations. While there is great enthusiasm for the rapid advances to the state-of-the-art in game AI, most researchers realize that these advances do not suffice to solve many of the challenges in game AI which have been recognized for decades. This Dagstuhl Seminar aims at getting a clear view of the unsolved problems in game AI, determining which problems remain outside the reach of the state-of-the-art, and coming up with novel approaches to game AI construction to deal with these unsolved problems.

Among the challenges that the seminar will tackle are the following:


  • Determining the limitations of MCTS and deep learning for computational game playing
  • Defining more appropriate game complexity measures, which accurately reflect the difficulty of creating an AI that achieves superhuman play
  • Learning game playing under adverse conditions, such as imperfect information, continuous action spaces, and deceptive rewards
  • Implementing computational game AI for games with 3 or more players
  • Implementing AI for games with open-ended action spaces, such as interactive fiction and table-top roleplaying games
  • Going beyond MCTS in general game playing
  • General video game playing
  • Computation for human-like play

The seminar will consist partly of discussion groups, and partly of practical work on implementing AI. Apart from a daily round-up meeting, plenary talks will be kept to a minimum. The seminar welcomes both academic researchers and industry practitioners.

Copyright Jialin Liu, Tom Schaul, Pieter Spronck, and Julian Togelius


Summary

The past decade has seen a rapid advent of new technologies in computational game playing. For a long time, artificial intelligence (AI) for 2-player deterministic board games was mainly implemented using tree search, employing the minimax algorithm with alpha-beta pruning, enhanced with improvements that were often aimed at particular games. This approach worked well for most traditional games, but some games proved notoriously hard to tackle in this way. The textbook example of a game for which regular tree search is inadequate is Go.
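As a point of reference for this classic approach, the following is a minimal sketch of minimax search with alpha-beta pruning. The state interface it assumes (legal_moves, apply, is_terminal, evaluate) is a hypothetical stand-in for a concrete game implementation; a real engine would add enhancements such as move ordering and transposition tables.

```python
# Minimal sketch of minimax with alpha-beta pruning. The state interface
# (legal_moves, apply, is_terminal, evaluate) is a hypothetical stand-in
# for a concrete game implementation.

def alphabeta(state, depth, alpha, beta, maximizing):
    """Return the minimax value of `state`, pruning branches that
    cannot influence the final decision."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()  # heuristic score, from the maximizing player's view
    if maximizing:
        value = float("-inf")
        for move in state.legal_moves():
            value = max(value, alphabeta(state.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: the minimizing player avoids this branch
                break
        return value
    else:
        value = float("inf")
        for move in state.legal_moves():
            value = min(value, alphabeta(state.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: the maximizing player avoids this branch
                break
        return value
```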

Ten years ago, the novel technique of Monte Carlo Tree Search (MCTS) became popular, as it was shown that MCTS significantly improved the quality of AI for Go, albeit not yet to the level of top human players. Many experts predicted that Go AI would surpass human-level play only around 2030. To the surprise of many, however, Google's AlphaGo defeated the human world champion in Go as early as 2016, using a combination of MCTS and deep convolutional networks to evaluate Go board positions and perform move selection. The networks were trained on millions of examples of human play, combined with self-play. A short while later, AlphaZero demonstrated that self-play by itself suffices to train the networks to the necessary quality.
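How the tree statistics and the network outputs are combined during search can be sketched with a simplified PUCT-style selection rule in the spirit of AlphaGo and AlphaZero. This is not their exact implementation; the node fields (prior, visits, value_sum, children) and the exploration constant are assumptions of this sketch.

```python
import math

C_PUCT = 1.5  # exploration constant; an assumed value, tuned in practice

def select_child(node):
    """Pick the child maximizing Q + U, where Q is the value estimate
    averaged over simulations and U scales the policy network's prior
    by how unexplored the child still is (simplified PUCT rule)."""
    total_visits = sum(child.visits for child in node.children)

    def puct(child):
        q = child.value_sum / child.visits if child.visits else 0.0
        u = C_PUCT * child.prior * math.sqrt(total_visits) / (1 + child.visits)
        return q + u

    return max(node.children, key=puct)
```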

There is a long history of research into computational game AI for 2-player deterministic board games. Since the turn of the century, however, computational techniques have also been applied to games of a different nature, such as games for 3 or more players, games with imperfect information, and video games. Such games bring their own challenges, and often need very different approaches for creating game AI. Nevertheless, computational techniques may be applicable. Recent successes have been achieved in playing Poker (multiple players, imperfect information) and DotA (a team-based video game). Deep learning techniques have been used to teach a game AI to play old Atari video games, as well as the highly complex game Doom, by observing only the screen.

These computational approaches to AI game playing have been highly successful, and have generated great enthusiasm among researchers and laymen alike. However, while they have opened up new possibilities for implementing strong game AI, they are by no means a one-size-fits-all solution to all problems in computational game playing. The aim of the seminar was to build upon the foundations laid by the state-of-the-art in computational game playing, and (1) identify for which game AI problems the current state-of-the-art is inadequate or unsuitable, including the reasons why; (2) propose and investigate improvements to the state-of-the-art that may open up ways to apply it to a wider range of game AI problems; and (3) form ideas on which novel techniques may be employed to solve problems for which the current state-of-the-art is simply not suitable.

For the purpose of the seminar, a "game" is any simulated environment in which decisions can be taken in order to achieve a particular goal. This includes board games, card games, video games, simulations, and VR/AR applications. Decisions in a game are taken by "players." In multi-player games, the goal is usually to "win" against the other players by reaching a pre-defined victory condition before any other player manages to do so. "Game AI" is a computer-controlled player. Good game AI takes decisions which are highly effective in achieving the goal. Cooperative games, in which the players work together to share in a victory, are also of interest. We wish to point out that games are often a reflection of some aspects of the real world, and allow investigating those aspects in a risk-free environment - good solutions for problems found in games may therefore have immediate applications in the real world.
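Read operationally, this working definition can be captured in a small interface. The method names below are purely illustrative, not a standard API:

```python
from abc import ABC, abstractmethod

class Game(ABC):
    """Illustrative interface matching the seminar's working definition:
    a simulated environment in which players take decisions in order to
    achieve a goal. All names here are hypothetical."""

    @abstractmethod
    def current_player(self):
        """Index of the player whose decision it is."""

    @abstractmethod
    def legal_actions(self):
        """The decisions available in the current state."""

    @abstractmethod
    def apply(self, action):
        """Return the successor state after taking a decision."""

    @abstractmethod
    def winners(self):
        """Players who reached the victory condition (possibly several,
        in a cooperative game), or an empty list while the game goes on."""
```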

The following is a (non-exhaustive) list of challenges to game AI which are hard to deal with using the current state-of-the-art. These challenges formed the basis for the discussions and investigations of the seminar.

  • Determining the limitations of MCTS and deep learning for computational game playing: The state-of-the-art in computational game playing combines Monte-Carlo Tree Search (MCTS) with deep convolutional networks that encode game knowledge. The recent successes of this approach in Go have made MCTS and deep learning the "go-to" techniques for implementing game AI. However, these techniques have many inherent limitations. MCTS needs extraordinary amounts of computing power and is therefore very expensive to use. While it can be parallelized easily, simply adding more computing power has diminishing returns. Moreover, there are many games for which MCTS is clearly not a suitable approach, for instance, games with a large branching factor for which it is hard to come up with heuristics that pinpoint the branches most likely to contain the strong moves. As for deep learning, now that the early enthusiasm has waned a little, the first criticisms, laying out its many limitations, are being published. Gaining insight into the limitations of MCTS and deep learning for game AI implementation will allow us to distinguish the games for which these techniques may be employed for strong game playing from those for which different approaches are needed.
  • Defining more appropriate game complexity measures: Game complexity is a measure which is supposed to indicate how difficult it is to implement game AI. It is usually expressed as the base-10 logarithm of the number of possible game states (a small worked example of this measure follows this list). Beyond a complexity of 100 (10^{100} game states), it is highly unlikely that a game will ever be "solved," i.e., played perfectly. Researchers therefore aim for superhuman rather than perfect play. For a long time Go was considered the pinnacle of complexity in game playing, boasting a game complexity of 360. However, in the game AI domain, games have been researched with a much higher game complexity than Go. Typical examples of such games are:
    • Arimaa, a 2-player deterministic, perfect-information board game with a game complexity of 402.
    • Stratego, a 2-player deterministic, imperfect-information board game with a game complexity of 535.
    • StarCraft, a typical Real-Time-Strategy video game, with a varying game-tree complexity (depending on the parameters of the scenario) which is measured in the tens of thousands.
  • Learning game playing under adverse conditions: In recent years, most research into game AI has moved towards "learning to play" rather than "implementing an algorithm." Game AI can learn from observing examples of human play (provided a large enough dataset is available) or from self-play. Such learning has led to strong results in some cases, but in many cases fails under adverse conditions. Examples of such conditions are:
    • Imperfect information, i.e., the results of decisions of the AI depending partly on unknown data.
    • Continuous action spaces, i.e., the AI in principle being allowed to take an unlimited number of decisions in a small time period; thus, an AI not only has to decide what actions it wants to take, but also how many and with which intervals.
    • Deceptive rewards, i.e., situations in which positive results achieved by the AI in the short term in practice drive it away from the ultimate goal of the game in the long term.
    To implement learning AI for wider classes of games, approaches must be devised to deal with such adverse conditions in systematic ways.
  • Implementing computational game AI for games with 3 or more players: Most research into game AI is concerned with zero-sum 2-player games. The reason is obvious: in such games, an objective "best move" always exists. In games that involve more than two mutually opposed players, there often is no obvious "best move." For instance, in a three-player game, if two of the players band together, the third one will in general have a very hard time winning, even when taking into account that the other two are collaborating. The main problem is that when one player is obviously doing better than the other two, it is to the advantage of the other two to collaborate against the presumptive winner. Therefore, in games with three or more players, it may be advantageous not to play better than the other players, in order not to become the target of a collaborative assault by the opponents. A pessimistic perspective, in which the AI assumes that the opponents will actually form a bloc, will in general lead to much worse play than a perspective in which the AI tries to form collaborations itself. This means that the AI must incorporate the attitudes of the other players in its reasoning, for instance in the form of player models. The topic of AI for games of three or more players has been studied very little - a notable exception being the study of Poker, which is a relatively easy game in this respect considering the simplicity of the required player models and the very small state-space and game-tree complexities. Hundreds of thousands of games for three or more players exist, which by itself means that this challenge needs investigation. Moreover, when translating research findings to real-world challenges, the existence of more than two players is a given in most realistic situations.
  • Implementing AI for games with open-ended action spaces: Certain classes of games have so-called "open-ended action spaces," i.e., the number of possible actions is essentially unlimited. One example of such a game type is interactive fiction: puzzle games which the player controls by typing sentences in plain English. While each game only understands a limited set of verbs and nouns, the player is generally unaware of this list. Designers of such games aim to allow the player to give any English command which is reasonable in the circumstances described by the game. Another example is the tabletop role-playing game, in which players are allowed to perform any action at all, and a game master determines (within the boundaries of a complex ruleset) what the result of the action is. Creating an AI for either a game master or a player of such a game requires the AI to have at least a basic understanding of the game world to be successful. In practice, studies into game AI focus almost exclusively on games where the action spaces are closed, which makes regular learning algorithms applicable. For games with open-ended action spaces, no successful approaches have been found as yet.
  • General Game Playing: In the domain of General Game Playing (GGP), games are defined by a set of rules, specified in a Game Description Language (GDL). The goal of researchers in this domain is to create an artificial intelligence which is able to play such a game based on only the rules. Yearly competitions are held in which researchers pit their AIs against each other in games which are unknown to them at the start of the competition. The state-of-the-art in such competitions is MCTS, enhanced according to some general assumptions on the types of games that need to be played. This approach is unsurprising, as MCTS does not require knowledge of the game in question in order to do reasonably well. In fact, this approach is so strong and so easy to implement that all competitors use it. The danger of the success of MCTS for GGP is that research in this area gets stuck at a dead end -- the same happened with research into traditional game AI when for decades researchers only worked on small improvements to minimax and alpha-beta pruning, until MCTS came around to shake things up. It is highly unlikely that a blind and ostensibly "stupid" approach such as MCTS is the be-all and end-all of GGP AI implementations. It is therefore of particular interest to investigate novel approaches to GGP which are not MCTS-based.
  • General Video Game Playing: General Video Game Playing (GVGP) aims at designing an AI agent which is capable of successfully playing previously unknown video games without human intervention. In the General Video Game AI (GVGAI) framework, video games are defined by a set of rules, sprites, and levels, specified in a Video Game Description Language (VGDL). The VGDL was initially proposed and designed at the 2012 Dagstuhl Seminar on Artificial and Computational Intelligence in Games. The GVGAI framework has been expanded to five different competition tracks: (1) single-player planning, (2) two-player planning, (3) learning (in which no forward model is given), (4) level generation, and (5) rule generation. In the planning tracks a forward model of every game is available; MCTS has been the state-of-the-art algorithm in these tracks. However, MCTS is not applicable to the learning track, as no forward model is given and thus no simulation of game playing is possible. Deep reinforcement learning is a potential approach for the GVGAI learning track, but has not been investigated yet. Other methods might have potential too. Determining the applicability of different methods to general video game playing is a novel and topical challenge. Of particular interest in this respect is the creation of an AI for the domain of general Real-Time-Strategy (RTS) games.
  • Computation for human-like play: Virtually all research into computational game AI focuses on building a game-playing AI which is as strong as possible. Strength can be measured objectively by pitting different AIs against each other. In video-game AI research, it has been recognized that playing strength is, in general, not a major goal -- instead, much research in video game AI is aimed at making the AI play in an entertaining, interesting, or human-like manner. MCTS is notoriously unsuitable for selecting human-like moves, as it is simply based on finding the best outcome for the game as a whole. However, in situations where humans play against an AI, whether for entertainment or training, it is desirable that the AI can play not only the best moves, but also moves which are interesting to explore or on par with how a human might play. Almost no research has yet been done into computational human-like AI, which makes it a worthy challenge to take on.
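As a small worked example of the complexity measure referred to in the list above: game-tree complexity is the base-10 logarithm of the (approximate) number of leaves of the game tree, roughly b^d for an average branching factor b and game length d. With the classic estimates for Go of b ≈ 250 and d ≈ 150, this reproduces the figure of 360 quoted above:

```python
import math

def game_tree_complexity(branching_factor, game_length):
    """log10(b ** d) = d * log10(b): the base-10 logarithm of the
    approximate number of leaves of the game tree."""
    return game_length * math.log10(branching_factor)

# Classic estimates for Go: ~250 legal moves per position, ~150 moves per game.
print(round(game_tree_complexity(250, 150)))  # -> 360
```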

The Dagstuhl seminar brought together computer scientists and industry experts with the common goals of gaining a deeper understanding of computational game AI, in particular to determine the limitations of the state-of-the-art, to find new uses for the state-of-the-art, to explore new problems in the domain of computational game AI, and to investigate novel approaches to implementing computational game AI. Industry experts came not only from companies which specifically work in game AI research, but also from companies which use game AI in their products.

During the seminar we not only held discussions investigating the topics theoretically, but also spent part of the time trying to achieve practical results. We did the same in the 2015 and 2017 seminars, where it was met with great enthusiasm and led to some strong follow-ups. As in the previous seminars, these practical sessions served partly to test out new ideas, and were partly competition-based: different approaches were used to implement AI for new problems, and were then compared to each other by running a competition.

New for this particular seminar was that we held expert talks during some of the evenings. These started with one or two experts giving a longer talk (between half an hour and an hour and a half) on one of their specialisms, followed by a longer Q&A session and a discussion. One evening was spent this way on using modern communication media to inform people about research (Tommy Thompson), one evening on the details of DeepMind's AlphaStar (Tom Schaul), and one evening on advanced search techniques in board games (Olivier Teytaud and Tristan Cazanave). These evenings were greatly appreciated by the participants, and should remain part of this series of seminars.

Reports on the working groups are presented on the following pages. Many of these working groups have led to collaborations, which will result in papers published at conferences and in journals in the coming year. All in all, the general impression was that the participants and the organizers found the seminar a great success, and an inspiration for future research.

Copyright Jialin Liu, Tom Schaul, Pieter Spronck, and Julian Togelius

Participants
  • Dan Ashlock (University of Guelph, CA) [dblp]
  • Yngvi Björnsson (Reykjavik University, IS) [dblp]
  • Alan Blair (UNSW - Sydney, AU) [dblp]
  • Bruno Bouzy (Nukkai - Paris, FR) [dblp]
  • Cameron Browne (Maastricht University, NL) [dblp]
  • Michael Buro (University of Alberta - Edmonton, CA) [dblp]
  • Duygu Cakmak (Creative Assembly - Horsham, GB) [dblp]
  • Tristan Cazanave (University Paris-Dauphine, FR) [dblp]
  • Alex J. Champandard (nucl.ai - Wien, AT) [dblp]
  • Michael Cook (Queen Mary University of London, GB) [dblp]
  • Peter I. Cowling (University of York, GB) [dblp]
  • Jakob Foerster (University of Toronto, CA)
  • Raluca D. Gaina (Queen Mary University of London, GB) [dblp]
  • Katja Hofmann (Microsoft Research - Cambridge, GB) [dblp]
  • Jialin Liu (Southern Univ. of Science and Technology - Shenzhen, CN) [dblp]
  • Setareh Maghsudi (TU Berlin, DE) [dblp]
  • Nantas Nardelli (University of Oxford, GB)
  • Mark J. Nelson (American University - Washington, US) [dblp]
  • Diego Perez Liebana (Queen Mary University of London, GB) [dblp]
  • Mike Preuß (Leiden University, NL) [dblp]
  • Sebastian Risi (IT University of Copenhagen, DK) [dblp]
  • Günter Rudolph (TU Dortmund, DE) [dblp]
  • Tómas Philip Rúnarsson (University of Iceland - Reykjavik, IS) [dblp]
  • Christoph Salge (New York University, US)
  • Spyridon Samothrakis (University of Essex - Colchester, GB) [dblp]
  • Tom Schaul (Google DeepMind - London, GB) [dblp]
  • Jacob Schrum (Southwestern University - Georgetown, US) [dblp]
  • Emily Short (Spirit AI - Oxford, GB) [dblp]
  • Chiara F. Sironi (Maastricht University, NL) [dblp]
  • Pieter Spronck (Tilburg University, NL) [dblp]
  • Nathan Sturtevant (University of Alberta - Edmonton, CA) [dblp]
  • Olivier Teytaud (Facebook - Paris, FR) [dblp]
  • Tommy Thompson (AI and Games - London, GB) [dblp]
  • Julian Togelius (New York University, US) [dblp]
  • Vanessa Volz (modl.ai - Copenhagen, DK) [dblp]
  • Hui Wang (Leiden University, NL) [dblp]
  • Mark Winands (Maastricht University, NL) [dblp]
  • Georgios N. Yannakakis (University of Malta - Msida, MT) [dblp]

Related Seminars
  • Dagstuhl-Seminar 12191: Artificial and Computational Intelligence in Games (2012-05-06 - 2012-05-11) (Details)
  • Dagstuhl-Seminar 15051: Artificial and Computational Intelligence in Games: Integration (2015-01-25 - 2015-01-30) (Details)
  • Dagstuhl-Seminar 17471: Artificial and Computational Intelligence in Games: AI-Driven Game Design (2017-11-19 - 2017-11-24) (Details)
  • Dagstuhl-Seminar 22251: Human-Game AI Interaction (2022-06-19 - 2022-06-24) (Details)
  • Dagstuhl-Seminar 24261: Computational Creativity for Game Development (2024-06-23 - 2024-06-28) (Details)

Classification
  • artificial intelligence / robotics
  • data structures / algorithms / complexity
  • optimization / scheduling

Keywords
  • computational intelligence
  • artificial intelligence
  • game theory
  • optimization
  • games