Research Meeting 23033

Game Measures and Player Experience

(Jan 15 – Jan 18, 2023)


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/23033

Organizers
  • Cameron Browne (Maastricht University, NL)
  • Alena Denisova (University of York, GB)
  • Vanessa Volz (modl.ai - Copenhagen, DK)

Motivation

Most games are consciously designed with a specific experience or vision in mind. Games are commonly designed for entertainment and competition, but self-expression, social critique, targeted learning, knowledge discovery, and physical and mental health are also valid design objectives. Determining whether an objective is fulfilled is often difficult due to the complexity of modern games and the variability of human responses. For this reason, games are commonly play-tested before being published. However, play-tests are expensive and time-consuming, and not all aspects of a game can be fully evaluated before it is published.

There is thus a need for more concentrated and systematic work on evaluating and characterising games, their artifacts, and player experience. Researchers have proposed approaches intended to assist game designers using methods from the fields of artificial and computational intelligence (AI and CI, respectively). Still, to our knowledge, there is a surprising lack of generality and validation regarding these methods, even in scientific publications on game design, and no central repository for such methods currently exists. We thus propose to organise efforts towards the development, analysis, and dissemination of these evaluation methods through an informal interdisciplinary research network, kickstarted by this seminar.

Copyright Cameron Browne, Alena Denisova, and Vanessa Volz

Classification
  • Artificial Intelligence
  • Human-Computer Interaction
  • Machine Learning