Dagstuhl Seminar 24211

Evaluation Perspectives of Recommender Systems: Driving Research and Education

(May 20 – May 24, 2024)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/24211

Organizers
  • Christine Bauer
  • Alan Said
  • Eva Zangerle

Motivation

Evaluation is an important cornerstone in the process of researching, developing, and deploying recommender systems. This Dagstuhl Seminar aims to shed light on the different, and potentially diverging or even contradictory, perspectives on the evaluation of recommender systems. Building on the discussions and outcomes of the PERSPECTIVES workshop series held at ACM RecSys 2021–2023, the seminar will bring together academia and industry to critically reflect on the state of recommender systems evaluation and to create a setting in which the field's evaluation practice can develop and mature.

While recommender systems research is largely applied, its evaluation builds on and intersects theories from information retrieval, machine learning, and human-computer interaction. Historically, the theories and evaluation approaches in these fields have differed considerably, and thoroughly evaluating recommender systems requires integrating all three perspectives. Hence, this seminar will bring together experts from these fields and serve as a vehicle for discussing and developing the state of the art and practice of evaluating recommender systems. The seminar will set the ground for developing recommender systems evaluation metrics, methods, and practices through collaborations and discussions between participants from diverse backgrounds, e.g., academic and industrial researchers and industry practitioners at both senior and junior career stages. We emphasize the importance of getting and keeping the big picture of a recommender system's performance in its context of use, for which it is essential to incorporate both the technical and the human element.
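To make the notion of "evaluation metrics" concrete, below is a minimal, illustrative Python sketch of one standard offline accuracy metric, nDCG@k, commonly used in recommender systems and information retrieval evaluation. The function and data are hypothetical, not taken from the seminar; such accuracy metrics capture only one of the many evaluation perspectives (alongside, e.g., diversity, fairness, and user experience) that the seminar addresses.

import math

def ndcg_at_k(ranked_items, relevant_items, k=10):
    """Normalized discounted cumulative gain for one user (binary relevance)."""
    # DCG: each relevant item in the top-k contributes 1/log2(rank + 2),
    # where rank is 0-based, so higher-ranked hits count more.
    dcg = sum(
        1.0 / math.log2(rank + 2)
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    # Ideal DCG: all relevant items placed at the top of the list.
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical example: a recommender's top-5 list vs. held-out test items.
print(ndcg_at_k(["a", "b", "c", "d", "e"], {"b", "e"}, k=5))  # ≈ 0.62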

We will lay the groundwork for the next generation of researchers, equipped to evaluate and advance recommender systems thoroughly.

Copyright Christine Bauer, Alan Said, and Eva Zangerle

Participants

Classification
  • Human-Computer Interaction
  • Information Retrieval
  • Machine Learning

Keywords
  • Recommender Systems
  • Evaluation
  • Information Retrieval
  • User Interaction
  • Intelligent Systems