02.03.14 - 07.03.14, Seminar 14101

Preference Learning

This seminar description was published on our web pages before the seminar and was used in the invitation to the seminar.

Motivation

The topic of "preferences" has recently attracted considerable attention in Artificial Intelligence (AI) research, notably in fields such as autonomous agents, non-monotonic reasoning, constraint satisfaction, planning, and qualitative decision theory. Preferences provide a means for specifying desires in a declarative way, which is a point of critical importance for AI. Drawing on past research on knowledge representation and reasoning, AI offers qualitative and symbolic methods for treating preferences that can usefully complement existing approaches from other fields, such as decision theory. The acquisition of preference information, however, is not always an easy task. Therefore, in addition to modeling languages and suitable representation formalisms, methods are needed for the automatic learning, discovery, modeling, and adaptation of preferences.

It is hence hardly surprising that methods for learning and constructing preference models from explicit or implicit preference information and feedback are among the recent research trends in disciplines such as machine learning, knowledge discovery, information retrieval, statistics, social choice theory, multiple criteria decision making, decision under risk and uncertainty, operations research, and others. In all these areas, considerable progress has been made on the representation and the automated learning of preference models. The goal of this Dagstuhl Seminar is to bring together international researchers in these areas, thereby stimulating interaction between these fields and advancing the state of the art in preference learning. Topics of interest to the seminar include, but are not limited to:

  • quantitative and qualitative approaches to modeling preference information;
  • preference extraction, mining, and elicitation;
  • methodological foundations of preference learning (learning to rank, ordered classification, active learning, learning monotone models, ...);
  • inference and reasoning about preferences;
  • mathematical methods for ranking;
  • applications of preference learning (web search, information retrieval, electronic commerce, games, personalization, recommender systems, ...).

The seminar program will consist of presentations of ongoing work by the participants, with ample time for formal and informal discussions. We anticipate several benefits of the seminar. First, we are of course interested in advancing the state of the art in preference learning from a theoretical, methodological, and application-oriented point of view. Beyond that, however, we also hope that the seminar will help to further consolidate this research field, which is still at an early stage of its development. Last but not least, our goal is to connect preference learning with closely related fields and research communities. Ideally, people from different communities will share their experiences and perspectives, and will familiarize themselves with each other's techniques and terminologies.