January 30 – February 4, 2005, Dagstuhl Seminar 05051
Probabilistic, Logical and Relational Learning - Towards a Synthesis
One of the central open questions of artificial intelligence is concerned with combining expressive knowledge representation formalisms such as relational and first-order logic with principled probabilistic and statistical approaches to inference and learning. This combination is needed in order to face the challenge of real-world learning and data mining problems in which the data are complex and heterogeneous and we are interested in finding useful predictive and/or descriptive patterns.
In this context, the terms probabilistic and statistical refer to the use of probabilistic representations and reasoning mechanisms grounded in probability theory, such as Bayesian networks, hidden Markov models, and probabilistic grammars, as well as to the use of statistical learning and inference techniques. Such representations have been successfully used across a wide range of applications and have resulted in a number of robust models for reasoning about uncertainty. The primary advantage of using probabilistic representations is that well-understood and principled statistical inference and learning algorithms exist for them.
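As a minimal illustration of this kind of probabilistic reasoning, consider a two-node Bayesian network Rain → WetGrass (the network and all numbers are hypothetical, chosen only for this sketch). The posterior probability of rain given wet grass follows from Bayes' rule:

```python
# Hypothetical parameters of a two-node Bayesian network Rain -> WetGrass
# (numbers are for illustration only, not from any real data).
p_rain = 0.3
p_wet_given_rain = {True: 0.9, False: 0.2}

# Marginal probability of wet grass, summing over both states of Rain.
p_wet = p_rain * p_wet_given_rain[True] + (1 - p_rain) * p_wet_given_rain[False]

# Bayes' rule: P(Rain=True | WetGrass=True).
p_rain_given_wet = p_rain * p_wet_given_rain[True] / p_wet
print(p_rain_given_wet)
```

Observing wet grass more than doubles the probability of rain here (from 0.3 to roughly 0.66), which is exactly the kind of principled belief update these representations support.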
The term learning refers to deriving the different aspects of the probabilistic model from data. Typically, learning algorithms are distinguished by the nature of the given data (fully or partially observed variables) and by the aspect being learned (the parameters of the probabilistic representation or the structure of the model). Statistical and Bayesian approaches provide a unified framework for learning a model, whether through model selection or by explicitly modeling a distribution over the models.
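The simplest case, parameter learning from fully observed data, amounts to counting. A minimal sketch for a hypothetical two-variable network Rain → WetGrass, using maximum-likelihood estimation (the data are invented for illustration):

```python
from collections import Counter

# Fully observed (Rain, WetGrass) samples -- hypothetical data.
data = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

# Maximum-likelihood estimate of P(WetGrass=True | Rain=r):
# count joint occurrences and normalise by the parent's count.
joint = Counter(data)
rain_counts = Counter(r for r, _ in data)

cpt = {r: joint[(r, True)] / rain_counts[r] for r in (True, False)}
print(cpt)  # conditional probability table keyed by the value of Rain
```

With partially observed variables, the counts themselves are unknown and one resorts to iterative schemes such as expectation-maximisation, which replaces the hard counts above with expected counts.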
The terms logical and relational refer to first-order logical and relational representations such as those studied within the fields of computational logic and database theory. The primary advantage of such expressive representations is that they allow one to elegantly and naturally represent complex situations involving a variety of objects as well as relations among those objects, which is not possible using simpler propositional or feature-vector based representations. Probabilistic, logical and relational learning thus aims at combining its three underlying constituents: statistical learning and probabilistic reasoning within logical or relational representations.
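To see what a relational representation buys over feature vectors, consider a small hypothetical example: family facts stored as a binary relation, and the first-order rule grandparent(X, Z) :- parent(X, Y), parent(Y, Z), evaluated by joining the relation with itself. No fixed-length feature vector captures this over an unbounded set of individuals:

```python
# Hypothetical family facts expressed as a binary relation.
parent = {("ann", "bob"), ("bob", "carl"), ("bob", "dora")}

# The rule grandparent(X, Z) :- parent(X, Y), parent(Y, Z),
# computed as a self-join on the shared variable Y.
grandparent = {
    (x, z)
    for (x, y1) in parent
    for (y2, z) in parent
    if y1 == y2
}
print(sorted(grandparent))  # [('ann', 'carl'), ('ann', 'dora')]
```

The same rule applies unchanged no matter how many individuals or parent facts are added, which is the flexibility that propositional representations lack.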
The goal of this seminar was to bring together researchers interested in the area of statistical, logical and relational learning. This allowed the participants to explore the foundations, challenges and research opportunities raised by this important open problem in artificial intelligence.
This workshop brought together a significant number of researchers from all over the world who are working on all aspects of probabilistic, logical and relational learning. It was also the first workshop on this topic with sufficient time for in-depth discussions, debates and working groups. It was exciting to see the progression through the week. It was clear that some common ground had been identified, yet this was just the start. There was a general feeling that the workshop was a success, and a lot of enthusiasm for a follow-on workshop.