Dagstuhl Seminar 21192
Approaches and Applications of Inductive Programming
(May 09 – May 12, 2021)
- Andrew Cropper (University of Oxford, GB)
- Luc De Raedt (KU Leuven, BE)
- Richard Evans (DeepMind - London, GB)
- Ute Schmid (Universität Bamberg, DE)
- Michael Gerke (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
Inductive programming addresses the problem of learning programs from incomplete specifications, typically from input/output examples. Researchers on this topic have backgrounds in diverse areas of computer science, including machine learning, artificial intelligence, declarative programming, program verification, and software engineering. Furthermore, inductive programming is of interest to researchers in cognitive science working on computational models of inductive learning, and to researchers in education, especially in cognitive tutoring. A breakthrough from basic research to mass-market applications was achieved by applying inductive programming techniques to programming-by-example support for end users of Microsoft Excel (FlashFill).
This seminar is a continuation of Dagstuhl Seminars 13502, 15442, 17382, and 19202. Each installment has had a specific focus. The first focused on bringing together different areas of research and applications of inductive programming. The second covered algorithmic methods in depth and their relations to cognitive modelling. The third addressed application areas such as data cleansing, teaching programming, and interactive training. The fourth explored the potential of inductive programming for explainable artificial intelligence (XAI), especially combinations with (deep) neural networks and with data science.
Based on the results of the fourth seminar, the focus of this fifth seminar will be on inductive programming as a powerful approach for explainable artificial intelligence ("IP for XAI"). Since inductive programming is a highly expressive approach to interpretable machine learning which allows us to naturally combine reasoning and learning, it offers promising methods for explanation generation, especially in combination with (deep) neural networks and with data science. For many real-world applications, it is necessary or advisable to involve the human as a teacher and judge of the machine-learned models. Therefore, a second focus of the seminar is to explore inductive programming in the context of new approaches to interactive machine learning and in relation to cognitive science research on human learning.
Expected outcomes of the seminar are:
- Identifying the specific contributions of inductive programming to machine learning research and applications of machine learning, especially identifying problems for which inductive programming approaches are more suited than standard machine learning approaches, including deep learning. The focus is on possibilities of combining (deep) neural approaches and (symbolic) inductive programming, especially with respect to new approaches to the comprehensibility of machine-learned models and on explainable AI.
- Discussing current applications of inductive programming in end-user programming and programming education and identifying further relevant areas of application.
- Strengthening the relation of inductive programming and data science, especially with respect to data cleansing and data wrangling.
- Establishing stronger relations between cognitive science research on inductive learning and inductive programming under the label of human-like computation and making use of cognitive principles in interactive machine learning to keep humans in the loop of decision making.
The goal of Inductive Programming (IP) is to provide methods for the induction of computer programs from data. Specifically, IP is the automated (or semi-automated) generation of a computer program from incomplete information, such as input-output examples, demonstrations, or computation traces. IP offers powerful approaches to learning from relational data and to learning from observations in the context of autonomous intelligent agents. IP is a form of machine learning, because an IP system should perform better given more data (i.e., more examples or experience). However, in contrast to standard ML approaches, IP approaches typically need only a small number of training examples. Furthermore, induced hypotheses are typically represented as logic or functional programs, and can therefore be inspected by a human. In that sense, IP is a type of interpretable machine learning which goes beyond the expressivity of other rule-learning approaches such as decision tree algorithms. IP is also a form of program synthesis, complementing deductive and transformational approaches. When specific algorithm details are difficult to determine, IP can be used to generate candidate programs from either user-provided data, such as test cases, or from data automatically derived from a formal specification. The most relevant application areas of IP techniques are end-user programming and data wrangling.
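The generate-and-test core of induction from input/output examples can be illustrated with a minimal sketch. The primitive set and function names below are purely illustrative assumptions, not taken from any particular IP system: candidate programs are compositions of primitives, enumerated in order of increasing length, and the shortest one consistent with all examples is returned.

```python
from itertools import product

# Hypothetical primitive set for illustration only.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return the first composition of primitives (as a tuple of names,
    applied left to right) consistent with all input/output examples,
    searched in order of increasing program length."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(program(i) == o for i, o in examples):
                return names
    return None

# Two examples suffice to pin down double(inc(x)) within this space.
print(synthesize([(1, 4), (2, 6)]))  # -> ('inc', 'double')
```

Real IP systems replace this blind enumeration with declarative bias, search heuristics, and far richer hypothesis spaces (e.g., recursive logic or functional programs), but the principle of searching for a small program consistent with few examples is the same.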
This seminar has been the fifth in a series, building on seminars 13502, 15442, 17382, and 19202. In the wake of the recent interest in deep learning approaches, mostly for end-to-end learning, it has been recognized that for practical applications, especially in critical domains, data-intensive black-box machine learning must be complemented with methods which can help to overcome problems with data quality and missing or erroneous labeling of training data, as well as to provide transparency and comprehensibility of learned models. To address these requirements, explainable artificial intelligence (XAI) has emerged as a new area of research on the one hand, and on the other hand there is renewed interest in bringing together learning and reasoning. These two areas of research were the focus of the 2021 seminar. Furthermore, recent developments to scale up IP methods to make them more applicable to complex real-world domains have been taken into account. Based on the outcomes of the fourth seminar (19202), the potential of IP as a powerful approach for explainable artificial intelligence ("IP for XAI") has been elaborated. Bringing together IP methods and deep learning approaches contributes to research on neural-symbolic integration. While two years ago (seminar 19202) the focus was on IP as an interpretable surrogate model, the 2021 seminar considered the different addressees of explanations and their needs for different types of explanations (e.g., verbal or example-based). For many real-world applications, it is necessary to involve the human as a teacher and judge of the machine-learned models. Therefore, a further topic of the seminar was to explore IP in the context of new approaches to interactive ML and their applications to automating data science and joint human-computer decision making.
- Lun Ai (Imperial College London, GB)
- Nada Amin (Harvard University - Allston, US)
- Martin Atzmüller (Universität Osnabrück, DE)
- Feryal Behbahani (DeepMind - London, GB)
- Thea Behrens (TU Darmstadt, DE)
- Andrew Cropper (University of Oxford, GB)
- James Cussens (University of Bristol, GB)
- Artur d'Avila Garcez (City - University of London, GB)
- Wang-Zhou Dai (Imperial College London, GB)
- Luc De Raedt (KU Leuven, BE)
- Thomas Demeester (Ghent University, BE)
- Amit Dhurandhar (IBM TJ Watson Research Center - Yorktown Heights, US)
- Sebastijan Dumancic (KU Leuven, BE)
- Kevin Ellis (Cornell University - Ithaca, US)
- Richard Evans (DeepMind - London, GB)
- Cesar Ferri Ramirez (Technical University of Valencia, ES)
- Bettina Finzel (Universität Bamberg, DE)
- Peter Flach (University of Bristol, GB)
- Johannes Fürnkranz (Johannes Kepler Universität Linz, AT)
- Manuel Garcia-Piqueras (University of Castilla-La Mancha, ES)
- José Hernández-Orallo (Technical University of Valencia, ES)
- Céline Hocquette (Imperial College London, GB)
- Gonzalo Jaimovitch López (Technical University of Valencia, ES)
- Frank Jäkel (TU Darmstadt, DE)
- Susumu Katayama (University of Miyazaki, JP)
- Tomáš Kliegr (University of Economics - Prague, CZ)
- Stefan Kramer (Universität Mainz, DE)
- Maithilee Kunda (Vanderbilt University, US)
- Sara Magliacane (University of Amsterdam, NL)
- Roman Manevich (Facebook - London, GB)
- Fernando Martinez-Plumed (European Commission - Sevilla, ES)
- Pasquale Minervini (University College London, GB)
- Stephen H. Muggleton (Imperial College London, GB)
- Stassa Patsantzis (Imperial College London, GB)
- Johannes Rabold (Universität Bamberg, DE)
- Claude Sammut (UNSW - Sydney, AU)
- Stephan Scheele (Universität Bamberg, DE)
- Ute Schmid (Universität Bamberg, DE)
- Javier Segovia-Aguas (UPF - Barcelona, ES)
- Gustavo Soares (Microsoft Corporation - Redmond, US)
- Stefano Teso (University of Trento, IT)
- Jan Tinapp (bidt - München, DE)
- Dagstuhl Seminar 13502: Approaches and Applications of Inductive Programming (2013-12-08 - 2013-12-11)
- Dagstuhl Seminar 15442: Approaches and Applications of Inductive Programming (2015-10-25 - 2015-10-30)
- Dagstuhl Seminar 17382: Approaches and Applications of Inductive Programming (2017-09-17 - 2017-09-20)
- Dagstuhl Seminar 19202: Approaches and Applications of Inductive Programming (2019-05-12 - 2019-05-17)
- Dagstuhl Seminar 23442: Approaches and Applications of Inductive Programming (2023-10-29 - 2023-11-03)
- Artificial Intelligence
- Human-Computer Interaction
- Machine Learning
- Interpretable Machine Learning
- Explainable Artificial Intelligence
- Interactive Learning
- Human-like Computing
- Inductive Logic Programming