
Dagstuhl Seminar 19202

Approaches and Applications of Inductive Programming

( May 12 – May 17, 2019 )


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/19202

Motivation

Inductive programming addresses the problem of learning programs from incomplete specifications, typically from input/output examples. This area of research was initiated in artificial intelligence in the 1960s, exploring the complex cognitive processes involved in producing program code that satisfies some specification. Inductive programming can be seen as a subdomain of machine learning where the hypothesis space consists of classes of computer programs. Researchers working on this topic have their backgrounds in diverse areas of computer science, namely in machine learning, artificial intelligence, declarative programming, program verification, and software engineering. Furthermore, inductive programming is of interest to researchers in cognitive science working on computational models of inductive learning from experience, and to researchers in education, especially in intelligent tutoring. A breakthrough from basic research to mass-market applications was achieved by applying inductive programming techniques to programming-by-example support for end-users of Microsoft Excel (FlashFill). This Dagstuhl Seminar is planned as a continuation of Dagstuhl Seminars 13502, 15442, and 17382. In the first seminar, the focus was on exploring the different areas of basic research and applications of inductive programming. In the second seminar, in-depth coverage of algorithmic methods was provided and the relation to cognitive modeling was explored. In the third seminar, the focus was on different application areas such as data cleansing, teaching programming, and interactive training.

In the fourth seminar, focus will be on the potential of inductive programming

  • as an approach to explainable AI,
  • for support in data science,
  • in relation to neural computation, especially deep networks, and
  • as a special style of human-like computing.
Copyright Luc De Raedt, Richard Evans, Stephen H. Muggleton, and Ute Schmid

Summary

Inductive programming addresses the automated or semi-automated generation of computer programs from incomplete information such as input-output examples, constraints, computation traces, demonstrations, or problem-solving experience [11]. The generated – typically declarative – program has the status of a hypothesis which has been generalized by induction. That is, inductive programming can be seen as a special approach to machine learning. In contrast to standard machine learning, only a small number of training examples is necessary. Furthermore, learned hypotheses are represented as logic or functional programs, that is, they are represented on the symbol level and are therefore inspectable and comprehensible [36, 15, 37, 29]. On the other hand, inductive programming is a special approach to program synthesis. It complements deductive and transformational approaches [39, 25, 4]. In cases where specific algorithm details are hard for humans to figure out, inductive reasoning can be used to generate program candidates either from user-provided data such as test cases or from data automatically derived from a formal specification [35]. Finally, symbolic approaches can be combined with probabilistic methods [8, 9].
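The idea of a hypothesis space consisting of programs can be made concrete with a toy generate-and-test synthesizer. The miniature DSL and function names below are purely illustrative inventions, not taken from any system discussed here; real systems prune the search space far more aggressively:

```python
from itertools import product

# Hypothetical miniature DSL: unary integer operations composed into programs.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return the first composition of primitives consistent with all
    input/output examples, trying shorter programs first."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(x, ns=names):
                for n in ns:
                    x = PRIMITIVES[n](x)
                return x
            # Keep only hypotheses consistent with every example.
            if all(run(i) == o for i, o in examples):
                return list(names)
    return None  # no consistent program within the depth bound

# Learn a program from two examples: f(2) = 5, f(3) = 7.
print(synthesize([(2, 5), (3, 7)]))  # -> ['double', 'inc']
```

Note that two examples already suffice here, in line with the small sample sizes typical of inductive programming, and that the returned hypothesis is a symbolic program a human can inspect.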

Inductive program synthesis has been of interest to researchers in artificial intelligence since the late sixties [2]. On the one hand, the complex cognitive processes involved in producing program code which satisfies some specification are investigated; on the other hand, methodologies and techniques for automating parts of the program development process are explored. One of the most relevant areas of application of inductive programming techniques is end-user programming [5, 22, 6]. For example, the Microsoft Excel plug-in FlashFill synthesizes programs from a small set of observations of user behavior [15, 14, 13]. Related applications are in process mining and in data wrangling [19, 21]. Inductive programming in general offers powerful approaches to learning from relational data [30, 23] and to learning from observations in the context of autonomous intelligent agents [28, 20, 36]. Furthermore, inductive programming can be applied in the context of teaching programming [38, 41]. A recent domain of interest is how to combine inductive programming with black-box approaches, especially in the context of (deep) neural networks [10] and in data science.
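The interaction style of such programming-by-example tools can be sketched with a toy learner over a hypothetical string DSL. This is only meant to convey the user experience; FlashFill itself searches a much richer string-transformation DSL with far more efficient version-space techniques [13]:

```python
# Toy programming-by-example for strings: enumerate a tiny space of
# token-selection and casing operations and return one consistent with
# the user's examples. The DSL here is an illustrative invention.
def candidates():
    for idx in range(3):  # which whitespace-separated token to select
        for op in (str.upper, str.lower, str.title, str):
            yield (idx, op), (lambda s, i=idx, f=op: f(s.split()[i]))

def learn(examples):
    """Return (description, program) for the first candidate that
    reproduces every input/output example, or None."""
    for desc, prog in candidates():
        if all(prog(i) == o for i, o in examples):
            return desc, prog
    return None

# Two demonstrations by the user suffice to fix the transformation:
desc, prog = learn([("ada lovelace", "Lovelace"), ("alan turing", "Turing")])
print(prog("kurt goedel"))  # -> "Goedel"
```

As in the spreadsheet setting, the user supplies a couple of worked cells and the synthesized program is then applied to the remaining rows.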

Relation to Previous Dagstuhl-Seminars

The seminar is a continuation of Dagstuhl Seminars 13502, 15442, and 17382. In the first seminar, the focus was on establishing the research community by exploring the different areas of basic research and applications of inductive programming and identifying commonalities and differences in methods and goals. In the second seminar, more in-depth coverage of algorithmic methods was provided and the relation of inductive programming to cognitive modeling was explored. The third seminar had its main focus on applications in data cleansing, teaching programming, and interactive training. Furthermore, first proposals for neural approaches to learning for inductive programming were presented.

Besides many new insights from the discussions, visible outcomes of the previous seminars are:

  • Muggleton, S.H., Schmid, U., Zeller, C., Tamaddoni-Nezhad, A. and T. Besold (2019). Ultra-strong machine learning – comprehensibility of programs learned with ILP. Machine Learning, 107(7), 1119–1140.
  • Schmid, U., Zeller, C., Besold, T., Tamaddoni-Nezhad, A., & Muggleton, S.H. (2017). How does predicate invention affect human comprehensibility?. In Alessandra Russo and James Cussens, editors, Proceedings of the 26th International Conference on Inductive Logic Programming (ILP 2016), pp. 52-67, Springer.
  • Hernández-Orallo, J., Martínez-Plumed, F., Schmid, U., Siebers, M., & Dowe, D. L. (2016). Computer models solving intelligence test problems: Progress and implications. Artificial Intelligence, 230, 74-107.
  • Gulwani, S., Hernández-Orallo, J., Kitzelmann, E., Muggleton, S. H., Schmid, U., & Zorn, B. (2015). Inductive programming meets the real world. Communications of the ACM, 58(11), 90-99.
  • https://en.wikipedia.org/wiki/Inductive_programming
  • NIPS’2016 workshop on Neural Nets and Program Induction involving Stephen Muggleton.
  • A collaboration in the context of the EPSRC funded Human-Like Computing Network+ headed by Muggleton (see http://hlc.doc.ic.ac.uk/).

For the fourth seminar, we extended our invitation to researchers from deep learning and to researchers addressing statistical machine learning and probabilistic reasoning. The focus of the fourth seminar was on the potential of inductive programming for explainable AI, especially in combination with (deep) neural networks and with data science.

Inductive Programming as an Approach to Explainable AI

Recently it has been recognized that explainability is crucial for machine learning to be usable in real-world domains – especially those where erroneous decisions might be harmful to humans. Consequently, interfaces which explain (aspects of) classifier decisions have been proposed – especially in the context of deep learning [32, 3]. The notion of explainability was already addressed in the early days of machine learning: during the 1980s, Michie defined machine learning in terms of two orthogonal axes of performance: predictive accuracy and comprehensibility of generated hypotheses. Since predictive accuracy was readily measurable and comprehensibility was not, later definitions in the 1990s, such as that of Mitchell [27], tended to use a one-dimensional approach to machine learning based solely on predictive accuracy, ultimately favouring statistical over symbolic machine learning approaches.

Inductive Programming and Neural Computation

The deep learning community has been interested in taking on challenging tasks from the artificial intelligence community. It is therefore no surprise that it has also started to look into inductive and automatic programming. In particular, it has contributed several mixtures of traditional computational models with those of neural networks. For instance, the Neural Turing Machine [12] integrates a neural network with an external memory and is able to learn simple algorithms such as copy and sort; the neural programmer-interpreter [31] is a recurrent neural network that learns to represent and execute programs from program traces; [33] present an end-to-end differentiable interpreter for the programming language Forth, and [34, 24] do so for a declarative, Prolog-like language. The central goal in these approaches is to obtain an end-to-end differentiable model. While initial results are promising, the approaches still require large amounts of training data and do not yet scale well. This contrasts with the more traditional symbolic approaches to inductive programming, and thus a key opportunity is to further cross-fertilize these two approaches.

Inductive Programming and Human-like Computing

The human ability to master complex demands is to a large extent based on the ability to exploit previous experiences. Based on our experience, we are able to predict characteristics or reactions of (natural or man-made, inanimate or animate) objects, we can reason about possible outcomes of actions, and we can apply previously successful routines and strategies to new tasks and problems. In philosophy, psychology, and artificial intelligence, researchers have proposed that the core process for expanding knowledge, that is, for constructing hypotheses in such a way that knowledge from previous experience can be transferred to new situations, is inductive inference [18, 16, 40].

One special aspect of induction is that humans are able to acquire complex, productive rule sets from experience. Following Chomsky, rules are productive when they can be applied in situations of varying complexity. Typical examples of such rule sets are knowledge about natural language grammar, recursive concepts such as ancestor, and recursive problem-solving strategies. For example, if humans have learned how to solve Tower of Hanoi problems with three and four discs, at least some of them are able to generalize the underlying strategy to solve problems with an arbitrary number of discs.
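The generalized strategy can be written down as the familiar recursive rule set, which is productive in the sense above because the same three rules apply to any number of discs:

```python
# Recursive Tower of Hanoi strategy: to move n discs from source to
# target, move n-1 discs aside, move the largest disc, then move the
# n-1 discs on top of it. Peg names "A"/"B"/"C" are arbitrary labels.
def hanoi(n, source="A", target="C", spare="B"):
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)   # clear the top n-1
            + [(source, target)]                  # move the largest disc
            + hanoi(n - 1, spare, target, source))  # restack the n-1

moves = hanoi(3)
print(len(moves))  # -> 7, i.e. 2**n - 1 moves for n discs
```

A learner that induces this rule set from the 3- and 4-disc traces has acquired a program, not a lookup table, which is exactly the kind of generalization inductive programming targets.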

Inductive programming provides mechanisms to generate such productive, recursive rule sets. Examples of recent work on using inductive programming to model this learning capability of human cognition are [26, 20, 36, 17, 23]. Therefore, it might be fruitful for cognitive scientists to get acquainted with inductive programming as one approach to modeling the acquisition of complex knowledge structures. On the other hand, knowledge gained from experiments on human problem solving, concept learning, and language acquisition can be a source of inspiration for new algorithmic approaches to inductive programming.

Objectives and Expected Outcomes of the Seminar

The previous seminars resulted in new collaborations between researchers from different backgrounds, as documented in joint publications, and we expect that these collaborations will continue, deepen, and extend, resulting not only in further joint publications but also in joint research projects.

In the fourth seminar, we continued and extended previous discussions addressing the following aspects:

  • Identifying the specific contributions of inductive programming to machine learning research and applications of machine learning, especially identifying problems for which inductive programming approaches are more suited than standard machine learning approaches, including deep learning and probabilistic programming. Focus here is on possibilities of combining (deep) neural approaches or probabilistic programming with (symbolic) inductive programming, especially with respect to new approaches to comprehensibility of machine learned models and on explainable AI.
  • Establishing criteria for evaluating inductive programming approaches in comparison to each other and in comparison to other approaches of machine learning and providing a set of benchmark problems.
  • Discussing current applications of inductive programming in end-user programming and programming education and identifying further relevant areas of application.
  • Establishing stronger relations between cognitive science research on inductive learning and inductive programming under the label of human-like computation.
  • Strengthening the relation of inductive programming and data science, especially with respect to data cleansing and data wrangling.

Concluding Remarks and Future Plans

In the wrap-up session, we decided to move the IP webpage to a wiki and encouraged all participants to make their systems, tutorial/lecture slides, and publications available there. The grand IP challenge we came up with in 2017 still stands:

An IP program should invent an algorithm publishable in a serious journal (e.g., an integer factorization algorithm) or win a programming competition!

References

  1. Daniel W. Barowy, Sumit Gulwani, Ted Hart, and Benjamin Zorn. Flashrelate: Extracting relational data from semi-structured spreadsheets using examples. SIGPLAN Not., 50(6):218–228, June 2015.
  2. A. W. Biermann, G. Guiho, and Y. Kodratoff, editors. Automatic Program Construction Techniques. Macmillan, New York, 1984.
  3. Alexander Binder, Sebastian Bach, Gregoire Montavon, Klaus-Robert Müller, and Wojciech Samek. Layer-wise relevance propagation for deep neural network architectures. In Information Science and Applications (ICISA) 2016, pages 913–922. Springer, 2016.
  4. Rastislav Bodik and Emina Torlak. Synthesizing programs with constraint solvers. In CAV, page 3, 2012.
  5. A. Cypher, editor. Watch What I Do: Programming by Demonstration. MIT Press, Cambridge, MA, 1993.
  6. Allen Cypher, Mira Dontcheva, Tessa Lau, and Jeffrey Nichols, editors. No Code Required: Giving Users Tools to Transform the Web. Elsevier, 2010.
  7. Luc De Raedt, Hendrik Blockeel, Samuel Kolb, Stefano Teso, and Gust Verbruggen. Elements of an automatic data scientist. In International Symposium on Intelligent Data Analysis, pages 3–14. Springer, 2018.
  8. Luc De Raedt and Angelika Kimmig. Probabilistic (logic) programming concepts. Machine Learning, 100(1):5–47, 2015.
  9. Richard Evans and Edward Grefenstette. Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61:1–64, 2018.
  10. Richard Evans, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. Can neural networks understand logical entailment? arXiv preprint arXiv:1802.08535, 2018.
  11. P. Flener and U. Schmid. Inductive programming. In C. Sammut and G. Webb, editors, Encyclopedia of Machine Learning, pages 537–544. Springer, 2010.
  12. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014.
  13. Sumit Gulwani. Automating string processing in spreadsheets using input-output examples. In 38th Symposium on Principles of Programming Languages. ACM, 2011.
  14. Sumit Gulwani, William R. Harris, and Rishabh Singh. Spreadsheet data manipulation using examples. Communications of the ACM, 55(8):97–105, 2012.
  15. Sumit Gulwani, José Hernández-Orallo, Emanuel Kitzelmann, Stephen H. Muggleton, Ute Schmid, and Benjamin G. Zorn. Inductive programming meets the real world. Communications of the ACM, 58(11):90–99, 2015.
  16. Ulrike Hahn, Todd M Bailey, and Lucy BC Elvin. Effects of category diversity on learning, memory, and generalization. Memory & Cognition, 33(2):289–302, 2005.
  17. J. Hernández-Orallo, D. L. Dowe, and M. V. Hernández-Lloreda. Universal psychometrics: Measuring cognitive abilities in the machine kingdom. Cognitive Systems Research, 27(0):50–74, 2014.
  18. J.H. Holland, K.J. Holyoak, R.E. Nisbett, and P.R. Thagard. Induction – Processes of Inference, Learning, and Discovery. MIT Press, Cambridge, MA, 1986.
  19. Samuel Kolb, Sergey Paramonov, Tias Guns, and Luc De Raedt. Learning constraints in spreadsheets and tabular data. Machine Learning, 106(9-10):1441–1468, 2017.
  20. P. Langley and D. Choi. A unified cognitive architecture for physical agents. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, Boston, MA, 2006. AAAI Press.
  21. Vu Le and Sumit Gulwani. Flashextract: A framework for data extraction by examples. ACM SIGPLAN Notices, 49(6):542–553, 2014.
  22. Henry Lieberman, editor. Your Wish is My Command: Programming by Example. Morgan Kaufmann, San Francisco, 2001.
  23. D. Lin, E. Dechter, K. Ellis, J.B. Tenenbaum, and S.H. Muggleton. Bias reformulation for one-shot function induction. In Proceedings of the 23rd European Conference on Artificial Intelligence (ECAI 2014), pages 525–530, Amsterdam, 2014. IOS Press.
  24. Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. Deepproblog: Neural probabilistic logic programming. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 3749–3759. Curran Associates, Inc., 2018.
  25. Zohar Manna and Richard Waldinger. A deductive approach to program synthesis. ACM Transactions on Programming Languages and Systems, 2(1):90–121, 1980.
  26. G. F. Marcus. The Algebraic Mind: Integrating Connectionism and Cognitive Science. Bradford, Cambridge, MA, 2001.
  27. Tom Mitchell. Machine learning. McGraw Hill, 1997.
  28. Bogdan Moldovan, Plinio Moreno, Davide Nitti, José Santos-Victor, and Luc De Raedt. Relational affordances for multiple-object manipulation. Auton. Robots, 42(1):19–44, 2018.
  29. S.H. Muggleton, U. Schmid, C. Zeller, A. Tamaddoni-Nezhad, and T. Besold. Ultra-strong machine learning – comprehensibility of programs learned with ILP. Machine Learning, 2018.
  30. Stephen H. Muggleton, Dianhuan Lin, and Alireza Tamaddoni-Nezhad. Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited. Machine Learning, 100(1):49–73, 2015.
  31. Scott E. Reed and Nando de Freitas. Neural programmer-interpreters. CoRR, abs/1511.06279, 2015.
  32. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144. ACM, 2016.
  33. Sebastian Riedel, Matko Bosnjak, and Tim Rocktäschel. Programming with a differentiable forth interpreter. CoRR, abs/1605.06640, 2016.
  34. Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. CoRR, abs/1705.11040, 2017.
  35. Reudismam Rolim, Gustavo Soares, Loris D’Antoni, Oleksandr Polozov, Sumit Gulwani, Rohit Gheyi, Ryo Suzuki, and Bjoern Hartmann. Learning syntactic program transformations from examples. arXiv preprint arXiv:1608.09000, 2016.
  36. Ute Schmid and Emanuel Kitzelmann. Inductive rule learning on the knowledge level. Cognitive Systems Research, 12(3):237–248, 2011.
  37. Ute Schmid, Christina Zeller, Tarek Besold, Alireza Tamaddoni-Nezhad, and Stephen Muggleton. How does predicate invention affect human comprehensibility? In International Conference on Inductive Logic Programming, pages 52–67. Springer, 2016.
  38. Rishabh Singh, Sumit Gulwani, and Armando Solar-Lezama. Automated feedback generation for introductory programming assignments. ACM SIGPLAN Notices, 48(6):15–26, 2013.
  39. Douglas R. Smith. The synthesis of LISP programs from examples: A survey. In Automatic Program Construction Techniques, pages 307–324. Macmillan, 1984.
  40. J. Tenenbaum, T.L. Griffiths, and C. Kemp. Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7):309–318, 2006.
  41. Christina Zeller and Ute Schmid. Automatic generation of analogous problems to help resolving misconceptions in an intelligent tutor system for written subtraction. In Proceedings of the Workshop on Computational Analogy at the 24th International Conference on Case Based Reasoning (ICCBR 2016, Atlanta, GA, 31th October to 2nd November 2016), 2016.
Copyright Ute Schmid and Luc De Raedt

Participants
  • Eli Bingham (Uber AI Labs - San Francisco, US) [dblp]
  • Ivan Bratko (University of Ljubljana, SI) [dblp]
  • Maurice Chandoo (Leibniz Universität Hannover, DE) [dblp]
  • Lidia Contreras-Ochando (Technical University of Valencia, ES) [dblp]
  • Andrew Cropper (University of Oxford, GB) [dblp]
  • Luc De Raedt (KU Leuven, BE) [dblp]
  • Amit Dhurandhar (IBM TJ Watson Research Center - Yorktown Heights, US) [dblp]
  • Kevin Ellis (MIT - Cambridge, US) [dblp]
  • Richard Evans (Google DeepMind - London, GB) [dblp]
  • Cesar Ferri Ramirez (Technical University of Valencia, ES) [dblp]
  • Johannes Fürnkranz (TU Darmstadt, DE) [dblp]
  • Elena Leah Glassman (Harvard University - Cambridge, US) [dblp]
  • José Hernández-Orallo (Technical University of Valencia, ES) [dblp]
  • Céline Hocquette (Imperial College London, GB) [dblp]
  • Tomáš Kliegr (University of Economics - Prague, CZ) [dblp]
  • Mark Law (Imperial College London, GB) [dblp]
  • Eneldo Loza Mencía (TU Darmstadt, DE) [dblp]
  • Stephen H. Muggleton (Imperial College London, GB) [dblp]
  • Stassa Patsantzis (Imperial College London, GB) [dblp]
  • Ruzica Piskac (Yale University - New Haven, US) [dblp]
  • Johannes Rabold (Universität Bamberg, DE) [dblp]
  • Harald Ruess (fortiss GmbH - München, DE) [dblp]
  • Ute Schmid (Universität Bamberg, DE) [dblp]
  • Michael Siebers (Universität Bamberg, DE) [dblp]
  • Armando Solar-Lezama (MIT - Cambridge, US) [dblp]
  • Gust Verbruggen (KU Leuven, BE) [dblp]

Related Seminars
  • Dagstuhl Seminar 13502: Approaches and Applications of Inductive Programming (2013-12-08 - 2013-12-11) (Details)
  • Dagstuhl Seminar 15442: Approaches and Applications of Inductive Programming (2015-10-25 - 2015-10-30) (Details)
  • Dagstuhl Seminar 17382: Approaches and Applications of Inductive Programming (2017-09-17 - 2017-09-20) (Details)
  • Dagstuhl Seminar 21192: Approaches and Applications of Inductive Programming (2021-05-09 - 2021-05-12) (Details)
  • Dagstuhl Seminar 23442: Approaches and Applications of Inductive Programming (2023-10-29 - 2023-11-03) (Details)
  • Dagstuhl Seminar 25491: Approaches and Applications of Inductive Programming (2025-11-30 - 2025-12-05) (Details)

Classification
  • artificial intelligence / robotics
  • programming languages / compiler
  • society / human-computer interaction

Keywords
  • inductive logic programming
  • end-user programming
  • probabilistic programming
  • human-like computing
  • explainable AI