
Dagstuhl Seminar 14381

Neural-Symbolic Learning and Reasoning

( Sep 14 – Sep 19, 2014 )

The goal of this Dagstuhl seminar is to build bridges between symbolic and sub-symbolic reasoning and learning representations using computer vision as a catalyst application. This will require an ability to handle big data and lifelong learning in computer vision, and enable the systematic evaluation of neural-symbolic frameworks and systems. The seminar's main focus is the integration of methods and techniques from neural computation and machine learning, cognitive science and applied logic, and visual information processing. The expected outcomes include a neural-symbolic manifesto with an in-depth re-thinking of learning models capable of incorporating symbolic representations and the definition of a neural-symbolic big-data challenge and evaluation framework.

Computational systems increasingly have to deal with huge data sets, e.g. billions of videos on the web. A major research challenge, therefore, is how computing will respond to the needs of society in a world where data grows exponentially. Real-world data is noisy, often incomplete and high-dimensional. The computational systems of the 21st century will have to be robust, flexible, modular and expressive in order to meet this challenge. The techniques already being applied to tackle the big-data problem can be divided into symbolic (e.g. spatial logic, inductive logic programming) and sub-symbolic (e.g. neural networks, statistical/graphical models), supervised (using labelled data), unsupervised (using unlabelled data) and semi-supervised, and shallow (e.g. support vector machines) and deep-learning techniques (e.g. deep belief networks). Deep network representations allow the sharing of common features and the selective composition of features, which can bring about efficient computation and good generalisation. While results in computer vision indicate that the deep learning approach is promising, considerable research is needed to realise the goals of modularity, robustness and flexibility. In particular, a better understanding of the processes underlying concept formation (e.g. the concept of a shape) is required to enable a more systematic network set-up, robust learning and validation (i.e. explanation), and the integration of deep networks with state-of-the-art symbolic systems. To this end, neural-symbolic systems seek to combine the reasoning capacities of logic with the learning capacities of network models. In a neural-symbolic system, neural networks provide the machinery for parallel computation and robust learning, while logic provides knowledge representation, reasoning and explanation capabilities to the neural models, facilitating transfer learning and the interaction between the models and the world.
In this integrated model, no conflict arises between a continuous and a discrete component of the system. Instead, a tightly-coupled hybrid system exists that is continuous by nature, but that has a high-level, discrete interpretation. Neural-symbolic systems have applications in knowledge acquisition areas such as visual intelligence, where a system needs to learn to adapt to changes in the environment, and to reason about what has been learned in order to respond to a new situation.
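The tight coupling described above can be made concrete in a small sketch. In the spirit of the connectionist translation of logic programs discussed in [1] (CILP-style), the toy Python code below compiles a propositional program into a network whose single forward pass computes the program's immediate-consequence operator; iterating the pass reaches the program's least fixpoint. The program, atom names and thresholds are illustrative assumptions, not part of any particular system.

```python
import numpy as np

# Illustrative propositional program: b. c. a :- b, c.
atoms = ["a", "b", "c"]
rules = [("a", ["b", "c"]), ("b", []), ("c", [])]

n = len(atoms)
idx = {s: i for i, s in enumerate(atoms)}

# One hidden unit per rule: it fires iff every body atom is active.
W_in = np.zeros((len(rules), n))    # input layer -> hidden layer
b_h = np.zeros(len(rules))          # hidden thresholds
W_out = np.zeros((n, len(rules)))   # hidden -> output (disjunction of rules per head)
for r, (head, body) in enumerate(rules):
    for atom in body:
        W_in[r, idx[atom]] = 1.0
    b_h[r] = len(body) - 0.5        # exceeded only when all body atoms are on
    W_out[idx[head], r] = 1.0

step = lambda x: (x > 0).astype(float)  # threshold activation

def tp(v):
    """One forward pass = one application of the immediate-consequence operator."""
    return step(W_out @ step(W_in @ v - b_h) - 0.5)

v = np.zeros(n)           # start from the empty interpretation
for _ in range(n + 1):    # iterate to the least fixpoint
    v = tp(v)
print(sorted(a for a in atoms if v[idx[a]] > 0))  # -> ['a', 'b', 'c']
```

The discrete interpretation of the network is exactly the logic program, while the weights and thresholds remain amenable to gradient-based adjustment in a trainable variant.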

Several challenges arise in this context, which can be categorized into three major topics for discussion:
  • Symbolic knowledge representation, reasoning and learning by connectionist systems: includes comparisons with purely-symbolic and purely-connectionist models, and the representation of temporal, modal, commonsense, relational, first-order and higher-order reasoning, emergence, connectivity, abstraction, causality and analogy.
  • Extraction of high-level concepts and knowledge from complex networks: deals with issues of efficient and effective knowledge extraction from very large networks, comprehensibility, explanation, validation, maintenance and transfer learning.
  • Applications in vision, robotics, simulation and the web: includes learning and description of actions and other high-level concepts from large video, sound and sensor data, noise robustness, gap-filling and anomaly detection in surveillance data, etc.
From a more practical perspective, complex systems engineering questions also arise. Most models in science rely on numerical models, while the high-level concepts referred to above tend to be feature-based. How, and to what degree, should we integrate geometric approaches into representation languages while preserving tractability as a major criterion for large-scale reasoning and robust learning? How do we assist users in modelling, and how can we semi-automatically transform this modelled knowledge for use in analogous domains? Humans categorize and reason based on analogies and similarities. Traditional (classical) logic-based reasoning is rigid in the sense that it produces either correct answers or no answers. Our goal of flexibility requires support for gradual correctness and the handling of incomplete or contradictory information. Approximate and non-classical reasoning is needed so that learning from big data can incorporate analogy, similarity and commonsense reasoning.
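On the knowledge-extraction topic, the simplest sound procedure enumerates a trained network's behaviour over all binary inputs and reads off a DNF description of its positive class. This is only feasible for tiny input spaces, and the "network" below is a hypothetical stand-in function rather than a real trained model, but the sketch conveys the comprehensibility goal: turning a black-box classifier into inspectable rules.

```python
from itertools import product

# Hypothetical stand-in for a trained binary classifier over 3 boolean inputs.
# Real extraction works from learned weights; exhaustive enumeration is a
# baseline that is sound and complete but exponential in the input size.
inputs = ["x1", "x2", "x3"]
net = lambda x1, x2, x3: int(x1 and (x2 or not x3))

# Collect every input vector the network classifies positively.
positive = [v for v in product([0, 1], repeat=3) if net(*v)]

# Each positive vector yields one conjunctive rule (a DNF term).
terms = []
for v in positive:
    lits = [(name if bit else f"not {name}") for name, bit in zip(inputs, v)]
    terms.append(" and ".join(lits))
print(" OR ".join(f"({t})" for t in terms))
```

Practical extraction methods trade this exhaustive soundness for scalability, e.g. by querying the network only on inputs near the training data or by decompiling individual neurons.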

The emergence of symbolic representations is natural whenever one wants to tackle complex problems that are inherently associated with huge collections of data. The seminar will promote the idea of intelligent agents that interact with the environment through lifelong learning. This is particularly relevant where machine learning meets computer vision. As part of the turn of neural-symbolic integration towards practical applications, the seminar will also address the evaluation of algorithms and methods. This is another fundamental issue: one of the most serious drawbacks in our field is the lack of relevant and systematic mechanisms for evaluating research. New ideas in this direction will be discussed, with challenges posed and demonstrations given and evaluated.

The seminar will mark the 10th anniversary of the workshop series on neural-symbolic learning and reasoning (NeSy). NeSy has been gathering members of the Artificial Intelligence, Neural Computation and Cognitive Science communities since 2005, but only for one day at a time. At the last NeSy workshop it became clear that this was not enough: these communities share many common goals and aspirations, yet remain largely disconnected in the organization, publication and sharing of research results and systems. The desire of many at NeSy to go deeper into the understanding of the main positions and issues, and to collaborate in a truly multidisciplinary way, using computer vision as a catalyst towards achieving specific objectives, has prompted us to put together this Dagstuhl seminar. We hope you will be able to attend and contribute to the seminar.


Neural-symbolic computation aims at building rich computational models and systems through the integration of connectionist learning and sound symbolic reasoning [1,2]. Over the last three decades, neural networks have been shown to be effective in the implementation of robust large-scale experimental learning applications. Logic-based, symbolic knowledge representation and reasoning have always been at the core of Artificial Intelligence (AI) research. More recently, the use of deep learning algorithms has led to notably efficient applications, with performance comparable to that of humans, in particular in computer image and vision understanding and natural language processing tasks [3,4,5]. Further, advances in fMRI allow scientists to gain a better understanding of neural functions, leading to realistic neural-computational models. Therefore, the gathering of researchers from several communities seemed fitting at this stage of research in neural computation and machine learning, cognitive science, applied logic, and visual information processing. The seminar was an appropriate meeting for the discussion of relevant issues concerning the development of rich intelligent systems and models which can, for instance, integrate learning and reasoning or learning and vision. In addition to foundational methods, algorithms and methodologies for neural-symbolic integration, the seminar also showcased a number of applications of neural-symbolic computation.

The meeting also marked the 10th anniversary of the workshop series on neural-symbolic learning and reasoning (NeSy), held yearly since 2005 at IJCAI, AAAI or ECAI. The NeSy workshop typically lasted only one day at these major conferences, and it became clear that, given the many common goals and aspirations shared by the AI, cognitive science, machine learning, and applied logic communities, a longer meeting spanning a full week was necessary. The desire of many at NeSy to go deeper into the understanding of the main positions and issues, and to collaborate in a truly multidisciplinary way, using several applications (e.g. natural language processing, ontology reasoning, computer image and vision understanding, multimodal learning, knowledge representation and reasoning) towards achieving specific objectives, prompted us to put together this Dagstuhl seminar marking the 10th anniversary of the workshop.

Further, neural-symbolic computation offers an integrated methodological perspective, as it draws from both neuroscience and cognitive systems. In summary, neural-symbolic computation is a promising approach, from both a methodological and a computational perspective, to meeting the need for effective knowledge representation, reasoning and learning systems. The representational generality of neural-symbolic integration (the ability to represent, learn and reason about several symbolic systems) and its learning robustness provide interesting opportunities, leading to adequate forms of knowledge representation, be they purely symbolic or hybrid combinations involving probabilistic or numerical representations.

The seminar tackled diverse applications in computer vision and image understanding, natural language processing, the semantic web and big data. Novel approaches needed to tackle such problems, such as lifelong machine learning [6], connectionist applied logics [1,2], deep learning [4], relational learning [7] and cognitive computation techniques, were also extensively analyzed during the seminar. The abstracts, discussions and open problems listed below briefly summarize a week of intense scientific debate, illustrating the productive atmosphere fostered by the Dagstuhl setting. Finally, a forthcoming article describing relevant challenges and open problems will be published at the Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches at the AAAI Spring Symposium Series, to be held at Stanford in March 2015 [8]. That article adds relevant content and a view of the area, illustrating a richness that may indeed lead to cognitive models integrating learning and reasoning effectively, as foreseen by Valiant [9].

Finally, we see neural-symbolic computation as a research area that reaches out to distinct communities: computer science, neuroscience, and cognitive science. By seeking a fusion of competing views, it can benefit from interdisciplinary results. This contributes to novel ideas and collaboration, opening interesting research avenues involving knowledge representation and reasoning, hybrid combinations of probabilistic and symbolic representations, and several topics in machine learning, which can lead both to the construction of sound intelligent systems and to the understanding and modelling of cognitive and brain processes.


  1. Artur S. d'Avila Garcez, Luis C. Lamb, and Dov M. Gabbay, Neural-Symbolic Cognitive Reasoning. Cognitive Technologies, Springer, 2009.
  2. Barbara Hammer, Pascal Hitzler (Eds.): Perspectives of Neural-Symbolic Integration. Studies in Computational Intelligence 77, Springer 2007.
  3. D.C. Ciresan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image Classification. IEEE Conf. on Computer Vision and Pattern Recognition CVPR 2012.
  4. G.E. Hinton, S. Osindero, and Y. Teh, A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554, 2006.
  5. Abdel-rahman Mohamed, George Dahl & Geoffrey Hinton. Acoustic Modeling Using Deep Belief Networks. IEEE Transactions on Audio, Speech, and Language Processing. 20(1):14-22, 2012.
  6. D. Silver, Q. Yang, and L. Li, Lifelong machine learning systems: Beyond learning algorithms. Proceedings of the AAAI Spring Symposium on Lifelong Machine Learning, Stanford University, AAAI, March, 2013, pp. 4-55.
  7. Stephen Muggleton, Luc De Raedt, David Poole, Ivan Bratko, Peter A. Flach, Katsumi Inoue, Ashwin Srinivasan: ILP turns 20 - Biography and future challenges. Machine Learning, 86(1):3-23, 2012.
  8. Artur d'Avila Garcez, Tarek R. Besold, Luc de Raedt, Peter Foeldiak, Pascal Hitzler, Thomas Icard, Kai-Uwe Kuehnberger, Luis C. Lamb, Risto Miikkulainen, Daniel L. Silver. Neural-Symbolic Learning and Reasoning: Contributions and Challenges. Proceedings of the AAAI Spring Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches, Stanford, March 2015.
  9. L.G. Valiant, Knowledge Infusion: In Pursuit of Robustness in Artificial Intelligence. FSTTCS, pp. 415-422, 2008.
Copyright Artur d'Avila Garcez, Marco Gori, Pascal Hitzler, and Luis C. Lamb

  • Tsvi Achler (IBM Almaden Center, US) [dblp]
  • Jim Benvenuto (HRL Labs - Malibu, US)
  • Tarek Richard Besold (Universität Osnabrück, DE) [dblp]
  • Srikanth Cherla (City University - London, GB) [dblp]
  • Claudia d'Amato (University of Bari, IT) [dblp]
  • Artur d'Avila Garcez (City University - London, GB) [dblp]
  • James Christopher Davidson (Google Inc. - Mountain View, US) [dblp]
  • Leo de Penning (TNO Behaviour and Societal Sciences - Soesterberg, NL) [dblp]
  • Luc De Raedt (KU Leuven, BE) [dblp]
  • Natalia Diaz Rodriguez (Turku Centre for Computer Science, FI) [dblp]
  • Dominik Endres (Universität Marburg, DE) [dblp]
  • Jacqueline Fairley (Emory University - Atlanta, US) [dblp]
  • Jerry A. Feldman (ICSI - Berkeley, US) [dblp]
  • Peter Földiak (University of St Andrews, GB) [dblp]
  • Manoel Franca (City University - London, GB) [dblp]
  • Christophe D. M. Gueret (DANS - The Hague, NL) [dblp]
  • Biao Han (NUDT - Hunan, CN) [dblp]
  • Pascal Hitzler (Wright State University - Dayton, US) [dblp]
  • Steffen Hölldobler (TU Dresden, DE) [dblp]
  • Thomas Icard (Stanford University, US) [dblp]
  • Randal A. Koene (Carboncopies - San Francisco, US) [dblp]
  • Kai-Uwe Kühnberger (Universität Osnabrück, DE) [dblp]
  • Luis C. Lamb (Federal University of Rio Grande do Sul, BR) [dblp]
  • Francesca Alessandra Lisi (University of Bari, IT) [dblp]
  • Dragos Margineantu (Boeing Research & Technology - Seattle, US) [dblp]
  • Vivien Mast (Universität Bremen, DE) [dblp]
  • Risto Miikkulainen (University of Texas - Austin, US) [dblp]
  • Andrey Mokhov (Newcastle University, GB) [dblp]
  • Bernd Neumann (Universität Hamburg, DE) [dblp]
  • Günther Palm (Universität Ulm, DE) [dblp]
  • Alan Perotti (University of Turin, IT) [dblp]
  • Gadi Pinkas (Or Yehudah, IL & Bar Ilan University, IL) [dblp]
  • Subramanian Ramamoorthy (University of Edinburgh, GB) [dblp]
  • Luciano Serafini (Bruno Kessler Foundation - Trento, IT) [dblp]
  • Daniel L. Silver (Acadia University - Wolfville, CA) [dblp]
  • Son Tran (City University - London, GB) [dblp]
  • Joshua Welch (University of North Carolina at Chapel Hill, US) [dblp]
  • Mark Wernsdorfer (Universität Bamberg, DE) [dblp]
  • Thomas Wischgoll (Wright State University - Dayton, US) [dblp]

Related Seminars
  • Dagstuhl Seminar 17192: Human-Like Neural-Symbolic Computing (2017-05-07 - 2017-05-12) (Details)

  • artificial intelligence / robotics
  • computer graphics / computer vision

  • Cognitive agents
  • cognitive robotics
  • visual intelligence
  • multimodal learning
  • emergence
  • symbol grounding
  • complex networks
  • practical reasoning
  • commonsense reasoning
  • action description