https://www.dagstuhl.de/17192

May 7 – 12, 2017, Dagstuhl Seminar 17192

Human-Like Neural-Symbolic Computing

Organizers

Tarek Richard Besold (Universität Bremen, DE)
Artur d'Avila Garcez (City, University of London, GB)
Ramanathan V. Guha (Los Altos Hills, US)
Luis C. Lamb (Federal University of Rio Grande do Sul, BR)

Information about this Dagstuhl Seminar is provided by

Dagstuhl Service Team

Documents

Dagstuhl Report, Volume 7, Issue 5
Motivation text
List of participants
Shared documents
Dagstuhl's Impact: documents available
Program of the Dagstuhl Seminar [pdf]

Summary

The underlying idea of Human-Like Computing is to incorporate into Computer Science aspects of how humans learn, reason and compute. Driven by the scientific trend towards big data, data science methods and techniques have achieved industrial relevance in a number of areas, from retail to health, by extracting insight from large data collections. Notably, neural networks have been successful and efficient at large-scale language modelling, speech recognition, and image, video and sensor data analysis [3, 12, 15]. Human beings, on the other hand, are excellent at learning from very few examples, and are capable of articulating explanations and resolving inconsistencies through reasoning and communication [7, 9, 16, 17].

Despite the recent impact of deep learning, only limited progress has been made towards uncovering the principles and mechanisms that underlie language and vision understanding. With this motivation, the seminar brought together not only computer scientists, but also specialists in artificial intelligence (AI), cognitive science, machine learning, knowledge representation and reasoning, computer vision, neural computation and natural language processing. In particular, the methodology of neural-symbolic computation [4, 7, 12], which can offer a principled interface between the relevant fields, especially symbolic AI and neural computation, was adopted in an attempt to offer a new perspective on reconciling large-scale modelling with human-level understanding, and thus to build a roadmap for principles and applications of Human-Like Neural-Symbolic Computing.

The techniques and methods of neural-symbolic computation have already been applied effectively to a number of areas, leading to developments in deep learning, data science and human-like computing [3]. For instance, neural-symbolic integration methods have been applied to temporal knowledge evolution in dynamic scenarios [6, 10, 14], action learning in video understanding [10], uncertainty learning and reasoning [1], argument learning in multiagent scenarios [7, 8], hardware and software verification and learning [2], ontology learning [13] and distributed temporal deep learning in general, with several applications in computer science [2, 10, 14].

Specifically, in this Dagstuhl Seminar we aimed at: (i) building better bridges between symbolic and sub-symbolic reasoning and learning, and between big data and human-like learning; (ii) comparing and evaluating the explanatory capacity of language modelling tools and techniques; (iii) designing and applying knowledge extraction methods and techniques towards life-long learning and transfer learning.

The seminar consisted of contributed and invited talks, breakout and joint group discussion sessions, and scientific hackathons. After each presentation or discussion session, open problems were identified and questions were raised. The area is clearly growing in importance, given recent advances in Artificial Intelligence and Machine Learning. In particular, the need for explainability in AI poses relevant questions for learning methodologies, including deep learning. In summary, the main research directions identified by the participants are:

  • Explainable AI: The recent success of deep learning in vision and language processing, combined with the growing complexity of big data applications, has led to the need for explainable AI models. In neural-symbolic computing, rule extraction, interpretability and comprehensibility, leading to the development of integrated systems, are among the principled alternatives for driving these efforts [5, 7], as discussed in the Explainability hackathon (a minimal rule-extraction sketch is given after this list). Furthermore, the concept of modularity in multimodal learning in deep networks is crucial to the development of the field and can help achieve knowledge extraction (as identified in [5, 6]), contributing to effective knowledge extraction methods towards explainable AI, as discussed in the deep learning with symbols hackathon.
  • Hybrid Cognitive Architectures: The development of cognitive architectures capable of simulating and explaining aspects of human cognition also remains an important research endeavour. Some cognitive architectures typically employ symbolic representations, whereas others rely on neural simulations. The integration of these models remains a challenge, and there are benefits in combining the accomplishments of both paradigms, as identified in the cognitive architectures hackathon.
  • Statistical Relational Learning: Logic Tensor Networks (LTNs) [11] provide a model that integrates symbolic knowledge (encoded as first-order logic relations) and subsymbolic knowledge (represented as feature vectors). LTNs enable the infusion of relational knowledge into deep networks, as well as knowledge completion and distillation by querying the networks (a schematic grounding is sketched immediately below). A number of challenges remain in integrating, explaining and computing symbolic knowledge in deep networks. Both LTNs [11] and Connectionist Modal and Temporal Logics [6, 7, 14] offer effective alternatives towards these research challenges, as explored in the LTN hackathon.
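
The schematic grounding referred to in the Statistical Relational Learning item can be pictured as follows. This Python fragment is a minimal sketch of the Logic Tensor Network idea [11], not the authors' implementation: constants are grounded as feature vectors, a predicate becomes a differentiable function into [0, 1], and fuzzy connectives combine truth degrees, so that logical knowledge can serve as a differentiable objective. All names, dimensions and parameter values below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Groundings: each constant (e.g. an object or image region) is a vector.
    ground = {"a": rng.normal(size=4), "b": rng.normal(size=4)}

    # A unary predicate P is grounded as sigmoid(w . x + c); in a real LTN the
    # parameters w and c would be learned, here they are random placeholders.
    w_P, c_P = rng.normal(size=4), 0.0
    def P(x):
        return 1.0 / (1.0 + np.exp(-(w_P @ x + c_P)))

    # Product fuzzy semantics for the connectives.
    def AND(u, v):
        return u * v
    def IMPLIES(u, v):          # Reichenbach implication
        return 1.0 - u + u * v

    # Truth degree of the formula P(a) AND (P(a) -> P(b)); maximising such
    # degrees over the predicate parameters is how symbolic knowledge is
    # infused into the network.
    print(AND(P(ground["a"]), IMPLIES(P(ground["a"]), P(ground["b"]))))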

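To make the rule extraction mentioned in the Explainable AI item more concrete, the following Python fragment is a minimal, illustrative sketch and not the specific methods of [5, 7]: it reads off a sufficient IF-THEN rule from a single trained logistic unit by greedily collecting its strongest positively-weighted inputs. The feature names, weights and threshold are made-up assumptions.

    import numpy as np

    def extract_rule(weights, bias, feature_names, threshold=0.5):
        # Greedily add the strongest positively-weighted features until the
        # unit's output already exceeds `threshold`; the collected features
        # form the antecedent of a sufficient IF-THEN rule.
        order = np.argsort(-weights)
        logit, antecedents = bias, []
        for i in order:
            if weights[i] <= 0:
                break
            logit += weights[i]
            antecedents.append(feature_names[i])
            if 1.0 / (1.0 + np.exp(-logit)) >= threshold:
                return "IF " + " AND ".join(antecedents) + " THEN class = 1"
        return "no sufficient positive rule found"

    # Hypothetical trained unit over three binary input features.
    w, b = np.array([2.1, 1.3, -0.7]), -1.5
    print(extract_rule(w, b, ["has_fever", "has_cough", "vaccinated"]))
    # -> IF has_fever THEN class = 1
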
The seminar built upon previous seminars and workshops on the integration of computational learning and symbolic reasoning, such as the Neural-Symbolic Learning and Reasoning (NeSy) workshop series, and the previous Dagstuhl Seminar 14381: Neural-Symbolic Learning and Reasoning [5].

References

  1. Tarek R. Besold, Artur S. d’Avila Garcez, Keith Stenning, Leendert W. N. van der Torre, Michiel van Lambalgen: Reasoning in Non-probabilistic Uncertainty: Logic Programming and Neural-Symbolic Computing as Examples. Minds and Machines, 27(1): 37-77, 2017.
  2. Rafael V. Borges, Artur S. d’Avila Garcez, Luis C. Lamb. Learning and Representing Temporal Knowledge in Recurrent Networks. IEEE Trans. Neural Networks and Learning Systems 22(12):2409-2421, Dec. 2011. IEEE Press.
  3. A. d’Avila Garcez, Tarek R. Besold, Luc De Raedt, Peter Földiak, Pascal Hitzler, Thomas Icard, Kai-Uwe Kühnberger, Luis C. Lamb, Risto Miikkulainen, Daniel L. Silver. Neural-Symbolic Learning and Reasoning: Contributions and Challenges. Proceedings of the AAAI Spring Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches. Stanford Univ., March 2015, pp. 18-21, AAAI press, 2015.
  4. Artur S. d’Avila Garcez, Dov M. Gabbay, Krysia Broda. Neural-Symbolic Learning System: Foundations and Applications. Springer-Verlag, New York, Inc., USA, 2002.
  5. A. S. d’Avila Garcez, Marco Gori, Pascal Hitzler, Luís C. Lamb: Neural-Symbolic Learning and Reasoning (Dagstuhl Seminar 14381). Dagstuhl Reports 4(9): 50-84, 2015.
  6. Artur S. d’Avila Garcez and Luis C. Lamb. A Connectionist Computational Model for Epistemic and Temporal Reasoning. Neural Computation, 18(7):1711-1738, 2006.
  7. Artur S. d’Avila Garcez, Luis C. Lamb and Dov M. Gabbay. Neural-Symbolic Cognitive Reasoning. Springer Publishing Company, 2009. ISBN: 9783540732457.
  8. A. d’Avila Garcez, D.M. Gabbay and Luis C. Lamb: A neural cognitive model of argumentation with application to legal inference and decision making. Journal of Applied Logic, 12(2):109-127, 2014.
  9. M. de Kamps and F. van de Velde. From neural dynamics to true combinatorial structures. Behavioral and Brain Sciences, 20, pp. 88-108, 2006.
  10. H.L. de Penning, Artur S. d’Avila Garcez, Luis C. Lamb, John-Jules Ch. Meyer: A Neural-Symbolic Cognitive Agent for Online Learning and Reasoning. Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI 2011. Pages 1653-1658, 2011.
  11. Ivan Donadello, Luciano Serafini, and Artur d’Avila Garcez. Logic Tensor Networks for Semantic Image Interpretation. Proceedings of the 26th International Joint Conference on Artificial Intelligence, 2017.
  12. Barbara Hammer, Pascal Hitzler (Eds.): Perspectives of Neural-Symbolic Integration. Studies in Computational Intelligence 77, Springer 2007.
  13. P. Hitzler, S. Bader, and A. d’Avila Garcez. Ontology learning as a use case for neural-symbolic integration. In Proc. Workshop on Neural-Symbolic Learning and Reasoning, NeSy’05 at IJCAI-05, 2005.
  14. Luis C. Lamb, R.V. Borges and A.S. d’Avila Garcez. A connectionist cognitive model for temporal synchronisation and learning. Proc. AAAI Conference on Artificial Intelligence AAAI-07, pages 827-832, AAAI Press, 2007.
  15. Y. LeCun, Y. Bengio, G. Hinton. Deep Learning. Nature, 521 (7553): 436-444, 2015.
  16. G.F. Marcus, S. Vijayan, S.B. Rao, and P.M. Vishton. Rule learning by seven-month-old infants. Science, 283(5398):77–80, 1999.
  17. K. Stenning and M. van Lambalgen. Human reasoning and Cognitive Science. MIT Press, Cambridge, MA, 2008.
  18. Alan M. Turing. Computing Machinery and Intelligence, Mind, LIX (236): 433–460, 1950.

License

  Creative Commons BY 3.0 Unported license
  Luis C. Lamb, Tarek R. Besold, and Artur d'Avila Garcez
