Dagstuhl Seminar 17192

Human-Like Neural-Symbolic Computing

(May 7 – May 12, 2017)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/17192

Program

Motivation

Human-like computing is a new area of research that seeks to incorporate into computer science aspects of how humans learn, reason and compute, whilst recognising the importance of recent trends in big data. Data science methods and techniques have achieved industrial relevance in a number of areas, from retail to health, by obtaining insight from large data collections. Notably, neural networks have been successful and efficient at large-scale language modelling, speech recognition, and image, video and sensor data analysis. Human beings, on the other hand, are excellent at learning from very few data examples, and are capable of articulating explanations and resolving inconsistencies through communication.

We argue that data science requires an ability to explain its insights. As a case in point, despite the success of neural networks at performing language modelling, very little progress has been made at understanding the principles and mechanisms underlying language processing. Techniques of knowledge extraction ought to be investigated and applied in the context of data science success stories to enable a systematic study and the data-driven formulation of sound theories, capable of explaining the insights and moving the research forward. In addition to knowledge extraction models and algorithms, various aspects of human-machine and human-agent interaction and communication will be fundamental to the process of achieving human-level understanding and better scientific theories, as well as systems capable of reasoning about what has been learned.

In this Dagstuhl Seminar, we will bring together world-leading researchers and rising stars in the areas of neural computation, language modelling, artificial intelligence (AI), knowledge extraction, computational logic, machine learning, neural-symbolic computing, cognitive psychology, cognitive science and human-computer interaction to discuss, investigate and formulate the requirements and fundamental applications of human-like computing.

Language modelling tasks will be considered as a first case study, and the methodology of neural-symbolic computation will serve to underpin the discussions on knowledge extraction, representation and learning, and as a baseline for comparison with alternative approaches, such as statistical relational AI.

Specifically, this Dagstuhl Seminar seeks to produce (i) better bridges between symbolic and sub-symbolic reasoning and learning, and between big data and human-like learning; (ii) comparative analyses and evaluations of the explanatory capacity of language modelling tools and techniques; (iii) designs and applications of knowledge extraction methods and techniques towards life-long learning and transfer learning between areas of application.

Expected outcomes include a roadmap towards human-like computing, a manifesto in a journal special volume, and the definition of a human-like neural-symbolic challenge and evaluation framework.

Copyright Tarek R. Besold, Artur d'Avila Garcez, Ramanathan V. Guha, and Luis C. Lamb

Summary

The underlying idea of Human-Like Computing is to incorporate into computer science aspects of how humans learn, reason and compute, while recognising the relevance of recent scientific trends in big data. Data science methods and techniques have achieved industrial relevance in a number of areas, from retail to health, by obtaining insight from large data collections. Notably, neural networks have been successful and efficient at large-scale language modelling, speech recognition, and image, video and sensor data analysis [3, 12, 15]. Human beings, on the other hand, are excellent at learning from very few data examples, and are capable of articulating explanations and resolving inconsistencies through reasoning and communication [7, 9, 16, 17].

Despite the recent impact of deep learning, limited progress has been made towards understanding the principles and mechanisms underlying language and vision understanding. Under this motivation, the seminar brought together not only computer scientists, but also specialists in artificial intelligence (AI), cognitive science, machine learning, knowledge representation and reasoning, computer vision, neural computation and natural language processing. In particular, the methodology of neural-symbolic computation [4, 7, 12], which can offer a principled interface between the relevant fields, especially symbolic AI and neural computation, was adopted in an attempt to offer a new perspective of reconciling large-scale modelling with human-level understanding, thus building a roadmap for principles and applications of Human-Like Neural-Symbolic Computing.

The techniques and methods of neural-symbolic computation have already been applied effectively to a number of areas, leading to developments in deep learning, data science and human-like computing [3]. For instance, neural-symbolic integration methods have been applied to temporal knowledge evolution in dynamic scenarios [6, 10, 14], action learning in video understanding [10], uncertainty learning and reasoning [1], argument learning in multiagent scenarios [7, 8], hardware and software verification and learning [2], ontology learning [13] and distributed temporal deep learning in general, with several applications in computer science [2, 10, 14].

Specifically, in this Dagstuhl Seminar we aimed at: (i) building better bridges between symbolic and sub-symbolic reasoning and learning, and between big data and human-like learning; (ii) comparing and evaluating the explanatory capacity of language modelling tools and techniques; (iii) designing and applying knowledge extraction methods and techniques towards life-long learning and transfer learning.

The seminar consisted of contributed and invited talks, breakout and joint group discussion sessions, and scientific hackathons. After each presentation or discussion session, open problems were identified and questions were raised. The area is clearly growing in importance, given recent advances in Artificial Intelligence and Machine Learning. In particular, the need for explainability in AI poses relevant questions to learning methodologies, including deep learning. In summary, the main research directions identified by participants are:

  • Explainable AI: The recent success of deep learning in vision and language processing, together with the growing complexity of big data applications, has led to the need for explainable AI models. In neural-symbolic computing, rule extraction, interpretability and comprehensibility, leading to the development of integrated systems, are among the principled alternatives for driving these efforts [5, 7], as discussed in the Explainability hackathon. Furthermore, the concept of modularity in multimodal learning in deep networks is crucial to the development of the field and, as identified in [5, 6], can lead to effective knowledge extraction methods towards explainable AI, as discussed in the deep learning with symbols hackathon.
  • Hybrid Cognitive Architectures: The development of cognitive architectures capable of simulating and explaining aspects of human cognition also remains an important research endeavour. Some cognitive architectures typically consider symbolic representations, whereas others employ neural simulations. The integration of these models remains a challenge, and there are benefits in integrating the accomplishments of both paradigms, as identified in the cognitive architectures hackathon.
  • Statistical Relational Learning: Logic Tensor Networks (LTNs) [11] provide a model that integrates symbolic knowledge (encoded as first-order logic relations) and subsymbolic knowledge (represented as feature vectors). LTNs enable the infusion of relational knowledge into deep networks, as well as knowledge completion and distillation through querying the networks. A number of challenges remain in integrating, explaining and computing symbolic knowledge in deep networks. Both LTNs [11] and Connectionist Modal and Temporal Logics [6, 7, 14] offer effective alternatives towards these research challenges, as explored in the LTN hackathon.
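The core idea behind LTNs — grounding predicates as differentiable functions that map feature vectors to truth degrees in [0, 1], and evaluating logical formulas with fuzzy connectives — can be illustrated in a few lines. The sketch below is a deliberately simplified, hypothetical toy (plain NumPy, a single linear layer per predicate, Łukasiewicz implication, mean-based quantification), not the tensor-network grounding or training procedure of [11]; the predicate names are illustrative only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Predicate:
    """Grounds a logical predicate as a (here: linear) network mapping
    feature vectors to truth degrees in [0, 1]; a simplified stand-in
    for the tensor-based grounding used by LTNs."""
    def __init__(self, dim, rng):
        self.w = rng.standard_normal(dim)
        self.b = 0.0

    def __call__(self, x):
        # x: (n, dim) matrix of individuals -> (n,) vector of truth degrees
        return sigmoid(x @ self.w + self.b)

# Fuzzy connectives give real-valued semantics to logical formulas.
def implies(a, b):
    # Lukasiewicz implication: min(1, 1 - a + b), elementwise
    return np.minimum(1.0, 1.0 - a + b)

def forall(truths):
    # Universal quantifier approximated as the mean truth degree
    return truths.mean()

rng = np.random.default_rng(0)
domain = rng.standard_normal((5, 3))        # five individuals, 3 features each
Penguin, Bird = Predicate(3, rng), Predicate(3, rng)

# Truth degree of the axiom: forall x. Penguin(x) -> Bird(x)
axiom = forall(implies(Penguin(domain), Bird(domain)))
```

In a full LTN, `1 - axiom` would serve as a loss term, so that gradient descent adjusts the predicate groundings to make the symbolic axioms (approximately) true on the data — which is how relational knowledge is infused into the network.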

The seminar builds upon previous seminars and workshops on the integration of computational learning and symbolic reasoning, such as the Neural-Symbolic Learning and Reasoning (NeSy) workshop series, and the previous Dagstuhl Seminar 14381: Neural-Symbolic Learning and Reasoning [5].

References

  1. Tarek R. Besold, Artur S. d’Avila Garcez, Keith Stenning, Leendert W. N. van der Torre, Michiel van Lambalgen: Reasoning in Non-probabilistic Uncertainty: Logic Programming and Neural-Symbolic Computing as Examples. Minds and Machines, 27(1): 37-77, 2017.
  2. Rafael V. Borges, Artur S. d’Avila Garcez, Luis C. Lamb. Learning and Representing Temporal Knowledge in Recurrent Networks. IEEE Trans. Neural Networks and Learning Systems 22(12):2409-2421, Dec. 2011. IEEE Press.
  3. A. d’Avila Garcez, Tarek R. Besold, Luc De Raedt, Peter Földiak, Pascal Hitzler, Thomas Icard, Kai-Uwe Kühnberger, Luis C. Lamb, Risto Miikkulainen, Daniel L. Silver. Neural-Symbolic Learning and Reasoning: Contributions and Challenges. Proceedings of the AAAI Spring Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches. Stanford Univ., March 2015, pp. 18-21, AAAI press, 2015.
  4. Artur S. d’Avila Garcez, Dov M. Gabbay, Krysia Broda. Neural-Symbolic Learning System: Foundations and Applications. Springer-Verlag, New York, Inc., USA, 2002.
  5. A. S. d’Avila Garcez, Marco Gori, Pascal Hitzler, Luís C. Lamb: Neural-Symbolic Learning and Reasoning (Dagstuhl Seminar 14381). Dagstuhl Reports 4(9): 50-84, 2015.
  6. Artur S. d’Avila Garcez and Luis C. Lamb. A Connectionist Computational Model for Epistemic and Temporal Reasoning. Neural Computation, 18(7):1711-1738, 2006.
  7. Artur S. d’Avila Garcez, Luis C. Lamb and Dov M. Gabbay. Neural-Symbolic Cognitive Reasoning. Springer Publishing Company, 2009. ISBN: 9783540732457.
  8. A. d’Avila Garcez, D.M. Gabbay and Luis C. Lamb: A neural cognitive model of argumentation with application to legal inference and decision making. Journal of Applied Logic, 12(2):109-127, 2014.
  9. M. de Kamps and F. van de Velde. From neural dynamics to true combinatorial structures. Behavioral and Brain Sciences, 20, pp. 88-108, 2006.
  10. H.L. de Penning, Artur S. d’Avila Garcez, Luis C. Lamb, John-Jules Ch. Meyer: A Neural-Symbolic Cognitive Agent for Online Learning and Reasoning. Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI 2011. Pages 1653-1658, 2011.
  11. Ivan Donadello, Luciano Serafini, and Artur d’Avila Garcez. Logic Tensor Networks for Semantic Image Interpretation. Proceedings of the 26th International Joint Conference on Artificial Intelligence, 2017.
  12. Barbara Hammer, Pascal Hitzler (Eds.): Perspectives of Neural-Symbolic Integration. Studies in Computational Intelligence 77, Springer 2007.
  13. P. Hitzler, S. Bader, and A. d’Avila Garcez. Ontology learning as a use case for neural-symbolic integration. In Proc. Workshop on Neural-Symbolic Learning and Reasoning, NeSy’05 at IJCAI-05, 2005.
  14. Luis C. Lamb, R.V. Borges and A.S. d’Avila Garcez. A connectionist cognitive model for temporal synchronisation and learning. Proc. AAAI Conference on Artificial Intelligence AAAI-07, pages 827-832, AAAI Press, 2007.
  15. Y. LeCun, Y. Bengio, G. Hinton. Deep Learning. Nature, 521 (7553): 436-444, 2015.
  16. G.F. Marcus, S. Vijayan, S.B. Rao, and P.M. Vishton. Rule learning by seven-month-old infants. Science, 283(5398):77–80, 1999.
  17. K. Stenning and M. van Lambalgen. Human reasoning and Cognitive Science. MIT Press, Cambridge, MA, 2008.
  18. Alan M. Turing. Computing Machinery and Intelligence, Mind, LIX (236): 433–460, 1950.
Copyright Luis C. Lamb, Tarek R. Besold, and Artur d'Avila Garcez

Participants
  • Lucas Bechberger (Universität Osnabrück, DE) [dblp]
  • Tarek Richard Besold (Universität Bremen, DE) [dblp]
  • Jelmer Borst (University of Groningen, NL)
  • Artur d'Avila Garcez (City, University of London, GB) [dblp]
  • James Christopher Davidson (Google Inc. - Mountain View, US) [dblp]
  • Marc de Kamps (University of Leeds, GB) [dblp]
  • Derek Doran (Wright State University - Dayton, US) [dblp]
  • Ulrich Furbach (Universität Koblenz-Landau, DE) [dblp]
  • Raquel Garrido Alhama (University of Amsterdam, NL)
  • Marco Gori (University of Siena, IT) [dblp]
  • Pascal Hitzler (Wright State University - Dayton, US) [dblp]
  • Dieuwke Hupkes (University of Amsterdam, NL) [dblp]
  • Caroline Jay (University of Manchester, GB) [dblp]
  • Kristian Kersting (TU Darmstadt, DE) [dblp]
  • Kai-Uwe Kühnberger (Universität Osnabrück, DE) [dblp]
  • Oliver Kutz (Free University of Bozen-Bolzano, IT) [dblp]
  • Luis C. Lamb (Federal University of Rio Grande do Sul, BR) [dblp]
  • Martha Lewis (University of Oxford, GB) [dblp]
  • Isaac Noble (Playground Global Inc., US) [dblp]
  • Sarah Schulz (Universität Stuttgart, DE) [dblp]
  • Katja Seeliger (Radboud University Nijmegen, NL) [dblp]
  • Luciano Serafini (Bruno Kessler Foundation - Trento, IT) [dblp]
  • Daniel L. Silver (Acadia University - Wolfville, CA) [dblp]
  • Michael Spranger (Sony CSL - Tokyo, JP) [dblp]
  • Keith Stenning (University of Edinburgh, GB) [dblp]
  • Niels A. Taatgen (University of Groningen, NL) [dblp]
  • Serge Thill (University of Skövde, SE) [dblp]
  • Frank Van der Velde (University of Twente - Enschede, NL & Leiden University, NL) [dblp]
  • Tillman Weyde (City, University of London, GB) [dblp]

Related Seminars
  • Dagstuhl Seminar 14381: Neural-Symbolic Learning and Reasoning (2014-09-14 – 2014-09-19)

Classification
  • artificial intelligence / robotics
  • society / human-computer interaction
  • world wide web / internet

Keywords
  • human-like computing
  • neural-symbolic integration
  • natural language processing
  • cognitive agents
  • multimodal learning