
Dagstuhl-Seminar 26282

KR Meets XAI: Bridging Symbolic and Neuro-Symbolic AI for True Explainability

(Jul 05 – Jul 10, 2026)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/26282

Organizers
  • Shqiponja Ahmetaj (TU Wien, AT)
  • Pascal Hitzler (Kansas State University - Manhattan, US)
  • Patrick Koopmann (Vrije Universiteit Amsterdam, NL)
  • Axel-Cyrille Ngonga Ngomo (Universität Paderborn, DE)

Contact

Motivation

Explainability remains a key challenge and a focal point of research despite recent breakthroughs in artificial intelligence (AI). This is evidenced by the growing number of workshops, conference tracks, and research initiatives exploring explainability from different perspectives. As a result, a broad landscape of methods and ideas has emerged in many areas of AI and related disciplines. Within this landscape, many approaches to explainable AI (XAI) build on symbolic AI, that is, they use knowledge representation (KR) as a means to achieve explainability. This includes approaches based on argumentation frameworks, abduction methods that identify the most relevant features for a given classification, and approaches that extract description logic axioms from black-box classifiers or compute explanations post hoc using knowledge graphs (KGs). Symbolic learning methods, such as rule mining, are often promoted as more explainable alternatives to purely data-driven methods. Neuro-symbolic AI combines symbolic and sub-symbolic models, which is sometimes motivated as a means to obtain scalable yet explainable systems. All these approaches operate on the premise that incorporating a symbolic layer makes these systems more explainable.
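To make the abduction idea mentioned above concrete, here is a toy sketch of an abductive explanation over a rule-based classifier: it searches for a smallest subset of the observed features that still entails a given classification. All rule and feature names are hypothetical, chosen purely for illustration; real abduction methods work over far richer logics than this brute-force propositional example.

```python
from itertools import combinations

# Toy Horn-rule classifier: each rule maps a set of premise features to a
# conclusion. (Hypothetical rules, for illustration only.)
RULES = {
    frozenset({"has_fever", "has_cough"}): "flu",
    frozenset({"has_rash"}): "allergy",
}

def classify(features):
    """Return all conclusions whose rule premises are satisfied."""
    return {concl for premises, concl in RULES.items() if premises <= features}

def abductive_explanation(features, target):
    """Smallest subset of the observed features that still entails target.

    Brute-force search over subsets, from smallest to largest; fine for a
    handful of features, not for realistic knowledge bases.
    """
    feats = sorted(features)
    for size in range(1, len(feats) + 1):
        for subset in combinations(feats, size):
            if target in classify(frozenset(subset)):
                return set(subset)
    return None  # target is not entailed by the observation at all

observation = {"has_fever", "has_cough", "has_rash", "is_tired"}
print(abductive_explanation(observation, "flu"))
# → {'has_cough', 'has_fever'}: the irrelevant features are dropped,
# which is exactly the explanatory value a symbolic layer is meant to add.
```

The minimal subset acts as the explanation: it tells the user which observed features were actually responsible for the classification.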

However, while symbolic AI is in theory explainable by design, in reality the use of symbolic constructs alone does not guarantee explainability for end users. Indeed, there is a growing body of research on explanation methods in various areas of symbolic AI, such as description logics, Datalog and its extensions, logic programming, and planning. The extent to which such approaches can be adapted for tasks beyond traditional KR systems, for instance for XAI in general, remains to be investigated. On the other hand, research in XAI has designed different, sometimes more intuitive explanation strategies that align more closely with human reasoning but have yet to be systematically applied to symbolic systems. Insights from XAI might help make the more formal, logic-based explanations developed for KR not only rigorous but also user-friendly.

The goal of the Dagstuhl Seminar is to bridge the gap between formal KR-based explanations and the broader goals of XAI. By bringing together researchers from both communities, we aim to develop explanation methods that retain formal rigor while becoming more intuitive and usable in real-world applications. The seminar will focus on explanation tasks, methods, and procedures across disciplines; hybrid and neuro-symbolic approaches; visualization and interaction for symbolic systems; and evaluation criteria for explanation quality. A key objective is to establish a shared understanding of current explanation methods, their goals and limitations, and to use this foundation to develop a unified perspective. Through this exchange, we want to identify new explanation tasks and foster cross-disciplinary collaboration in explainability research.

Copyright Shqiponja Ahmetaj, Pascal Hitzler, Patrick Koopmann, and Axel-Cyrille Ngonga Ngomo

Classification
  • Artificial Intelligence
  • Logic in Computer Science
  • Machine Learning

Keywords
  • Explainable AI
  • Knowledge Representation
  • Neuro-Symbolic AI
  • Machine Learning