
Research Meeting 24463

Explainable Decision-Making

( Nov 13 – Nov 15, 2024 )

Explainable decision-making refers to methods and practices that allow humans to understand and trust decisions made by systems that use artificial intelligence (AI) and machine learning (ML) components. The topic has gained prominence as AI and ML models have become more complex, making their decision-making processes less transparent and harder to interpret. From a computer science perspective, explainable decision-making encompasses several key aspects: (1) transparency, validating a model's correctness and ensuring it operates as intended; (2) interpretability, i.e., the degree to which a human can understand the cause of a decision made by an AI system; (3) explainability, providing reasons for decisions to end-users in a manner that is meaningful to them; and (4) fairness and bias evaluation, ensuring that AI systems operate fairly across different groups of individuals.
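To make the interpretability aspect concrete, one common family of post-hoc techniques is permutation feature importance: shuffle one input feature across a dataset and measure how much the model's predictive accuracy drops. The sketch below is purely illustrative and not part of the seminar material; the toy "loan approval" model, its feature weights, and the synthetic data are all assumptions made up for this example.

```python
import random

random.seed(0)

# Toy "loan approval" model: by construction, income (weight 0.8)
# dominates age (weight 0.2). Both features are normalized to [0, 1].
def model(income, age):
    return 1 if 0.8 * income + 0.2 * age > 0.5 else 0

# Synthetic dataset; labels come from the model itself, so baseline
# accuracy on the unperturbed data is exactly 1.0.
data = [(random.random(), random.random()) for _ in range(200)]
labels = [model(x, a) for x, a in data]

def accuracy(rows):
    return sum(model(x, a) == y for (x, a), y in zip(rows, labels)) / len(rows)

def permutation_importance(feature_idx):
    """Accuracy drop when one feature column is shuffled across rows."""
    shuffled = [row[feature_idx] for row in data]
    random.shuffle(shuffled)
    perturbed = [
        (s, a) if feature_idx == 0 else (x, s)
        for (x, a), s in zip(data, shuffled)
    ]
    return accuracy(data) - accuracy(perturbed)

imp_income = permutation_importance(0)
imp_age = permutation_importance(1)
print(f"income importance: {imp_income:.2f}, age importance: {imp_age:.2f}")
```

Because the toy model weights income far more heavily than age, shuffling income degrades accuracy much more than shuffling age, which a human can read as "income drives this model's decisions". Libraries such as scikit-learn ship a production version of this idea.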

During the research meeting, researchers will present their work and discuss methods on the topic of explainable decision-making in AI, with a special focus on applications in health AI (including genomics), healthcare services, resilience management, and sustainability assessment. The meeting aims to foster a comprehensive understanding of these areas, share the latest research findings, and envision future directions.

Copyright Sabine Janzen and Wolfgang Maaß