Dagstuhl Seminar 24192

Generalization by People and Machines

(May 05 – May 08, 2024)

Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/24192

Motivation

Today's AI systems have become powerful enough to enter the mainstream, dividing the world between those who believe AI will solve all our problems and those who fear it will be destructive for humanity. Meanwhile, trusting AI remains difficult given its lack of robustness in novel situations, the inconsistency of its outputs, and the opacity of its reasoning process. Adversarial studies have demonstrated that current AI approaches to tasks like visual object detection and text classification are not as robust as hoped. Models struggle to connect situations via higher-order similarities and to perform commonsense reasoning, and their performance is largely correlated with training-data frequencies. Along with informative signals, models also pick up spurious correlations between terms and annotation biases, while remaining insensitive to subtle variations such as negation. These findings have inspired an arms race between robustifying models and breaking their robustness. Building trustworthy AI requires a paradigm shift away from the current oversimplified practice of crafting accuracy-driven models, toward a human-centric design that enhances human ability on manageable tasks, or enables humans and AI to jointly solve complex tasks that are difficult for either alone.

At the core of this problem lies the unrivaled human ability to generalize and abstract. While today's AI can produce a response to any input, its ability to transfer knowledge to novel situations is still limited by oversimplification, as manifested in tasks that involve pragmatics, agent goals, and understanding of narrative structures. Some generalization can clearly be achieved by scaling up data or model complexity, but this approach is hitting a limit, suggesting that something is missing. Recent work has addressed this gap to some extent by proposing modular architectures that generate rationales, track participant states in narratives, model user intent, and include planning objectives in language modeling. Meanwhile, cognitive mechanisms that drive generalization in people, such as reasoning by analogy and deriving prototypes, are popular in cognitive science research but have not gained mainstream adoption in machine learning. This discrepancy is problematic, and since there are currently no venues for cross-disciplinary research on reliable AI generalization, dedicated efforts are needed to bring together in one place generalization experts from different fields within AI as well as from Cognitive Science.

This Dagstuhl Seminar provides a unique opportunity to discuss the discrepancy between human and AI generalization mechanisms and to craft a vision of how to align the two streams in a compelling and promising way that combines the strengths of both. To ensure an effective seminar, we aim to bring together cross-disciplinary perspectives from across the computer and cognitive sciences. Our participants will include experts in Interpretable Machine Learning, Neuro-Symbolic Reasoning, Explainable AI, Commonsense Reasoning, Case-based Reasoning, Analogy, Cognitive Science, and Human-Computer Interaction. Specifically, the seminar will focus on the following questions: How can cognitive mechanisms in people be used to inspire generalization in AI? Which machine learning methods hold the promise to enable such reasoning mechanisms? What is the role of data and knowledge engineering in AI and human generalization? How can we design and model human-AI teams that benefit from their complementary generalization capabilities? How can we evaluate generalization in humans and AI in a satisfactory manner?

Copyright Barbara Hammer, Filip Ilievski, Sascha Saralajew, and Frank van Harmelen

Classification
  • Artificial Intelligence
  • Machine Learning
  • Symbolic Computation

Keywords
  • Interpretable Machine Learning
  • Human-AI Collaboration
  • Cognitive Science
  • Neuro-Symbolic Reasoning
  • Explainability