Dagstuhl Seminar 24192

Generalization by People and Machines

(May 05 – May 08, 2024)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/24192

Motivation

Today's AI systems are powerful to the extent that they have entered the mainstream and divided the world between those who believe AI will solve all our problems and those who fear it will be destructive for humanity. Meanwhile, trusting AI is difficult given its lack of robustness to novel situations, the inconsistency of its outputs, and the limited interpretability of its reasoning process. Adversarial studies have demonstrated that current AI approaches for tasks like visual object detection and text classification are not as robust as hoped. Models struggle to connect situations via higher-order similarities and to perform commonsense reasoning, and their performance is largely correlated with training-data frequencies. Alongside informative signals, models also pick up on spurious correlations between terms and on annotation biases, while remaining insensitive to subtle variations like negation. These findings have inspired an arms race between robustifying models and breaking their robustness. Building trustworthy AI requires a paradigm shift from the current oversimplified practice of crafting accuracy-driven models to a human-centric design that enhances human ability on manageable tasks, or enables humans and AIs to jointly solve complex tasks that are difficult for either alone.
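To make the spurious-correlation and negation problems concrete, here is a minimal, hypothetical sketch (not part of the seminar text): a bag-of-words sentiment classifier trained on a toy dataset in which the word "amazing" occurs only with the positive label. Because the model relies on that surface cue, it classifies the negated phrase "not amazing at all" as positive. The dataset, labels, and scikit-learn setup are all illustrative assumptions.

```python
# Illustrative sketch: a bag-of-words classifier picks up a spurious cue
# ("amazing" => positive) and is insensitive to negation, since negating
# words do not change the features it learned to rely on.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "amazing plot and acting",      # positive
    "amazing soundtrack",           # positive
    "dull and predictable",         # negative
    "boring from start to finish",  # negative
]
train_labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

# The negation flips the meaning, but not the learned word counts.
test = vectorizer.transform(["not amazing at all"])
print(clf.predict(test))  # [1]: predicted positive, despite the negation
```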

At the core of this problem is the unrivaled human ability to generalize and abstract. While today's AI can produce a response to any input, its ability to transfer knowledge to novel situations is still limited by oversimplification practices, as manifested in tasks that involve pragmatics, agent goals, and understanding of narrative structures. Some generalization is clearly enabled by scaling up data or model complexity, but this approach is hitting a limit, suggesting that something is missing. Recent work has addressed this gap to some extent by proposing modular architectures that generate rationales, track participant states in narratives, model user intent, and include planning objectives in language modeling. Meanwhile, cognitive mechanisms that drive generalization in people, like reasoning by analogy and deriving prototypes, are popular in cognitive science research but have not gained mainstream adoption in machine learning. As there are currently no venues for cross-disciplinary research on reliable AI generalization, this discrepancy is problematic and requires a dedicated effort to bring together, in one place, generalization experts from different fields within AI as well as from Cognitive Science.
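As an illustration of one cognitive mechanism named above, the following sketch shows generalization by deriving prototypes: each class is summarized by the mean of its examples, and new inputs are assigned to the class of the nearest prototype. The two synthetic clusters and all numbers are assumptions made for this example; discriminative prototype methods such as learning vector quantization refine prototypes beyond simple averaging.

```python
# Minimal sketch of prototype-based generalization on synthetic data:
# derive one prototype per class (the class mean), then classify new
# inputs by nearest prototype.
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic classes: clusters around (0, 0) and (3, 3).
class_a = rng.normal(loc=0.0, scale=0.5, size=(20, 2))
class_b = rng.normal(loc=3.0, scale=0.5, size=(20, 2))

# "Derive" one prototype per class by averaging its examples.
prototypes = np.stack([class_a.mean(axis=0), class_b.mean(axis=0)])

def classify(x: np.ndarray) -> int:
    """Assign x to the class of the nearest prototype (Euclidean distance)."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(distances))

print(classify(np.array([0.2, -0.1])))  # 0: nearest the first prototype
print(classify(np.array([2.8, 3.1])))   # 1: nearest the second prototype
```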

This Dagstuhl Seminar provides a unique opportunity to discuss the discrepancy between human and AI generalization mechanisms and to craft a vision of how to align the two in a compelling and promising way that combines the strengths of both. To ensure an effective seminar, we aim to bring together cross-disciplinary perspectives across computer science and cognitive science. Our participants will include experts in Interpretable Machine Learning, Neuro-Symbolic Reasoning, Explainable AI, Commonsense Reasoning, Case-based Reasoning, Analogy, Cognitive Science, and Human-Computer Interaction. Specifically, the seminar will focus on the following questions: How can cognitive mechanisms in people be used to inspire generalization in AI? Which machine learning methods hold the promise to enable such reasoning mechanisms? What is the role of data and knowledge engineering for AI and human generalization? How can we design and model human-AI teams that benefit from their complementary generalization capabilities? How can we evaluate generalization in humans and AI in a satisfactory manner?

Copyright Barbara Hammer, Filip Ilievski, Sascha Saralajew, and Frank van Harmelen

Classification
  • Artificial Intelligence
  • Machine Learning
  • Symbolic Computation

Keywords
  • Interpretable Machine Learning
  • Human-AI Collaboration
  • Cognitive Science
  • Neuro-Symbolic Reasoning
  • Explainability