

Dagstuhl Seminar 23122

Deep Continual Learning

(Mar 19 – Mar 24, 2023)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/23122

Organizers

Contact



Program

Summary

Continual learning, also referred to as lifelong learning, is a sub-field of machine learning that focuses on the challenging problem of incrementally training models for sequentially arriving tasks and/or when data distributions vary over time. Such non-stationarity calls for learning algorithms that can acquire new knowledge over time with minimal forgetting of what they have learned previously, transfer knowledge across tasks, and smoothly adapt to new circumstances as needed. This is in contrast with the traditional setting of machine learning, which typically builds on the premise that all data, both for training and testing, are sampled i.i.d. from a single, stationary data distribution.

Deep learning models in particular are in need of continual learning capabilities. A first reason for this is the strong data-dependence of these models. When trained on a stream of data whose underlying distribution changes over time, deep learning models tend to almost fully adapt to the most recently seen data, thereby "catastrophically" forgetting the skills that were learned earlier. Second, continual learning capabilities can be especially beneficial for deep learning models because they can help deal with the very long training times of these models. The current practice in industry is to re-train on a regular basis to add new skills and to prevent previously learned knowledge from becoming outdated. Re-training is time-inefficient, unsustainable, and sub-optimal. Freezing the feature extraction layers is often not an option, as the power of deep learning in many challenging applications, be it computer vision, natural language processing, or audio processing, hinges on the learned representations.
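The forgetting effect described above can be illustrated with a toy experiment. The following sketch (an illustration, not an experiment from the seminar) trains a one-parameter logistic-regression model on a binary task A, then naively continues training on a conflicting task B with reversed labels; after the second phase, accuracy on task A collapses:

```python
import numpy as np

# Toy illustration of catastrophic forgetting: a logistic-regression model
# is trained on task A, then naively on a conflicting task B, after which
# its accuracy on task A collapses.
rng = np.random.default_rng(0)

def make_task(flip_labels):
    # 1-D inputs; the label is the sign of x, reversed for the second task.
    x = rng.normal(size=(200, 1))
    y = (x[:, 0] > 0).astype(float)
    return x, (1.0 - y) if flip_labels else y

def train(w, x, y, lr=0.5, epochs=100):
    # Plain full-batch gradient descent; no continual-learning mechanism.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))
        w = w - lr * x.T @ (p - y) / len(y)
    return w

def accuracy(w, x, y):
    return float(np.mean(((x @ w) > 0) == (y > 0.5)))

xa, ya = make_task(flip_labels=False)   # task A
xb, yb = make_task(flip_labels=True)    # task B (labels reversed)

w = train(np.zeros(1), xa, ya)
acc_a_before = accuracy(w, xa, ya)      # near-perfect on task A

w = train(w, xb, yb)                    # naive sequential training on B
acc_a_after = accuracy(w, xa, ya)       # task A is "catastrophically" forgotten
```

The single weight simply flips sign to fit the most recent task, which is the same dynamic that plays out, in a much higher-dimensional form, in deep networks trained on non-stationary streams.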

The objective of the seminar was to bring together world-class researchers in the field of deep continual learning, as well as in the related fields of online learning, meta-learning, Bayesian deep learning, robotics and neuroscience, to discuss and to brainstorm, and to set the research agenda for years to come.

During the seminar, participants presented new ideas and recent findings from their research in plenary sessions that triggered many interesting discussions. There were also several tutorials that helped create a shared understanding of similarities and differences between continual learning and other related fields. Specifically, the relation with online learning and streaming learning was discussed in detail. Furthermore, there were several breakout discussion sessions in which open research questions and points of controversy within the continual learning field were discussed. An important outcome of the seminar was the shared view that the scope and potential benefit of research on deep continual learning should be communicated better to computer scientists outside of our subfield. Following up on this, most of the seminar participants are currently collaborating on a perspective article to do so.

Copyright Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, and Gido van de Ven

Motivation

Continual learning, also referred to as incremental learning or lifelong learning, is a sub-field of machine learning focusing on the challenging setting where data distributions and/or task specifications vary over time. This includes learning a sequence of tasks as well as learning from data streams. This calls for learning algorithms that can acquire new knowledge over time, with minimal forgetting of what they have learned previously, transfer knowledge across tasks, and smoothly adapt to new circumstances as needed. This contrasts with the traditional setting of machine learning, which largely builds on the premise that all data, both for training and testing, are sampled i.i.d. from a single, stationary data distribution.

Deep learning models in particular are in need of continual learning capabilities. A first reason for this is the strong data-dependence of these models. When trained on a stream of data whose underlying distribution changes over time, deep learning models tend to fully adapt to the most recently seen data, thereby "catastrophically" forgetting the skills they had learned earlier in their training process. Another reason that continual learning capabilities could be especially beneficial for deep learning models is that they can help deal with the very long training times of these models. The current practice applied in industry, where models are completely re-trained on a regular basis to avoid becoming outdated, is time-inefficient, unsustainable, and sub-optimal. Freezing the feature extraction layers is not an option, as the power of deep learning in many challenging applications, be it in computer vision, NLP, or audio processing, hinges on the learned representations.

Open research questions we would like to address in this Dagstuhl Seminar include:

  • How do we tackle continual learning at scale on real-world problems, where domain shifts may be unpredictable and data can be long-tailed?
  • To what extent can recent advances in representation learning and insights in model generalisability help continual learning?
  • Rather than relying on tools developed and optimized for machine learning under i.i.d. conditions, should we consider completely different learning strategies?
  • In case old data can be revisited, are there more efficient strategies than simply retraining on all available data over and over again?
  • What are the open challenges in open world learning and automated continual learning, where the agent discovers new tasks by itself, collects its own training data and incrementally learns the new tasks?
  • What can we learn from related fields such as online learning, meta-learning, Bayesian deep learning, robotics and neuroscience?
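One ingredient that several of these questions touch on, in particular the one about efficiently revisiting old data, is a small rehearsal buffer. As a minimal sketch (one common tool from the rehearsal literature, not a method endorsed by the seminar), a bounded buffer filled by reservoir sampling keeps an approximately uniform subsample of everything seen so far, which can then be mixed into the current task's mini-batches instead of retraining on all old data:

```python
import random

class ReservoirBuffer:
    """Bounded replay buffer using reservoir sampling: after n items have
    streamed past, each is retained with probability capacity / n, so the
    buffer approximates a uniform sample of the whole stream."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Overwrite a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        # Rehearsal mini-batch to mix into the current task's batch.
        return self.rng.sample(self.items, min(k, len(self.items)))

buffer = ReservoirBuffer(capacity=50)
for step in range(1000):          # stand-in for a stream of training examples
    buffer.add(step)
rehearsal_batch = buffer.sample(8)
```

Memory stays fixed at `capacity` regardless of stream length, which is what makes rehearsal a candidate answer to the "more efficient than retraining on everything" question above.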

By aiming to bring together world-class researchers in the field of deep continual learning, as well as in the related fields of online learning, meta-learning, Bayesian deep learning, robotics and neuroscience to discuss and to brainstorm, we plan to set the research agenda for years to come.

Copyright Gido van de Ven, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars

Participants
On Site
  • Rahaf Aljundi (Toyota Motor Europe - Zaventem, BE) [dblp]
  • Matthias Bethge (Universität Tübingen, DE) [dblp]
  • Andrea Cossu (University of Pisa, IT) [dblp]
  • Fabian Fumagalli (Universität Bielefeld, DE) [dblp]
  • Joao Gama (INESC TEC - Porto, PT) [dblp]
  • Alexander Geppert (Hochschule für Angewandte Wissenschaften Fulda, DE) [dblp]
  • Tyler Hayes (NAVER Labs Europe - Meylan, FR) [dblp]
  • Paul Hofman (LMU München, DE)
  • Eyke Hüllermeier (LMU München, DE) [dblp]
  • Christopher Kanan (University of Rochester, US) [dblp]
  • Tatsuya Konishi (KDDI - Saitama, JP) [dblp]
  • Dhireesha Kudithipudi (University of Texas - San Antonio, US) [dblp]
  • Christoph H. Lampert (IST Austria - Klosterneuburg, AT) [dblp]
  • Bing Liu (University of Illinois - Chicago, US) [dblp]
  • Vincenzo Lomonaco (University of Pisa, IT) [dblp]
  • Martin Mundt (TU Darmstadt, DE) [dblp]
  • Razvan Pascanu (DeepMind - London, GB) [dblp]
  • Adrian Popescu (CEA LIST - Nano-INNOV, FR) [dblp]
  • James M. Rehg (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Andreas Tolias (Baylor College of Medicine - Houston, US) [dblp]
  • Tinne Tuytelaars (KU Leuven, BE) [dblp]
  • Gido van de Ven (KU Leuven, BE) [dblp]
  • Joost van de Weijer (Computer Vision Center - Barcelona, ES) [dblp]
  • Eli Verwimp (KU Leuven, BE) [dblp]
  • Michal Zajac (Jagiellonian University - Kraków, PL) [dblp]
Remote
  • Shai Ben-David (University of Waterloo, CA) [dblp]

Related Seminars
  • Dagstuhl Seminar 25432: Deep Continual Learning in the Foundation Model Era (2025-10-19 – 2025-10-24)

Classification
  • Machine Learning

Keywords
  • continual learning
  • incremental learning