
Dagstuhl Seminar 23122

Deep Continual Learning

( Mar 19 – Mar 24, 2023 )

Dagstuhl Reports

As part of the mandatory documentation, participants are asked to submit their talk abstracts, working group results, etc. for publication in our series Dagstuhl Reports via the Dagstuhl Reports Submission System.


Continual learning, also referred to as incremental learning or lifelong learning, is a sub-field of machine learning focusing on the challenging setting where data distributions and/or task specifications vary over time. This includes learning a sequence of tasks as well as learning from data streams. This calls for learning algorithms that can acquire new knowledge over time, with minimal forgetting of what they have learned previously, transfer knowledge across tasks, and smoothly adapt to new circumstances as needed. This contrasts with the traditional setting of machine learning, which largely builds on the premise that all data, both for training and testing, are sampled i.i.d. from a single, stationary data distribution.

Deep learning models in particular are in need of continual learning capabilities. A first reason for this is the strong data-dependence of these models. When trained on a stream of data whose underlying distribution changes over time, deep learning models tend to fully adapt to the most recently seen data, thereby "catastrophically" forgetting the skills they had learned earlier in their training process. Another reason continual learning capabilities could be especially beneficial for deep learning models is that they can help deal with the very long training times of these models. The current practice in industry, where models are completely re-trained on a regular basis to avoid becoming outdated, is time-inefficient, unsustainable and sub-optimal. Freezing the feature extraction layers is not an option, as the power of deep learning in many challenging applications, be it in computer vision, NLP or audio processing, hinges on the learned representations.
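The catastrophic forgetting described above can be seen even in the simplest possible setting. The following toy sketch (a hypothetical illustration, not from the seminar materials) trains a one-parameter linear model with plain SGD on two tasks with conflicting targets, one after the other; after training on the second task, the loss on the first task collapses back to a large value:

```python
# Toy illustration of catastrophic forgetting: a single linear weight
# trained sequentially on two conflicting regression tasks with naive SGD.

def train(w, data, lr=0.1, steps=100):
    """Plain SGD on squared error for the one-parameter model y = w * x."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2 w.r.t. w
    return w

def loss(w, data):
    """Mean squared error of y = w * x on a dataset of (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 1.0), (2.0, 2.0)]    # solved by w = +1
task_b = [(1.0, -1.0), (2.0, -2.0)]  # solved by w = -1, conflicting with task A

w = 0.0
w = train(w, task_a)
loss_a_before = loss(w, task_a)      # near zero: task A is learned

w = train(w, task_b)                 # naive sequential training on task B
loss_a_after = loss(w, task_a)      # task A performance is destroyed

print(loss_a_before, loss_a_after)
```

Real deep networks forget for the same basic reason: nothing in the plain SGD objective penalizes moving parameters away from solutions for earlier tasks, which is exactly the gap that continual learning methods (replay, regularization, parameter isolation, etc.) try to close.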

Open research questions we would like to address in this Dagstuhl Seminar include:

  • How do we tackle continual learning at scale on real-world problems, where domain shifts may be unpredictable and data can be long-tailed?
  • To what extent can recent advances in representation learning and insights in model generalisability help continual learning?
  • Rather than relying on tools developed and optimized for machine learning under i.i.d. conditions, should we consider completely different learning strategies?
  • In case old data can be revisited, are there more efficient strategies than simply retraining on all available data over and over again?
  • What are the open challenges in open world learning and automated continual learning, where the agent discovers new tasks by itself, collects its own training data and incrementally learns the new tasks?
  • What can we learn from related fields such as online learning, meta-learning, Bayesian deep learning, robotics and neuroscience?

By bringing together world-class researchers in deep continual learning, as well as in the related fields of online learning, meta-learning, Bayesian deep learning, robotics and neuroscience, to discuss and brainstorm, we plan to set the research agenda for years to come.

Copyright Gido van de Ven, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars

Classification

  • Machine Learning

Keywords

  • continual learning
  • incremental learning