Dagstuhl Seminar 24311

Resource-Efficient Machine Learning

(Jul 28 – Aug 02, 2024)


The advances in Machine Learning (ML) are mainly due to the exponential evolution of hardware, the availability of large datasets, and the emergence of machine learning frameworks, which hide the complexities of the underlying hardware and boost the productivity of data scientists. On the other hand, the computational needs of powerful ML models have increased by five orders of magnitude in the past decade. This makes the current rate of growth in model parameters, dataset sizes, and compute budgets unsustainable. To achieve more sustainable progress in ML in the future, it is essential to invest in more resource-, energy-, and cost-efficient solutions. In this Dagstuhl Seminar, we will explore how to improve ML resource efficiency through software/hardware co-design. We plan to take a holistic view of the ML landscape, including data preparation and loading, continual retraining of models in dynamic data environments, compiling ML for specialized hardware accelerators, and serving models for real-time applications with low-latency requirements in resource-constrained environments.

This seminar aims to reason critically about how we build software and hardware for end-to-end machine learning. We hope that the discussions will raise awareness of how modern hardware is utilized and kickstart future developments that minimize hardware underutilization. We would therefore like to bring together academics and industry practitioners from the fields of data management, machine learning, systems, and computer architecture, covering expertise in algorithmic optimizations for machine learning, job scheduling and resource management in distributed computing, parallel computing, and data management and processing. The outcome of the seminar's discussions will thus also benefit research groups and companies that rely on machine learning.

We have identified the following topics to be discussed during the seminar:

  • Characterization and benchmarking of modern ML techniques
  • Efficient scheduling of ML tasks
  • Measuring sustainability
  • Hardware-software co-design for ML
  • Data pre-processing and loading
Copyright Oana Balmau, Matthias Böhm, Ana Klimovic, Peter R. Pietzuch, and Pinar Tözün


Classification:
  • Hardware Architecture
  • Machine Learning
  • Operating Systems

Keywords:
  • resource-efficient systems
  • systems for machine learning