Dagstuhl Seminar 24311

Resource-Efficient Machine Learning

(July 28 – August 2, 2024)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/24311

Organizers

Contact

Motivation

The advances in Machine Learning (ML) are mainly due to the exponential evolution of hardware, the availability of large datasets, and the emergence of machine learning frameworks that hide the complexities of the underlying hardware and boost the productivity of data scientists. At the same time, the computational needs of powerful ML models have increased by five orders of magnitude in the past decade. This makes the current rate of growth in model parameters, dataset sizes, and compute budgets unsustainable. To achieve more sustainable progress in ML in the future, it is essential to invest in more resource-, energy-, and cost-efficient solutions. In this Dagstuhl Seminar, we will explore how to improve ML resource efficiency through software/hardware co-design. We plan to take a holistic view of the ML landscape, which includes data preparation and loading, continual retraining of models in dynamic data environments, compiling ML models for specialized hardware accelerators, and serving models for real-time applications with low-latency requirements in resource-constrained environments.

This seminar aims to reason critically about how we build software and hardware for end-to-end machine learning. We hope that the discussions will raise awareness of how modern hardware is utilized and kickstart future developments that minimize hardware underutilization. To that end, we would like to bring together academics and industry practitioners from the fields of data management, machine learning, systems, and computer architecture, covering expertise in algorithmic optimization for machine learning, job scheduling and resource management in distributed computing, parallel computing, and data management and processing. The outcomes of the seminar discussions should therefore also benefit research groups and companies that rely on machine learning.

We have identified the following topics to be discussed during the seminar:

  • Characterization and benchmarking of modern ML techniques
  • Efficient scheduling of ML tasks
  • Measuring sustainability
  • Hardware-software co-design for ML
  • Data pre-processing and loading
Copyright Oana Balmau, Matthias Böhm, Ana Klimovic, Peter R. Pietzuch, and Pinar Tözün
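As a toy illustration of the first topic above, characterization and benchmarking, the sketch below (our own example, not seminar material) times a dense matrix-vector product, a basic ML-style kernel, and reports its achieved throughput in floating-point operations per second. Real characterization work would rely on profilers and hardware performance counters; all names here are illustrative.

```python
# Minimal sketch: characterize the throughput of an ML-style kernel
# (a dense matrix-vector product) by timing repeated runs.
import time
import random

def matvec(matrix, vector):
    """Dense matrix-vector product in pure Python."""
    return [sum(row[i] * vector[i] for i in range(len(vector)))
            for row in matrix]

def benchmark(n=200, repeats=5):
    """Return achieved throughput (FLOP/s) of matvec on an n x n matrix."""
    random.seed(0)
    matrix = [[random.random() for _ in range(n)] for _ in range(n)]
    vector = [random.random() for _ in range(n)]
    start = time.perf_counter()
    for _ in range(repeats):
        matvec(matrix, vector)
    elapsed = time.perf_counter() - start
    flops = 2 * n * n * repeats  # one multiply + one add per matrix element
    return flops / elapsed

if __name__ == "__main__":
    print(f"Throughput: {benchmark():.2e} FLOP/s")
```

Comparing such throughput numbers against a platform's peak capability is one simple way to expose the hardware underutilization the seminar discusses.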

Classification
  • Hardware Architecture
  • Machine Learning
  • Operating Systems

Keywords
  • resource-efficient systems
  • systems for machine learning