Dagstuhl Seminar 23331

Recent Trends in Graph Decomposition

(August 13 – 18, 2023)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/23331

Organizers
  • Seher Acer
  • George Karypis
  • Christian Schulz
  • Darren Strash

Motivation

Large networks are useful in a wide range of applications. Sometimes problem instances are composed of billions of entities. Decomposing and analyzing these structures helps us gain new insights about our surroundings. Even if the final application concerns a different problem (such as traversal, or finding paths, trees, and flows), decomposing large graphs is often an important subproblem for complexity reduction or parallelization. With ever larger instances in applications such as scientific simulation, social networks, or road networks, graph decomposition becomes even more important, multifaceted, and challenging.

The seminar is an international forum to discuss recent trends as well as to set new goals and directions in this research area. The goal of this Dagstuhl Seminar is to bring together algorithmic researchers from different communities who implement, optimize, and experiment with algorithms running on large data sets, or who frequently use techniques from the area, thereby stimulating an exchange of ideas and techniques. The focus of the seminar is graph decomposition. We chose the seminar's main topics to bring together experts from a wide range of areas that have recently been fruitful in a variety of applications, in order to tackle some of the most pressing open problems in graph decomposition:

Hardware Design for Dealing with Graphs. Modern processors are optimized for computations that are floating-point intensive and have regular memory accesses. Unfortunately, the computations performed by sparse graph algorithms do not benefit from such optimizations. To address this mismatch, new processor and system architectures are being developed. These architectures, exemplified by Intel's recently announced PIUMA architecture, combine highly parallel execution with word-level-accessible, globally addressable memory. Leveraging the potential of such architectures requires new classes of algorithms.
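To make this mismatch concrete, here is a minimal, purely illustrative sketch (toy graph and values invented for this example) of a sparse graph kernel over a CSR representation: every neighbor access goes through data-dependent indices, leaving little arithmetic per memory access and little regularity for caches, prefetchers, or vector units to exploit.

```python
# Illustrative sketch: a degree-weighted neighbor sum over a toy graph stored in
# CSR (compressed sparse row) form. The graph and values are made up.
# Note the two levels of indirection (indptr -> indices -> x): the access pattern
# is data-dependent and irregular, which is exactly what conventional
# cache-, prefetch-, and vector-friendly optimizations struggle with.

indptr  = [0, 2, 5, 7, 9, 10]              # neighbors of v are indices[indptr[v]:indptr[v+1]]
indices = [1, 2, 0, 2, 3, 0, 1, 1, 4, 3]   # neighbor ids (column indices)
values  = [1.0] * len(indices)             # edge weights

x = [0.5, 1.0, 2.0, 0.25, 3.0]             # vertex-indexed input vector
y = [0.0] * 5                              # result vector

for v in range(5):
    acc = 0.0
    for e in range(indptr[v], indptr[v + 1]):  # iterate over v's incident edges
        u = indices[e]                          # data-dependent index -> irregular access
        acc += values[e] * x[u]                 # one multiply-add per two indirect loads
    y[v] = acc

print(y)
```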

Beyond Smart Heuristics. In heuristic approaches, improvements in solution quality are often the result of a significant research effort. In recent years, to reduce this effort, develop better heuristics, and ultimately find better solutions, researchers have started developing approaches that use deep neural networks and reinforcement learning in order to learn those heuristics in an end-to-end fashion.
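The following toy sketch illustrates the end-to-end idea only in miniature and is not a method from the seminar: it uses a REINFORCE-style policy gradient to learn the weights of a linear scoring policy that greedily assigns the vertices of a small invented graph to two blocks, with the negative edge cut as reward. Graph, features, and hyperparameters are all assumptions made for illustration.

```python
# Toy sketch: learning a greedy 2-way partitioning heuristic "end to end" with a
# REINFORCE-style policy gradient. Graph, features, and hyperparameters are made up.
import math
import random

random.seed(42)

# Two loosely connected triangles; a good 2-way partition cuts only the bridge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
nodes = sorted(adj)

def feats(v, block, assignment):
    """Hand-crafted placement features (assumed for illustration)."""
    same  = sum(1 for u in adj[v] if assignment.get(u) == block)            # neighbors already in block
    other = sum(1 for u in adj[v] if u in assignment and assignment[u] != block)
    size  = sum(1 for b in assignment.values() if b == block)               # current block size
    return [same, other, size]

def cut(assignment):
    return sum(1 for u in adj for v in adj[u] if u < v and assignment[u] != assignment[v])

w, lr, baseline = [0.0, 0.0, 0.0], 0.05, 0.0

for episode in range(300):
    assignment, grads = {}, []
    for v in nodes:
        f = [feats(v, b, assignment) for b in (0, 1)]
        scores = [sum(wi * fi for wi, fi in zip(w, fb)) for fb in f]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        probs = [e / sum(exps) for e in exps]          # softmax policy over the two blocks
        b = 0 if random.random() < probs[0] else 1     # sample an action
        assignment[v] = b
        # gradient of log pi(b) for a linear softmax policy: f(b) - E_pi[f]
        grads.append([f[b][i] - (probs[0] * f[0][i] + probs[1] * f[1][i]) for i in range(3)])
    reward = -cut(assignment)                          # fewer cut edges -> higher reward
    baseline += 0.1 * (reward - baseline)              # running-average baseline
    for g in grads:
        for i in range(3):
            w[i] += lr * (reward - baseline) * g[i]

print("learned weights:", [round(x, 2) for x in w], "cut of last sample:", cut(assignment))
```

Real systems replace the hand-crafted linear policy with a graph neural network and a far richer reward; the sketch only shows the learning loop that substitutes for manually engineered heuristics.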

Formulations. Applications that process large sparse data generally have a unique set of optimization requirements for achieving the best performance. Parallelizing such applications on different architectures and/or with different frameworks introduces new performance issues specific to those architectures and frameworks. While graphs offer a rich ground for modeling such problems with different requirements, traditional graph decomposition tools may fall short of addressing these specific issues. This seminar aims to make more researchers aware of formulations that benefit from existing state-of-the-art software and to encourage them to propose new formulations tailored to their own specific applications.
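A classical example of such a formulation, sketched here with a made-up matrix and a hand-picked partition purely for illustration, is row-wise parallel sparse matrix-vector multiplication: rows become vertices, symmetric off-diagonal nonzeros become edges, and the edge cut of a balanced vertex partition serves as a proxy for the communication between processors. In practice the resulting graph would be handed to a partitioner such as METIS or KaHIP rather than evaluated against a fixed assignment.

```python
# Illustrative sketch: formulating the distribution of a symmetric sparse matrix for
# parallel SpMV as a graph partitioning instance. The matrix and the example
# partition are made up for illustration.

# Nonzero pattern (i, j) of a small symmetric 6x6 matrix (upper triangle, no diagonal).
nonzeros = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]

# Graph formulation: one vertex per row, one edge per symmetric off-diagonal nonzero.
edges = nonzeros

def cut_and_balance(assignment, k=2):
    """Edge cut (proxy for communication volume) and block sizes (proxy for load balance)."""
    cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
    sizes = [sum(1 for b in assignment.values() if b == block) for block in range(k)]
    return cut, sizes

# A hand-picked balanced 2-way assignment of rows to processors (for illustration only).
assignment = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
cut, sizes = cut_and_balance(assignment)
print(f"edge cut = {cut} (only rows 2 and 3 couple the blocks), block sizes = {sizes}")
```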

Scalable Parallel Algorithms for Future Emerging Architectures. Scalable high-quality graph partitioning (with quality comparable to sequential partitioners) remains an open problem. With the advent of exascale machines with millions of processors and possibly billions of threads, the situation is further aggravated. Moreover, traditional "flat" partitions of graphs for processing on such machines imply a huge number of blocks. Efficient implementation is also a major issue, since complex memory hierarchies and heterogeneity (e.g., GPUs or FPGAs) complicate implementations.
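As a small illustration of the difference between flat and hierarchical decompositions (with an invented vertex set and a deliberately naive splitting rule; a real system would run a multilevel bisection at every level), the sketch below recursively bisects a vertex set into a tree of blocks whose leaves can be mapped onto the levels of a machine hierarchy instead of one flat k-way partition with a huge number of blocks.

```python
# Naive sketch of hierarchical decomposition by recursive bisection. The vertex set is
# made up, and the "bisection" simply splits vertices in order; a real implementation
# would run a multilevel graph bisection (as in METIS/KaHIP) at every level and map
# subtrees onto nodes, sockets, and cores of the machine.

def recursive_bisect(vertices, depth):
    """Return a nested list: a binary tree of blocks with up to 2**depth leaves."""
    if depth == 0 or len(vertices) <= 1:
        return vertices
    mid = len(vertices) // 2
    left, right = vertices[:mid], vertices[mid:]     # placeholder for a real bisection step
    return [recursive_bisect(left, depth - 1), recursive_bisect(right, depth - 1)]

vertices = list(range(16))                           # toy vertex set 0..15
hierarchy = recursive_bisect(vertices, depth=3)      # 2^3 = 8 leaf blocks, organized as a tree
print(hierarchy)
# A flat 8-way partition would expose only the 8 leaves; the tree additionally records
# which blocks are "close" to each other, which deep machine hierarchies can exploit.
```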

Copyright Seher Acer, George Karypis, Christian Schulz, and Darren Strash

Classification
  • Data Structures and Algorithms
  • Distributed, Parallel, and Cluster Computing

Keywords
  • experimental algorithmics
  • parallel algorithms
  • combinatorial optimization