Dagstuhl Seminar 23331

Recent Trends in Graph Decomposition

(Aug 13 – Aug 18, 2023)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/23331

Organizers
  • Seher Acer
  • George Karypis
  • Christian Schulz
  • Darren Strash

Motivation

Large networks arise in a wide range of applications, and problem instances are sometimes composed of billions of entities. Decomposing and analyzing these structures helps us gain new insights about our surroundings. Even when the final application concerns a different problem (such as traversal, or finding paths, trees, and flows), decomposing large graphs is often an important subproblem for complexity reduction or parallelization. With ever-larger instances in applications such as scientific simulation, social networks, or road networks, graph decomposition becomes even more important, multifaceted, and challenging.

The seminar is an international forum to discuss recent trends as well as to set new goals and directions in this research area. The goal of this Dagstuhl Seminar is to bring together algorithmic researchers from different communities who implement, optimize, and experiment with algorithms on large data sets, or who frequently use techniques from this area, thereby stimulating an exchange of ideas and techniques. The focus of the seminar is graph decomposition. We chose the main topics of the seminar to bring together experts from a wide range of areas – areas that have recently been fruitful in a variety of applications – to tackle some of the most pressing open problems in graph decomposition:

Hardware Design for Dealing with Graphs. Modern processors are optimized for computations that are floating-point intensive and have regular memory accesses. Unfortunately, the computations performed by sparse graph algorithms do not benefit from such optimizations. To address this mismatch, new processor and system architectures are being developed. These architectures, exemplified by Intel's recently announced PIUMA architecture, combine highly parallel execution with a word-level accessible, globally addressable memory. Leveraging the potential of such architectures requires new classes of algorithms.
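To make the access pattern concrete, the following small Python sketch (our own illustration; the CSR layout and the names xadj/adjncy are assumptions borrowed from common graph libraries, not part of the seminar material) traverses a graph stored in compressed sparse row form. The nested, data-dependent index lookups are exactly the irregular memory accesses that conventional caches and prefetchers handle poorly.

  # Toy graph in CSR form: xadj[v]..xadj[v+1] indexes v's neighbors in adjncy.
  xadj   = [0, 2, 5, 7, 8]            # 4 vertices
  adjncy = [1, 2, 0, 2, 3, 0, 1, 1]   # concatenated neighbor lists

  def neighbor_degree_sum(v):
      # Each adjncy[...] and xadj[...] lookup depends on previously loaded data,
      # so the addresses are irregular and hard to prefetch.
      return sum(xadj[u + 1] - xadj[u] for u in adjncy[xadj[v]:xadj[v + 1]])

  print([neighbor_degree_sum(v) for v in range(len(xadj) - 1)])   # [5, 5, 5, 3]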

Beyond Smart Heuristics. In heuristic approaches, improvements in solution quality are often the result of significant research effort. In recent years, researchers have therefore started to develop approaches that use deep neural networks and reinforcement learning to learn such heuristics in an end-to-end fashion, with the aim of reducing this effort, obtaining better heuristics, and ultimately finding better solutions.
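For context, the kind of hand-crafted heuristic that such learned approaches aim to replace can be as simple as the following Python sketch (our own toy example, not taken from the seminar material): one greedy pass of gain-based vertex moves that refines a 2-way partition while respecting a balance constraint.

  import math

  def refine_once(adj, part, eps=0.34):
      # adj: dict vertex -> set of neighbors, part: dict vertex -> block (0 or 1).
      # Blocks may grow up to (1 + eps) * ceil(n / 2), a simple balance constraint.
      max_block = int((1 + eps) * math.ceil(len(adj) / 2))
      size = [sum(1 for v in part if part[v] == b) for b in (0, 1)]
      for v in adj:
          ext = sum(1 for u in adj[v] if part[u] != part[v])  # cut edges incident to v
          gain = ext - (len(adj[v]) - ext)                     # cut reduction if v moves
          if gain > 0 and size[1 - part[v]] < max_block:
              size[part[v]] -= 1
              size[1 - part[v]] += 1
              part[v] = 1 - part[v]
      return part

  # Two triangles joined by a single edge; the initial partition cuts five edges,
  # and one refinement pass reduces the cut to a single edge.
  adj  = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
  part = {0: 0, 1: 0, 2: 1, 3: 0, 4: 1, 5: 1}
  print(refine_once(adj, part))   # {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}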

Formulations. Applications that process large sparse data generally have a unique set of optimization requirements for achieving the best performance. Parallelizing such applications on different architectures and/or with different frameworks introduces new performance issues that are specific to these architectures and frameworks. While graphs offer a rich ground for modeling such problems with different requirements, traditional graph decomposition tools may fall short of addressing these specific issues. This seminar will make more researchers aware of formulations that benefit from existing state-of-the-art software and encourage them to propose new formulations tailored to their own applications.
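As a point of reference, the classical formulation that much of the existing state-of-the-art partitioning software targets is balanced k-way edge-cut minimization; in the notation below (standard, but included here only for illustration), a graph G = (V, E) is split into k disjoint blocks whose sizes may exceed the average by at most a factor of 1 + ε:

  \min_{V_1 \cup \dots \cup V_k = V,\; V_i \cap V_j = \emptyset}
      \bigl|\{\, \{u,v\} \in E : u \in V_i,\; v \in V_j,\; i \neq j \,\}\bigr|
  \quad \text{subject to} \quad
      |V_i| \le (1 + \varepsilon)\,\lceil |V| / k \rceil
      \quad \text{for } i = 1, \dots, k.

Application-specific formulations typically vary the objective (e.g., communication volume instead of edge cut) or the constraints (e.g., multiple vertex weights).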

Scalable Parallel Algorithms for Future Emerging Architectures. Scalable, high-quality graph partitioning (with quality comparable to sequential partitioners) remains an open problem. With the advent of exascale machines with millions of processors and possibly billions of threads, the situation is further aggravated. Moreover, traditional "flat" partitions of graphs for processing on such machines imply a huge number of blocks. Efficient implementation is also a major issue, since complex memory hierarchies and heterogeneous hardware (e.g., GPUs or FPGAs) make implementations considerably more involved.

Copyright Seher Acer, George Karypis, Christian Schulz, and Darren Strash

Classification
  • Data Structures and Algorithms
  • Distributed, Parallel, and Cluster Computing

Keywords
  • experimental algorithmics
  • parallel algorithms
  • combinatorial optimization