Dagstuhl Seminar 17451
New Challenges in Parallelism
(Nov 05 – Nov 10, 2017)
- Annette Bieniusa (TU Kaiserslautern, DE)
- Hans-J. Boehm (Google - Palo Alto, US)
- Maurice Herlihy (Brown University - Providence, US)
- Erez Petrank (Technion - Haifa, IL)
- Andreas Dolzmann (for scientific matters)
- Annette Beyer (for administrative matters)
A continuing goal of current multiprocessor software design is to improve the performance and reliability of parallel algorithms. Parallel programming has traditionally been attacked from widely different angles by different groups of people: hardware designers specifying instruction sets, programming language designers devising languages and library interfaces, and theoreticians developing models of parallel computation. Unsurprisingly, this has not always led to consistent results. Newly developing areas show every sign of leading to similar divergence. This Dagstuhl Seminar will bring together researchers and practitioners from all three areas to discuss and reconcile thoughts on these challenges.
Memory Models and Platforms
Fundamental questions about the semantics of shared memory remain. For example, it is becoming increasingly clear that atomic accesses to variables without memory ordering guarantees, or with very weak ordering guarantees, are important in practice. It is surprisingly common to find data structures, such as simple counters, that effectively consist of a single machine word. These continue to be “supported” in languages like Java and C++, but there is still no generally accepted way of defining their semantics, and the specifications in these languages are clearly inadequate. Fundamental questions about memory models and concurrent data structures remain unresolved. Many Java concurrent data structures provide semantics weaker than interleaving (“sequentially consistent”) semantics and can only be fully understood with a thorough understanding of the memory model. This fact seems to be neither widely appreciated nor discussed. Are the “acquire/release” semantics often used in practice sufficient? Could we afford the overhead of providing the programmer with a simpler model? The more theoretical side of our discipline often uses concepts, such as “safe” and “regular” registers, that are quite foreign to the way in which parallel programming languages are actually defined. Are these notions reconcilable?
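To make the acquire/release question concrete, the following C++ sketch (all names are ours) shows the idiom as it is commonly used in practice: a release store to a flag publishes a plain payload to an acquire load in another thread, without requiring full sequentially consistent ordering.

```cpp
#include <atomic>
#include <thread>

// Producer publishes data, then sets the flag with release ordering;
// the consumer's acquire load of the flag guarantees it also sees the data.
std::atomic<bool> flag{false};
int data = 0;  // plain (non-atomic) payload

int run_message_pass() {
    std::thread producer([] {
        data = 42;                                    // write payload first
        flag.store(true, std::memory_order_release);  // then publish it
    });
    int seen = 0;
    std::thread consumer([&] {
        while (!flag.load(std::memory_order_acquire)) { /* spin */ }
        seen = data;  // acquire/release pairing guarantees this reads 42
    });
    producer.join();
    consumer.join();
    return seen;
}
```

With relaxed ordering on the flag instead, the consumer could legally observe the flag set but read a stale payload, which is exactly the kind of subtlety the memory-model discussion revolves around.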
Non-Volatile Memory and Concurrency
Non-volatile memory (NVM) technologies are expected to support persistence in byte-addressable memory at densities higher than DRAM and at competitive speed. NVM is expected to unify DRAM and SSD into a one-level storage system of persistent main memory, with no need for a hard drive and directly accessible from the programming language. Such a change in platforms has a significant impact on the design of software, and in particular on concurrent algorithms. One implication is that standard functionalities need to be written for a single-tier memory rather than the standard two-tier paradigm. The design of widely available applications, such as database systems, assumes that two memory tiers are present, and optimizations rely on the fact that these two tiers have very different behaviors. Concurrency needs to be rethought in the presence of the new memory structure. Another implication is that software is now expected to deal with persistence. Persistence has strong connections to thread synchronization, but it has not traditionally been studied together with concurrent algorithms. Addressing these challenges is crucial for building systems on non-volatile memories, and we would like to explore potential solutions in this seminar.
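As a rough illustration of what “directly accessible from the programming language” could look like, the C++ sketch below maps a file into the address space and stores a record into it with ordinary loads and stores. The `Record` type, the function name, and the file path are our own stand-ins, and a regular file substitutes here for a real DAX-mapped NVM region; `msync` plays the role that cache-line write-back instructions would play on actual persistent memory.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// A record stored directly in a memory-mapped file, standing in for
// byte-addressable NVM (a real setup would map a DAX-enabled pmem file).
struct Record { long value; };

long write_then_reopen(const char* path) {
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    ftruncate(fd, sizeof(Record));
    auto* r = static_cast<Record*>(
        mmap(nullptr, sizeof(Record), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    r->value = 7;                       // ordinary store into "persistent" memory
    msync(r, sizeof(Record), MS_SYNC);  // force write-back (flush on real NVM)
    munmap(r, sizeof(Record));
    close(fd);

    // Reopen and read the record back, as a restarted process would.
    fd = open(path, O_RDONLY);
    auto* r2 = static_cast<Record*>(
        mmap(nullptr, sizeof(Record), PROT_READ, MAP_PRIVATE, fd, 0));
    long v = r2->value;
    munmap(r2, sizeof(Record));
    close(fd);
    return v;
}
```

The point of the sketch is the programming model: no read/write system calls, no serialization layer, just pointers into memory that is expected to survive a restart.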
Improving the performance and reliability of parallel and concurrent programs is an ongoing topic in multiprocessor software and hardware research. Despite numerous efforts, the semantics of weak memory models remains subtle and fragile. No generally accepted way of defining these semantics has been established, and the specifications of programming languages that support weak memory models for shared accesses are clearly inadequate. In addition, new advances in hardware are adding further complexity. For example, non-volatile memory (NVM) has recently generated a lot of interest in different communities: hardware designers coming up with instruction sets and layouts for NVM, system designers integrating NVM into the storage stack, programming language designers proposing library interfaces, and theoreticians developing a new theory of persistence under concurrent access and algorithms adapted for persistency.
This Dagstuhl Seminar on "New Challenges in Parallelism" touched on many aspects of this broad area of research. In this report, we briefly summarize the presentations and discussions that took place during the seminar.
Presentations started with a broad introductory overview of non-volatile memory technology. This survey was followed by shorter talks, ranging from hardware techniques for efficiently controlling write-back ordering from caches to theoretical foundations and the design of specific data structures.
There was agreement that non-volatile memory is likely to become commercially important in the near future, and that it is tempting to exploit it to provide persistence of user data structures. However, there was little agreement on detailed assumptions and direction. The emphasis of the presentations was on manually designed data structures programmed against a near-hardware-level interface. Some participants expressed concerns that this was too low-level and that the community should instead focus on constructs at the level of durable transactions. Transactional semantics are likely to play an important role: when restarting an application from its persisted state, this state must be consistent in order to prevent data corruption and loss. Much of the work presented at the seminar assumed non-volatile memory combined with visibly volatile caches that require explicit flush operations to persist data. It was pointed out, however, that the problem could be greatly simplified by either providing sufficient battery backup to ensure that the entire cache is flushed on failure, or providing other hardware support.
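The explicit-flush approach discussed above typically follows a fixed persist-ordering discipline: persist the data, then persist a commit marker. A minimal C++ sketch of that discipline follows, with hypothetical `pwb`/`pfence` primitives standing in for cache-line write-back and persist-fence instructions (here they are stubs over ordinary DRAM, so only the ordering pattern is illustrated).

```cpp
#include <atomic>

// Hypothetical persistence primitives; on real hardware these would map to
// instructions such as cache-line write-back and a persist fence. Here they
// are stand-ins so the ordering pattern is the focus.
inline void pwb(const void* /*addr*/) { /* write cache line back to NVM */ }
inline void pfence() { std::atomic_thread_fence(std::memory_order_seq_cst); }

struct DurableRecord {
    int payload = 0;
    std::atomic<bool> committed{false};
};

// Commit protocol: persist the payload strictly before the commit flag, so a
// crash can never expose a committed flag pointing at unpersisted data.
void durable_commit(DurableRecord& r, int value) {
    r.payload = value;
    pwb(&r.payload);
    pfence();                                         // payload reaches NVM first
    r.committed.store(true, std::memory_order_release);
    pwb(&r.committed);
    pfence();                                         // commit flag persists last
}

// Recovery: trust the payload only if the commit flag made it to NVM.
int recover(const DurableRecord& r) {
    return r.committed.load(std::memory_order_acquire) ? r.payload : 0;
}

// Small driver combining both halves, as recovery code might after a restart.
int commit_and_recover(int value) {
    DurableRecord r;
    durable_commit(r, value);
    return recover(r);
}
```

Durable-transaction constructs would generate this kind of flush-and-fence choreography automatically, which is precisely why some participants argued for raising the abstraction level.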
Some of the participants discussed definitions of correctness. On the one hand, the standard definition of durable linearizability is a strong requirement that typically brings a large performance overhead. On the other hand, the weaker buffered linearizability does not compose well. Other participants suggested hardware modifications that could make programmers' lives easier; for example, a discussion emerged on whether a cache line could be pinned to make sure it is not written back to memory. We also tackled the programmability of systems with non-volatile memories. How difficult should it be to program them? Are application programmers expected to employ NVM directly, or only via dedicated data structures provided in libraries? An experience report on porting the application memcached to non-volatile memory attracted considerable interest among the participants. The task turned out to be rather difficult due to complex interactions between the different modules of the application, in particular between modules that required persistence and modules that did not. The lack of tools was strongly felt, and the resulting performance was not satisfactory. The conclusion was that applications are better redesigned from scratch to work with non-volatile memory. The general feeling at the end of the seminar was that we are at the beginning of exciting times for research on non-volatile memories, and that the discussions must and will continue.
Memory models formed the second major thread of presentations and discussions, with participants expressing widely different viewpoints and technical directions. At one extreme, Madan Musuvathi presented evidence that a simple interleaving-based "sequentially consistent" semantics can be provided at reasonable cost, together with an argument that this is a good direction for future programming languages. At the other extreme, Viktor Vafeiadis argued that a weaker "acquire-release" memory model is easier to reason about, an argument that was backed up by model-checking time measurements. Needless to say, this was followed by a lively discussion resulting, we believe, in at least a more thorough understanding of the different perspectives by everyone involved. There were also several brief presentations and extensive discussion of different approaches to the long-standing "out-of-thin-air" problem of C++ and Java (among others). Current semantics for these languages allow outcomes that are universally accepted as absurd, but which we do not know how to prohibit in any precise way. It is clear that none of the proposed solutions is quite ready to be adopted, but there are encouraging results along several different paths. There was consensus that this problem makes formal reasoning about programs nearly impossible and that it is a serious obstacle to tool development. There was less consensus about the extent to which it obstructs day-to-day programming efforts.
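The out-of-thin-air problem is usually illustrated with the following standard litmus test, sketched here in C++ with relaxed atomics (function names are ours). Every execution on real hardware and compilers yields r1 == r2 == 0, yet the formal C++ model, read literally, does not forbid the absurd outcome r1 == r2 == 42, in which the value 42 appears "out of thin air" to justify its own stores.

```cpp
#include <atomic>
#include <thread>
#include <utility>

// Classic out-of-thin-air litmus test: each thread copies the value it reads
// from one variable into the other, using relaxed atomics throughout.
std::atomic<int> x{0}, y{0};

std::pair<int, int> run_oota() {
    int r1 = -1, r2 = -1;
    std::thread t1([&] {
        r1 = y.load(std::memory_order_relaxed);
        if (r1 == 42) x.store(42, std::memory_order_relaxed);  // never fires in practice
    });
    std::thread t2([&] {
        r2 = x.load(std::memory_order_relaxed);
        if (r2 == 42) y.store(42, std::memory_order_relaxed);  // never fires in practice
    });
    t1.join();
    t2.join();
    return {r1, r2};  // observed outcome: {0, 0}
}
```

The difficulty, as discussed at the seminar, is not making implementations avoid the 42/42 outcome (they already do) but stating a precise semantics that forbids it without also forbidding desirable compiler optimizations.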
In conclusion, the seminar inspired discussions and proposed challenging problems for the research community to tackle. As the discussions showed, designing sound and performant parallel systems requires the cooperation of researchers at both the hardware and software levels, with both theoretical and practical analyses and evaluations.
- Sarita Adve (University of Illinois - Urbana-Champaign, US) [dblp]
- Deepthi Devaki Akkoorath (TU Kaiserslautern, DE) [dblp]
- Hagit Attiya (Technion - Haifa, IL) [dblp]
- Naama Ben-David (Carnegie Mellon University - Pittsburgh, US) [dblp]
- Annette Bieniusa (TU Kaiserslautern, DE) [dblp]
- Hans-J. Boehm (Google - Palo Alto, US) [dblp]
- Irina Calciu (VMware - Palo Alto, US) [dblp]
- Nachshon Cohen (EPFL - Lausanne, CH) [dblp]
- Dave Dice (Oracle Corp. - Burlington, US) [dblp]
- Sandhya Dwarkadas (University of Rochester, US) [dblp]
- Panagiota Fatourou (University of Crete - Heraklion, GR) [dblp]
- Pascal Felber (University of Neuchâtel, CH) [dblp]
- Michal Friedman (Technion - Haifa, IL) [dblp]
- Tim Harris (Oracle Labs - Cambridge, GB) [dblp]
- Danny Hendler (Ben Gurion University - Beer Sheva, IL) [dblp]
- Maurice Herlihy (Brown University - Providence, US) [dblp]
- Antony Hosking (Australian National University - Canberra, AU) [dblp]
- Idit Keidar (Technion - Haifa, IL) [dblp]
- Alex Kogan (Oracle Labs - Burlington, US) [dblp]
- Petr Kuznetsov (Télécom ParisTech, FR) [dblp]
- William M. Leiserson (MIT - Cambridge, US) [dblp]
- Yossi Lev (Oracle Labs - Burlington, US) [dblp]
- Victor Luchangco (Oracle Labs - Burlington, US) [dblp]
- Virendra Marathe (Oracle Corp. - Burlington, US) [dblp]
- Maged M. Michael (Facebook - New York, US) [dblp]
- J. Eliot B. Moss (University of Massachusetts - Amherst, US) [dblp]
- Madan Musuvathi (Microsoft Research - Redmond, US) [dblp]
- Vijay Nagarajan (University of Edinburgh, GB) [dblp]
- Erez Petrank (Technion - Haifa, IL) [dblp]
- Michael Philippsen (Universität Erlangen-Nürnberg, DE) [dblp]
- Mirko Rahn (Fraunhofer ITWM - Kaiserslautern, DE) [dblp]
- Michael Scott (University of Rochester, US) [dblp]
- Marc Shapiro (University Pierre & Marie Curie - Paris, FR) [dblp]
- Nir Shavit (MIT - Cambridge, US) [dblp]
- Matthew Sinclair (AMD Research - Bellevue, US) [dblp]
- Michael F. Spear (Lehigh University - Bethlehem, US) [dblp]
- Viktor Vafeiadis (MPI-SWS - Kaiserslautern, DE) [dblp]
- Haris Volos (HP Labs - Palo Alto, US) [dblp]
- Heike Wehrheim (Universität Paderborn, DE) [dblp]
- Reinhard Wilhelm (Universität des Saarlandes, DE) [dblp]
- Felix Wolf (TU Darmstadt, DE) [dblp]
- Sebastian Wolff (TU Braunschweig, DE) [dblp]
- Dagstuhl Seminar 08241: Transactional Memory: From Implementation to Application (2008-06-08 - 2008-06-13) (Details)
- Dagstuhl Seminar 12161: Abstractions for scalable multi-core computing (2012-04-15 - 2012-04-20) (Details)
- Dagstuhl Seminar 15021: Concurrent Computing in the Many-core Era (2015-01-04 - 2015-01-09) (Details)
- data structures / algorithms / complexity
- programming languages / compiler
- concurrent programming
- parallel programming