
Dagstuhl Seminar 15021

Concurrent Computing in the Many-core Era

( Jan 04 – Jan 09, 2015 )


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/15021

Organizers
  • Pascal Felber (Université de Neuchâtel, CH)
  • J. Eliot B. Moss (University of Massachusetts - Amherst, US)
  • Michael Philippsen (Universität Erlangen-Nürnberg, DE)
  • Michael Scott (University of Rochester, US)

Motivation

Thirty years of improvement in the computational power of CMOS uniprocessors came to an end around 2004, with the near-simultaneous approach of several limits in device technology (feature scaling, frequency, heat dissipation, pin count). The industry has responded with ubiquitous multicore processors, but scalable concurrency remains elusive for many applications, and it now appears likely that the future will be not only massively parallel, but also massively heterogeneous.

Ten years into the multicore era, much progress has been made. C and C++ are now explicitly parallel languages, with a rigorous memory model. Parallel programming libraries (OpenMP, TBB, Cilk++, CnC, GCD, TPL/PLINQ) have become mature enough for widespread commercial use. Graphics Processing Units support general-purpose data-parallel programming (in CUDA, OpenCL, and other languages) for a widening range of fields. Transactional memory appears likely to be incorporated into several programming languages. Software support is available in multiple compilers, and hardware support is being marketed by IBM and Intel, among others.
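
As a concrete illustration of the memory model now standardized in C and C++, the sketch below uses release/acquire atomics to publish data from one thread to another. It is a minimal, illustrative example, not code from the seminar.

    // Illustrative sketch: release/acquire publication under the C++11 memory model.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;
    std::atomic<bool> ready{false};

    void producer() {
        payload = 42;                                  // plain (non-atomic) write
        ready.store(true, std::memory_order_release);  // publish: makes payload visible
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) // pairs with the release store
            ;                                          // spin until the flag is set
        assert(payload == 42);                         // guaranteed by release/acquire ordering
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }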

At the same time, core counts are currently lower than had once been predicted, in part because of a perceived lack of demand, and the prospects for increased core count over time appear to be constrained by the specter of dark silicon. Parallel programming remains difficult for most programmers, tool chains for concurrency remain immature and inconsistent, and pedagogical breakthroughs for the first- and second-year curriculum have yet to materialize. Perhaps most troublesome, it seems increasingly likely that future microprocessors will host scores or even hundreds of heterogeneous computational accelerators, both fixed and field-programmable. Programming for such complex chips is an exceptionally daunting prospect.

The goal of this Dagstuhl seminar is to bring together leading international researchers from both academia and industry working on different aspects of concurrent computing (theory and practice, software and hardware, parallel programming languages, formal models, tools, etc.) in order to:

  • assess the state of the art in concurrency, including formal models, languages, libraries, verification techniques, and tool chains;
  • explore the many potential uses of emerging hardware support for transactional memory and synchronization extensions;
  • envision next-generation hardware mechanisms; and
  • consider potential strategies to harness the anticipated explosion in heterogeneity.

Suggested participants are drawn from a wide variety of research communities. They regularly publish at more than a dozen different conferences and seldom have the opportunity to meet in one place. The seminar will provide a unique forum for researchers with different backgrounds to work together on a common research agenda for concurrent computing on new generations of multi- and many-core systems.


Summary

Context and Motivations

Thirty years of improvement in the computational power of CMOS uniprocessors came to an end around 2004, with the near-simultaneous approach of several limits in device technology (feature scaling, frequency, heat dissipation, pin count). The industry has responded with ubiquitous multi-core processors, but scalable concurrency remains elusive for many applications, and it now appears likely that the future will be not only massively parallel, but also massively heterogeneous.

Ten years into the multi-core era, much progress has been made. C and C++ are now explicitly parallel languages, with a rigorous memory model. Parallel programming libraries (OpenMP, TBB, Cilk++, CnC, GCD, TPL/PLINQ) have become mature enough for widespread commercial use. Graphics Processing Units support general-purpose data-parallel programming (in CUDA, OpenCL, and other languages) for a widening range of fields. Transactional memory appears likely to be incorporated into several programming languages. Software support is available in multiple compilers, and hardware support is being marketed by IBM and Intel, among others.
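
The parallel programming libraries listed above share a common style of expressing loop-level parallelism. Purely as an illustration (not material from the seminar), the following minimal OpenMP fragment parallelizes a reduction; TBB, Cilk++, or TPL/PLINQ would express the same computation in a broadly similar way.

    // Minimal OpenMP sketch (compile with -fopenmp); illustrative only.
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n, 2.0);
        double sum = 0.0;

        // The loop iterations are divided among the available cores; the reduction
        // clause combines per-thread partial sums without explicit locking.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];

        std::printf("dot product = %f (max threads: %d)\n", sum, omp_get_max_threads());
    }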

At the same time, core counts are currently lower than had once been predicted, in part because of a perceived lack of demand, and the prospects for increased core count over time appear to be constrained by the specter of dark silicon. Parallel programming remains difficult for most programmers, tool chains for concurrency remain immature and inconsistent, and pedagogical breakthroughs for the first- and second-year curriculum have yet to materialize. Perhaps most troublesome, it seems increasingly likely that future microprocessors will host scores or even hundreds of heterogeneous computational accelerators, both fixed and field-programmable. Programming for such complex chips is an exceptionally daunting prospect.

The goal of this Dagstuhl research seminar was to bring together leading international researchers from both academia and industry working on different aspects of concurrent computing (theory and practice, software and hardware, parallel programming languages, formal models, tools, etc.) in order to:

  • assess the state of the art in concurrency, including formal models, languages, libraries, verification techniques, and tool chains;
  • explore the many potential uses of emerging hardware support for transactional memory and synchronization extensions;
  • envision next-generation hardware mechanisms;
  • consider potential strategies to harness the anticipated explosion in heterogeneity; and
  • investigate the interaction of synchronization and consistency with emerging support for low-latency byte-addressable persistent memory. (This last goal emerged late in the planning process, but became a major topic of discussion.)

Participants came from a wide variety of research communities, which seldom have the opportunity to meet together in one place. The seminar therefore provided a unique opportunity to focus diverse expertise on a common research agenda for concurrent computing on new generations of multi- and many-core systems.

Research Challenges

As part of this seminar, we specifically addressed the following challenges and open research questions, which are the focus of substantial investigation in both academia and industry. These issues were examined in the workshop discussions from the various perspectives of theory, concurrent algorithms, systems software, and microarchitecture.

The Future of Transactional Memory

With the introduction this past year of TM-capable commodity processors from IBM and Intel, TM research is increasingly turning to the question of how best to use the new hardware. What can and cannot be accomplished with the simple interfaces currently available? What might be accomplished with the addition of non-transactional loads and/or stores within transactions? (And how should such stores behave?) What support might be needed for nested transactions or nested parallelism?

Given that machines without TM will exist for many years, and that HTM will remain bounded by constraints on capacity, associativity, etc., how should hardware and software transactions interact? What hardware extensions might facilitate the construction of hybrid systems? Can hardware transactions be used to accelerate STM? Is TM hardware useful for purposes other than TM?
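
One common hybrid pattern attempts a hardware transaction a few times and falls back to a conventional lock when it aborts (for example, for capacity or associativity reasons). The sketch below, using Intel's RTM intrinsics and a simple spinlock, is a deliberately simplified illustration of this design point, not a recommendation from the seminar.

    // Sketch: HTM attempt with a spinlock fallback, using Intel RTM intrinsics
    // (<immintrin.h>, compile with -mrtm on a TSX-capable CPU). Illustrative only.
    #include <immintrin.h>
    #include <atomic>

    std::atomic<bool> fallback_locked{false};
    long shared_counter = 0;

    void lock_fallback() {
        while (fallback_locked.exchange(true, std::memory_order_acquire))
            while (fallback_locked.load(std::memory_order_relaxed)) { /* spin */ }
    }
    void unlock_fallback() { fallback_locked.store(false, std::memory_order_release); }

    void increment() {
        for (int attempt = 0; attempt < 3; ++attempt) {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                // Subscribe to the fallback lock: abort if another thread holds it,
                // so a transaction never runs concurrently with a lock-based critical section.
                if (fallback_locked.load(std::memory_order_relaxed))
                    _xabort(0xff);
                ++shared_counter;          // the speculative critical section
                _xend();
                return;
            }
            // Transaction aborted (conflict, capacity, spurious abort, ...); retry.
        }
        lock_fallback();                   // too many aborts: take the lock instead
        ++shared_counter;
        unlock_fallback();
    }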

Beyond these basic questions, how do we integrate TM into the concurrency tool chain? How does one debug a black-box atomic operation? How should TM be embedded into programming languages? Should speculation be visible to the programmer, or should it be hidden within the implementation? How large can transactions reasonably become? Should they remain primarily a means of building concurrent data structures, or should they expand to encompass larger operations, even system-level functions like I/O, thread/process interactions, and crash recovery? As implementations proliferate, are there reasonable models of correctness that move beyond opacity? How should we benchmark TM code? What performance counters should future TM hardware provide to profilers? What kind of infrastructure is needed to perform regression testing of transactional code?
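
As one illustration of what language-level embedding can look like, GCC's experimental transactional-memory support (enabled with -fgnu-tm, following the draft C++ Transactional Memory specification) lets the programmer mark a block as atomic and leaves the choice of hardware, software, or lock-based execution to the implementation. The fragment below is a hedged sketch, not code discussed at the seminar.

    // Sketch of language-embedded TM using GCC's __transaction_atomic extension
    // (compile with -fgnu-tm). How the block is executed (HTM, STM, or a global
    // lock) is left to the compiler and runtime. Illustrative only.
    struct Account { long balance; };

    void transfer(Account& from, Account& to, long amount) {
        __transaction_atomic {
            from.balance -= amount;   // both updates commit atomically,
            to.balance   += amount;   // or neither becomes visible
        }
    }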

Heterogeneity

GPUs are increasingly regarded as general-purpose computational resources, in platforms ranging from cell phones to supercomputers. Cell phones commonly include additional accelerators as well, for compression/decompression, encryption/decryption, and media transcoding. These and other accelerators (e.g., for linear algebra, pattern matching, XML parsing, or field-programmable functions) are likely to appear across the computing spectrum over the next few years.

In contrast to traditional (e.g., vector or floating-point) functional units, whose operations are uniformly short, and to traditional I/O devices, whose operations are uniformly long, accelerators can be expected to display a very wide range of response times. Long and variable response times suggest the need for resource management, to promote fair use across threads and applications. Short response times suggest the need for direct, user-level access, as already provided by GPU drivers from NVIDIA and (soon) AMD.

The prospect of contention for shared accelerators, accessed directly from user-level code, raises a host of questions for concurrent programming. How do we arbitrate shared access? Can traditional notions of locality be extended to accommodate heterogeneity? What happens to the tradeoff between local and remote computation when the alternatives use different instruction sets? What abstract models of progress/performance/time complexity are appropriate? Can operations that employ shared accelerators ever be considered non-blocking? How should we benchmark code that makes use of accelerators? What performance measures should heterogeneous architectures provide to profilers? What kind of infrastructure is needed to perform regression testing in the face of heterogeneity?
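
To make the arbitration question concrete, the sketch below bounds concurrent user-level submissions to a shared accelerator with a counting semaphore. The accelerator interface (submit_to_accelerator) and the queue count are invented placeholders used purely for illustration; real drivers expose very different APIs.

    // Hypothetical sketch: throttling user-level access to a shared accelerator
    // with a C++20 counting semaphore. submit_to_accelerator() is a placeholder,
    // not a real driver API.
    #include <semaphore>
    #include <future>
    #include <vector>
    #include <cstdio>

    constexpr int kAcceleratorQueues = 4;               // assumed hardware queue limit
    std::counting_semaphore<kAcceleratorQueues> slots{kAcceleratorQueues};

    int submit_to_accelerator(int work_item) {          // stand-in for a long-latency offload
        return work_item * 2;
    }

    int offload(int work_item) {
        slots.acquire();                                // wait for a free queue slot
        int result = submit_to_accelerator(work_item);  // long and variable latency
        slots.release();
        return result;
    }

    int main() {
        std::vector<std::future<int>> results;
        for (int i = 0; i < 16; ++i)                    // 16 requests compete for 4 slots
            results.push_back(std::async(std::launch::async, offload, i));
        long total = 0;
        for (auto& r : results)
            total += r.get();
        std::printf("total = %ld\n", total);
    }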

Persistence

Exceptions like magnetic core and battery-backed RAM notwithstanding, mainstream computing has long maintained a firm separation between fast, volatile working memory and slow, non-volatile (persistent) storage. Emerging low-latency, byte-addressable technologies like phase-change memory, memristors, and spin-transfer torque memory bring this tradition into question. While near-term implementations may simply use low-latency nonvolatile memory as an accelerator for conventional file systems, alternative APIs may prove attractive. Specifically, it seems likely that future systems will give programmers the option of computing directly on persistent state, rather than reading it into working memory, using it there, and writing it out again. This possibility raises variants of many of the issues that have long concerned the concurrency community, consistency and atomicity in particular.

How should pointer-rich, non-file-based data be managed? Will we need automatic garbage collection? What will be the persistent analogues of nonblocking concurrent data structures? How will we ensure linearizability? Composability? A seemingly obvious option would add the 'D' (durability) to transactional memory's ACI (atomicity, consistency, and isolation). With little near-term prospect for integration of persistence and hardware TM, how will we minimize the overheads of persistent STM? What will the tradeoffs look like with respect to lock-based programming models? What will be the division of labor between the operating system, runtime, and compiler? What will be the complexity models? Will we count "persistent accesses" the way we currently count remote memory accesses for concurrent objects in memory?
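
To illustrate why ordering and atomicity become delicate when computing directly on persistent state, the fragment below sketches a durable update of a single record using explicit cache-line flush and fence instructions. The persistent region, the recovery logic, and the two pointers are assumptions made purely for illustration; real persistent-memory libraries layer logging and transactional APIs on top of such primitives.

    // Sketch of a durable "data before flag" update on byte-addressable persistent
    // memory. 'record' and 'record_valid' are assumed to point into a persistent
    // region (e.g., a mapped NVM file). Illustrative only.
    #include <immintrin.h>
    #include <cstdint>

    struct Record { std::uint64_t value; };

    Record* record;        // assumption: lives in persistent memory
    bool*   record_valid;  // assumption: lives in persistent memory

    inline void persist(const void* addr) {
        _mm_clflush(addr);  // write the cache line back toward persistent memory
        _mm_sfence();       // order the flush before subsequent stores
    }

    void durable_update(std::uint64_t new_value) {
        record->value = new_value;
        persist(record);        // the data must reach NVM before the valid flag,
        *record_valid = true;   // otherwise a crash could expose a torn record
        persist(record_valid);
    }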

Pedagogy

Once upon a time, concurrency was a specialized topic in the undergraduate curriculum, generally deferred to the operating systems course, or to an upper-level elective of its own. Now it is an essential part of the training of every computer scientist. Yet there is surprisingly little consensus on where it belongs in the curriculum, and how it ought to be taught. Alternatives range from "concurrency first," to infusion throughout the curriculum, to more extensive coverage in a more limited number of courses.

While the principal focus of the seminar was on research issues, participants had the opportunity to share both intuition and experience in the teaching of concurrency, during a dedicated panel session and as part of informal discussions. The following questions, in particular, were discussed. What works, for which kinds of students? What languages and tool chains should we use? What textbooks do we need? What role (if any) should be played by deterministic parallel languages and constructs? Are there approaches, particularly for introductory students, that can offer parallel speedup for important applications, without the full complexity of the general case? Can these approaches reasonably be "staged" into intro-level courses?

Organization of the Seminar

The seminar lasted five days, each comprising short scientific presentations with ample time for discussion, as well as break-out sessions during which various open questions were discussed in sub-groups. The first day of the seminar started with a general introduction and forward-looking presentations on concurrency and the challenges raised by heterogeneity and virtualization.

Ten technical sessions, with short presentations from the participants, took place during the seminar on:

  • locks and TM;
  • C++ status and standards;
  • memory models;
  • memory management and persistence;
  • performance tuning and verification;
  • distributed concurrency and fault-tolerance;
  • thoughts on concurrency and parallelism;
  • HW and portability;
  • compilers, runtimes, and libraries; and
  • languages and systems.

They were complemented by break-out sessions on "dealing with heterogeneity", the "future of TM", and "persistence", as well as a plenary discussion on "virtualization". Finally, a panel discussion was organized on the topic of "teaching concurrency". The seminar concluded with an open discussion on the future of concurrency and the challenges that will need to be addressed in coming years.

The topics of the sessions and their diversity illustrate the complexity of the challenges raised by concurrent computing on multi- and many-core systems. As one might expect from such a prospective seminar, the discussions raised almost as many new questions as they provided answers to the research challenges addressed. Indeed, while there have been significant advances since the previous seminars (08241 and 12161), notably in terms of hardware support, few of the outstanding problems have been completely solved and new ones have emerged. For instance, hardware support for TM is now available in consumer CPUs, but it cannot be used straightforwardly in real applications without relying on hybrid software/hardware strategies, notably to deal with the lack of progress guarantees and the possibility of spurious aborts.

As detailed in the rest of this report, the seminar allowed the community to make significant progress on a number of important questions pertaining to concurrent computing, while at the same time defining a research agenda for the next few years. Participants provided very positive feedback following the seminar and expressed strong interest in follow-up events. The organizers strongly support the continuation of this series of seminars on concurrent computing, one of the most important and challenging fields in the era of multi- and many-core systems.

Copyright Pascal Felber, J. Eliot B. Moss, Michael Philippsen, and Michael Scott

Participants
  • José Nelson Amaral (University of Alberta - Edmonton, CA) [dblp]
  • Hagit Attiya (Technion - Haifa, IL) [dblp]
  • David F. Bacon (Google - New York, US) [dblp]
  • Annette Bieniusa (TU Kaiserslautern, DE) [dblp]
  • Hans-J. Boehm (Google - Palo Alto, US) [dblp]
  • Daniele Bonetta (Oracle Labs - Linz, AT) [dblp]
  • Sebastian Burckhardt (Microsoft Corporation - Redmond, US) [dblp]
  • Irina Calciu (Brown University - Providence, US) [dblp]
  • Dave Dice (Oracle Corporation - Burlington, US) [dblp]
  • Stephan Diestelhorst (ARM Ltd. - Cambridge, GB) [dblp]
  • Sandhya Dwarkadas (University of Rochester, US) [dblp]
  • Pascal Felber (Université de Neuchâtel, CH) [dblp]
  • Christof Fetzer (TU Dresden, DE) [dblp]
  • Maurice Herlihy (Brown University - Providence, US) [dblp]
  • Antony Hosking (Purdue University - West Lafayette, US) [dblp]
  • Milind Kulkarni (Purdue University - West Lafayette, US) [dblp]
  • Viktor Leis (TU München, DE) [dblp]
  • Charles E. Leiserson (MIT - Cambridge, US) [dblp]
  • Yossi Lev (Oracle Labs, US) [dblp]
  • Alexander Matveev (MIT - Cambridge, US) [dblp]
  • Maged M. Michael (IBM TJ Watson Research Center - Yorktown Heights, US) [dblp]
  • J. Eliot B. Moss (University of Massachusetts - Amherst, US) [dblp]
  • Erez Petrank (Technion - Haifa, IL) [dblp]
  • Michael Philippsen (Universität Erlangen-Nürnberg, DE) [dblp]
  • Torvald Riegel (Red Hat GmbH - Grasbrunn, DE) [dblp]
  • Paolo Romano (INESC-ID - Lisboa, PT) [dblp]
  • Sven-Bodo Scholz (Heriot-Watt University Edinburgh, GB) [dblp]
  • Michael Scott (University of Rochester, US) [dblp]
  • Nir Shavit (MIT - Cambridge, US) [dblp]
  • Michael Swift (University of Wisconsin - Madison, US) [dblp]
  • Gael Thomas (Télécom & Management SudParis - Evry, FR) [dblp]
  • Osman Ünsal (Barcelona Supercomputing Center, ES) [dblp]
  • Martin Vechev (ETH Zürich, CH) [dblp]
  • Jons-Tobias Wamhoff (TU Dresden, DE) [dblp]

Related Seminars
  • Dagstuhl Seminar 08241: Transactional Memory: From Implementation to Application (2008-06-08 - 2008-06-13) (Details)
  • Dagstuhl Seminar 12161: Abstractions for scalable multi-core computing (2012-04-15 - 2012-04-20) (Details)
  • Dagstuhl Seminar 17451: New Challenges in Parallelism (2017-11-05 - 2017-11-10) (Details)

Classification
  • data structures / algorithms / complexity
  • hardware
  • programming languages / compiler

Keywords
  • Multi-/many-core processors
  • Concurrent Programming
  • Synchronization
  • Transactional Memory
  • Programming Languages
  • Compilation