04.01.15 - 09.01.15, Seminar 15021

Concurrent Computing in the Many-core Era

This seminar description was published on our web pages before the seminar and was used in the invitation to the seminar.

Motivation

Thirty years of improvement in the computational power of CMOS uniprocessors came to an end around 2004, with the near-simultaneous approach of several limits in device technology (feature scaling, frequency, heat dissipation, pin count). The industry has responded with ubiquitous multicore processors, but scalable concurrency remains elusive for many applications, and it now appears likely that the future will be not only massively parallel, but also massively heterogeneous.

Ten years into the multicore era, much progress has been made. C and C++ are now explicitly parallel languages, with a rigorous memory model. Parallel programming libraries (OpenMP, TBB, Cilk++, CnC, GCD, TPL/PLINQ) have become mature enough for widespread commercial use. Graphics Processing Units support general-purpose data-parallel programming (in CUDA, OpenCL, and other languages) for a widening range of fields. Transactional memory appears likely to be incorporated into several programming languages. Software support is available in multiple compilers, and hardware support is being marketed by IBM and Intel, among others.
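To give a concrete sense of what "explicitly parallel with a rigorous memory model" means in practice, the following is a minimal C++11 sketch (assuming a C++11-capable compiler, e.g. g++ or clang++ with -std=c++11 -pthread; the producer/consumer names are purely illustrative). It publishes a value from one thread to another using std::thread and std::atomic with release/acquire ordering, one of the orderings defined by the C++11 memory model.

    // Minimal C++11 sketch: two threads communicate through atomics with
    // explicitly specified memory ordering (release/acquire).
    #include <atomic>
    #include <thread>
    #include <iostream>

    std::atomic<int>  payload{0};
    std::atomic<bool> ready{false};

    void producer() {
        payload.store(42, std::memory_order_relaxed);   // write the data
        ready.store(true, std::memory_order_release);   // publish it
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire))  // wait for publication
            ;                                           // spin (fine for a sketch)
        // The acquire load synchronizes with the release store, so the payload
        // written before the release is guaranteed to be visible here.
        std::cout << payload.load(std::memory_order_relaxed) << std::endl;
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }

Such orderings, and the data-race-freedom guarantees built on top of them, are exactly the kind of language-level support that was still missing a decade ago.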

At the same time, core counts are currently lower than had once been predicted, in part because of a perceived lack of demand, and the prospects for increased core count over time appear to be constrained by the specter of dark silicon. Parallel programming remains difficult for most programmers, tool chains for concurrency remain immature and inconsistent, and pedagogical breakthroughs for the first- and second-year curriculum have yet to materialize. Perhaps most troublesome, it seems increasingly likely that future microprocessors will host scores or even hundreds of heterogeneous computational accelerators, both fixed and field-programmable. Programming for such complex chips is an exceptionally daunting prospect.

The goal of this Dagstuhl seminar is to bring together leading international researchers from both academia and industry working on different aspects of concurrent computing (theory and practice, software and hardware, parallel programming languages, formal models, tools, etc.) in order to:

  • assess the state of the art in concurrency, including formal models, languages, libraries, verification techniques, and tool chains;
  • explore the many potential uses of emerging hardware support for transactional memory and synchronization extensions;
  • envision next-generation hardware mechanisms; and
  • consider potential strategies to harness the anticipated explosion in heterogeneity.

The suggested participants are drawn from a wide variety of research communities. They regularly publish at more than a dozen different conferences and seldom have the opportunity to meet together in one place. The seminar will provide a unique forum for researchers with different backgrounds to work together on a common research agenda for concurrent computing on new generations of multi- and many-core systems.