October 19–24, 2003, Dagstuhl Seminar 03431

Hardware and Software Consistency Models: Programmability and Performance


Jens Knoop (TU Wien, AT)
Jaejin Lee (Seoul National University, KR)
Samuel P. Midkiff (Purdue University – West Lafayette, US)
David Padua (University of Illinois – Urbana-Champaign, US)

Information about this Dagstuhl Seminar is provided by the

Dagstuhl Service Team




Hardware consistency models define the order in which events occurring on one processor or memory subsystem appear to occur to other processors or memory subsystems. We use the term memory model for the equivalent software concept: a memory model, defined as part of the semantics of a programming language or other piece of system software, mandates the order in which memory references made by one thread of a program must appear to occur to the other threads of that program. Until recently, these issues were largely the province of specialists: designers of memory subsystems and processor cache protocols, implementors of operating systems, and database architects. The design of consistency and memory models was therefore skewed towards providing high performance at the expense of usability or programmability. There are at least two contributing factors. First, processors were expensive, and never quite fast enough, so performance had to be maximized. Second, multithreaded programming was used almost exclusively in the design of widely used components such as database systems and operating systems, so very labor-intensive approaches to programming against these consistency models were acceptable. Most ordinary programmers never had to deal with memory consistency issues.
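As an illustration of how a relaxed consistency model can make program order and observed order diverge (a minimal sketch in Java, not taken from the seminar materials), consider the classic "store buffering" litmus test: each thread writes one shared variable and then reads the other. Under sequential consistency the outcome (r1, r2) = (0, 0) is impossible, but most hardware consistency models, and the Java memory model for plain fields, permit it, because a store may be buffered past the following load.

```java
import java.util.HashSet;
import java.util.Set;

// Store-buffering litmus test. Thread 1: x = 1; r1 = y.
// Thread 2: y = 1; r2 = x. Sequential consistency forbids
// (r1, r2) = (0, 0); relaxed models allow all four outcomes.
public class StoreBuffering {
    static int x, y;                            // plain (non-volatile) shared fields
    static int r1, r2;                          // per-iteration observations
    static Set<String> seen = new HashSet<>();  // distinct outcomes observed

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 1_000; i++) {
            x = 0; y = 0;
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join();  t2.join();   // join() orders the reads of r1/r2 below
            seen.add(r1 + "," + r2);
        }
        // All four outcomes are legal under a relaxed model; whether
        // "0,0" actually shows up depends on the hardware and the JIT.
        Set<String> legal = Set.of("0,0", "0,1", "1,0", "1,1");
        if (!legal.containsAll(seen)) throw new AssertionError(seen);
        System.out.println("observed outcomes: " + seen);
    }
}
```

The harness cannot force the relaxed outcome to appear; it only records which outcomes a particular machine and JVM happen to exhibit, which is exactly the portability problem the abstract describes.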

The widespread availability of explicitly parallel programming targeting shared-memory systems has changed this equation. In particular, Java, OpenMP, C#, P-Threads, and distributed shared memory systems have forced programmers to be aware of the underlying semantics of the memory model. In all of these systems, an improper understanding of the underlying model can lead to poor performance, incorrect programs, and a lack of portability. Thus knowledge that was formerly required of a relatively small number of specialists is now required of large numbers of programmers, in fact of the typical programmer. Because the systems written by these typical programmers are not as widely disseminated as the systems written by the specialists, the cost of coping with the vagaries of consistency models is relatively much higher. Moreover, as the complexity of operating systems and middleware grows, the complexity of hardware consistency models and software memory models leads to subtle errors in the code, degrading software reliability.
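To show what "awareness of the memory model" means in one of the named systems (a minimal Java sketch under our own choice of example, not from the seminar materials): publishing data through a `volatile` flag creates the happens-before edge that makes the reader's view well defined. Without `volatile`, the Java memory model would allow the reader to spin forever or to see the flag set while the data is still stale.

```java
// Safe publication via a volatile flag: the writer's plain store to
// "data" happens-before the reader's load, because the volatile write
// to "flag" synchronizes-with the volatile read that observes it.
public class MessagePassing {
    static int data = 0;                   // plain field, published via the flag
    static volatile boolean flag = false;  // the synchronization point

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;    // (1) plain write
            flag = true;  // (2) volatile write publishes (1)
        });
        Thread reader = new Thread(() -> {
            while (!flag) { }  // (3) volatile read; exits once (2) is visible
            // Happens-before chain (1) -> (2) -> (3) guarantees data == 42 here.
            if (data != 42) throw new AssertionError("saw stale data: " + data);
        });
        writer.start(); reader.start();
        writer.join();  reader.join();
        System.out.println("reader observed data = " + data);
    }
}
```

Deleting the single keyword `volatile` makes the program incorrect without changing a line of its logic, which is precisely why such errors are subtle and hurt reliability.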

These changes in the tradeoffs between programmability and performance in memory models have sparked renewed research into how to design both consistency and memory models. Topics of intense interest include:

  • What are the trends in hardware and software consistency models?
  • What is the performance loss associated with moving towards simpler consistency and memory models? How much loss is acceptable?
  • How can hardware consistency models be made simpler for programmers with acceptable losses in performance?
  • What compiler techniques can be used to mask the complexity of hardware consistency models, or mask the performance costs of simpler hardware consistency models?
  • How can memory models be designed to allow programmers to more easily write correct programs? What are the costs of doing this in terms of missed compiler optimization opportunities and additional synchronization overhead in modern out-of-order processors?
  • Can compile-time analyses and optimizations mitigate some of these costs, and if so how?
  • Are heuristic approximations to expensive compile-time analyses sufficient?
  • What idioms and software engineering tools can be used to increase programmability in the face of complex memory models?

We have two large goals for the seminar. First, we would like to foster discussions about the usability and performance requirements of consistency models in the different areas where these are important issues (architecture and hardware, databases, and programming languages) and to give knowledgeable members of these fields the opportunity to learn from the experiences of their colleagues in other fields. From these discussions, we hope to come to a better understanding of the tradeoffs and possibilities that can be exploited by researchers and practitioners in each of these areas, and to come up with important research questions that will yield broadly applicable results. Because Dagstuhl's schedule allows for a mix of unstructured discussion in a congenial environment and more formal presentations, we see it as an ideal setting for bringing together members of these different communities to tackle these difficult issues.


Books by the participants

Book exhibition on the ground floor of the library

(during the seminar week only).


All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the seminar's Collector, the organizers compile a report that summarizes the authors' contributions and adds an executive summary.




There is also the option of publishing a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.

Dagstuhl's Impact

Please inform us if a publication arises from your seminar. Such publications are listed separately in the Dagstuhl's Impact section and displayed on the ground floor of the library.