https://www.dagstuhl.de/15191

### May 3–8, 2015, Dagstuhl Seminar 15191

# Compositional Verification Methods for Next-Generation Concurrency

## Organizers

Lars Birkedal (Aarhus University, DK)

Derek Dreyer (MPI-SWS – Saarbrücken, DE)

Philippa Gardner (Imperial College London, GB)

Zhong Shao (Yale University, US)

## Documents

Dagstuhl Report, Volume 5, Issue 5


## Summary

One of the major open problems confronting software developers today
is how to cope with the complexity of reasoning about large-scale
concurrent programs. Such programs are increasingly important as a
means of taking advantage of parallelism in modern architectures.
However, they also frequently depend on subtle invariants governing
the use of shared mutable data structures, which must take into
account the potential interference between different threads accessing
the state simultaneously. Just figuring out how to express such
invariants at all has proven to be a very challenging problem; even
more challenging is how to support *local* reasoning about such
invariants, i.e., confining the reasoning about them to only the
components of the program that absolutely need to know about them.
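
To make the problem concrete, here is a minimal Python sketch (names and numbers hypothetical, not from the report) of the kind of invariant at issue: two shared balances whose sum must stay constant even as multiple threads transfer between them. The lock confines interference, so each transfer can be reasoned about locally; the invariant may be broken only while the lock is held and is re-established before each release.

```python
import threading

TOTAL = 100                       # invariant: balance["a"] + balance["b"] == TOTAL
balance = {"a": 60, "b": 40}
lock = threading.Lock()

def transfer(src, dst, amount):
    # The invariant is violated between the two updates, but only
    # while the lock is held; at every release it holds again.
    with lock:
        balance[src] -= amount
        balance[dst] += amount    # invariant re-established

def worker():
    for _ in range(1000):
        transfer("a", "b", 1)
        transfer("b", "a", 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The invariant survives arbitrary interleavings of the workers.
assert balance["a"] + balance["b"] == TOTAL
```

Only `transfer` needs to know about the invariant; code that never touches the balances can ignore it entirely, which is the essence of local reasoning.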

Fortunately, we are now at a point where verification research has
produced the critical foundations needed to tackle this problem:
namely, *compositional* methods, which exploit the inherently
modular structure of realistic concurrent programs in order to
decompose verification effort along module boundaries. Fascinatingly,
a variety of different but related compositional methods
have been developed contemporaneously in the last several years:

**Separation logics:** Separation logic was developed initially as a generalization of Hoare logic – supporting local, compositional reasoning about sequential, heap-manipulating programs – and much of the early work on separation logic has been successfully incorporated into automated verification tools like Smallfoot [2], SLAyer [3], and Abductor [6], scaling to handle millions of lines of code. Recently, there has been a series of breakthroughs in adapting separation logic to handle concurrent programs as well. Concurrent separation logic [17] provides coarse-grained local reasoning about concurrent programs; combining this local reasoning with rely-guarantee reasoning [26] provides fine-grained concurrent reasoning; intertwining abstraction with local reasoning enables a client to reason about the use of a set module [8] without having to think about the underlying implementation using lists or concurrent B-trees; and, very recently, all this has been extended to account for higher-order programs as well [21].

**Kripke models:** There is a long line of work on the use of semantic models like Kripke logical relations [1, 9] (and more recently bisimulations [19, 20]) for proving observational equivalence of programs that manipulate local state. Observational equivalence is useful not only for establishing the correctness of program transformations (e.g., in compiler certification) but also as a verification method in its own right (e.g., one can prove that a complex but efficient implementation of an ADT is equivalent to a simple but inefficient reference implementation). However, it is only in the last few years that such models have been generalized to account for the full panoply of features available in modern languages: higher-order state, recursion, abstract types, control operators, and most recently concurrency, resulting in some of the first formal proofs of correctness of sophisticated fine-grained concurrent algorithms in a higher-order setting [1, 9, 23]. These advances have come about thanks to the development of more elaborate Kripke structures for representing invariants on local state.

**Hoare type theory:** Dependent type theory provides a very expressive compositional verification system for higher-order functional programs – so expressive that types can characterize full functional correctness. Traditionally, however, dependent type theories were limited to the verification of pure programs. Recent work on Hoare type theory (HTT) [15] has shown how to integrate effects into dependent type theory by incorporating Hoare triples as a new primitive type, and prototypes of HTT have been implemented in Coq [7, 16], allowing imperative programs to be verified mechanically as they are being written. Moreover, first steps toward extending HTT with concurrency have recently been taken [14], giving hope for a future integration of design and verification for higher-order concurrent programs.
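
The style of local reasoning that concurrent separation logic [17] provides is captured by its rule for parallel composition: two threads operating on disjoint resources can be verified independently, with the separating conjunction combining their footprints. Stated in the standard form (omitting the usual side conditions on variable modification):

```latex
% Parallel composition rule of concurrent separation logic (O'Hearn [17]).
% P1 * P2 asserts that P1 and P2 hold of disjoint portions of the heap,
% so neither thread can interfere with the other's resources.
\[
\frac{\{P_1\}\; C_1\; \{Q_1\} \qquad \{P_2\}\; C_2\; \{Q_2\}}
     {\{P_1 \ast P_2\}\; C_1 \parallel C_2\; \{Q_1 \ast Q_2\}}
\]
```

Each thread is verified against only its own portion of the state, which is precisely the kind of decomposition along module boundaries that compositional methods aim for.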

All in all, the field of modular concurrency verification is highly active, with groundbreaking new developments in these and other approaches coming out every year. Particularly fascinating is the appearance of deep connections between the different methods. There are striking similarities, for instance, between the advanced Kripke structures used in recent relational models of higher-order state and the semantic models underlying recent concurrent separation logics.

Nevertheless, there are a number of ways in which the advanced models and logics developed thus far are still, to be honest, in their infancy. Most of these approaches, for example, have only been applied to the verification of small, self-contained ADTs and have not yet been scaled up to verify large-scale modular concurrent programs. Moreover, even the most state-of-the-art compositional methods do not yet account for a number of the essential complexities of concurrent programming as it is practiced today, including:

**Weak memory models:** The vast majority of state-of-the-art compositional verification methods are proved sound with respect to an operational semantics that assumes a sequentially consistent memory model. However, modern hardware implements weak memory models that allow many more reorderings of basic operations. There is thus a clear gap between verification theory and practice that needs to be filled (for efficiency reasons, we of course do not want to force programmers/compilers to insert enough memory fence operations to make the hardware behave in a sequentially consistent manner). This problem has been known for the last decade, but it is only in the last year or two that formal descriptions of the behavior of programming languages with weak memory models have been developed. Given this foundation, we should now be able to make progress on extending compositional verification methods to weak memory models.

**Higher-order concurrency:** Higher-order functional abstraction is an indispensable feature of most modern, high-level programming languages. It is also central to a variety of concurrent programming idioms, both established and nascent: work stealing [4], Concurrent ML-style events [18], concurrent iterators [13], parallel evaluation strategies [22], STM [11], reagents [24], and more. Yet only a few existing logics even attempt to account for higher-order concurrency [21, 14, 12], and these logics are just first steps – for example, they do not presently account for sophisticated “fine-grained” concurrent ADTs. Verification of higher-order concurrent programs remains a largely open problem.

**Generalizing linearizability:** Sophisticated concurrent data structures often use fine-grained synchronization to maximize the possibilities for parallel access. The classical correctness criterion for such fine-grained data structures is linearizability, which ensures that every operation has a linearization point at which it appears (to clients) to take effect atomically. However, existing logics do not provide a way to exploit linearizability directly in client-side reasoning, and moreover the notion does not scale naturally to account for operations (such as higher-order iterators) whose behavior is not semantically atomic. Recently, researchers have started to investigate alternative approaches based on contextual refinement [10, 23], and methods for reasoning about operations with multiple linearization points are also being developed.

**Liveness properties:** Synchronization of concurrent data structures can also affect the progress of the execution of client threads. Various progress properties have been proposed for concurrent objects. The most important are wait-freedom, lock-freedom, and obstruction-freedom for non-blocking implementations, and starvation-freedom and deadlock-freedom for lock-based implementations. These properties describe conditions under which method calls are guaranteed to complete successfully in an execution. Traditional definitions of these progress properties, which are quite informal, are difficult to use in modular program verification because they fail to describe how the progress properties affect clients. It is also unclear how existing separation logics, which were primarily designed for proving partial correctness, can be adapted to prove progress properties. Recently, researchers have started to combine quantitative reasoning about resource bounds with separation logics, which offers new possibilities for verifying both safety and liveness properties in a single framework.
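
The idea of a linearization point can be illustrated with a minimal Python sketch (the class and counts here are hypothetical, chosen only for illustration): a fetch-and-increment counter whose atomic update under a lock is the point at which each operation appears to take effect. Because every concurrent execution returns the same multiset of values as some sequential one, the object is linearizable with respect to its sequential counter specification.

```python
import threading

class AtomicCounter:
    """Counter whose fetch_and_increment is linearizable: its
    linearization point is the state change in the critical section."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_increment(self):
        with self._lock:          # linearization point lies in here
            old = self._value
            self._value += 1
            return old

counter = AtomicCounter()
results = []
results_lock = threading.Lock()

def worker():
    for _ in range(1000):
        v = counter.fetch_and_increment()
        with results_lock:
            results.append(v)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each of the 4000 operations returned a distinct value, exactly as in
# some sequential execution of 4000 increments.
assert sorted(results) == list(range(4000))
```

Note what this buys the client: it may reason as if the increments happened one at a time in some order, without knowing anything about the locking inside. The limitations discussed above arise for operations (e.g., higher-order iterators) that have no such single atomic point.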

Grappling with these kinds of limitations is essential if our verification technology is to be relevant to real-world programs running on modern architectures, and as such it poses exciting new research questions that we as a community are just beginning to explore.

In this seminar, we brought together a wide variety of researchers on concurrency verification, as well as leading experts on concurrent software development in both high- and low-level languages. The goal was to facilitate a stimulating interchange between the theory and practice of concurrent programming, and thereby foster the development of compositional verification methods that can scale to handle the realities of next-generation concurrency.

Among the concrete research challenges investigated in depth during the seminar are the following:

- What are good ways of reasoning about weak memory models? It should be possible to reason about low-level programs that exploit weak memory models (e.g., locks used inside operating systems) but also to reason at higher levels of abstraction for programs that use sufficient locking.
- What is the best way to define a language-level memory model that is nevertheless efficiently implementable on modern hardware? C11 is the state of the art, but it is flawed in various ways, and we heard about a number of different ways of possibly fixing it.
- What is the best way to mechanize full formal verification of concurrent programs using interactive proof assistants such as Coq?
- How can we adapt existing and develop new compositional techniques for reasoning about liveness properties of concurrent programs? Can we apply quantitative techniques to reduce the proof of a liveness property to the proof of a stronger safety property? Also, recent work on rely-guarantee-based simulation can prove linearizability of a sophisticated concurrent object by showing the concurrent implementation is a contextual refinement of its sequential specification. We would hope that similar techniques can be used to prove progress properties as well.
- Only recently have researchers begun to propose logics and models for higher-order concurrency [23, 21]. What are the right concurrency abstractions for higher-order concurrent programming idioms as diverse as transactional memory [11], Concurrent ML [18], joins [25], and reagents [24], among others? What is the best way to even specify, let alone verify, programs written in these idioms, and are there unifying principles that would apply to multiple different idioms?
- Most verification work so far has focused on shared-memory concurrency, with little attention paid to message-passing concurrency (except for some recent work on verifying the C# joins library). Can the models and logics developed for the former be carried over usefully to the latter, and what is the connection (if any) with recent work on proof-theoretic accounts of session types [5]? Can session types help to simplify reasoning about some classes of concurrent programs, e.g., those that only involve some forms of message passing and not full shared memory?
- A number of recent Kripke models and separation logics have employed protocols of various forms to describe the invariants about how the semantic state of a concurrent ADT can evolve over time. But different approaches model protocols differently, e.g., using enriched forms of state transition systems vs. partial commutative monoids. Is there a canonical way of representing these protocols formally and thus better understanding the relationship between different proof methods?
- There seem to be tradeoffs between approaches to concurrency verification based on Hoare logic vs. refinement (unary vs. relational reasoning), with the former admitting a wider variety of formal specifications but the latter offering better support for reasoning about atomicity. Consequently, a number of researchers are actively working on trying to combine both styles of reasoning in a unified framework. What is the best way to do this?
- To what extent do we need linearizability to facilitate client-side reasoning? Is it possible in many cases for clients to rely on a much weaker specification? And what ways are there to formalize looser notions, e.g., where there are multiple linearization points?
- Now that we are finally developing logics and models capable of verifying realistic concurrent algorithms, can we abstract away useful proof patterns and automate them? What is needed in order to integrate support for concurrent invariants into automated verification tools like SLAyer and Abductor?
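
The message-passing question above can be made concrete with a minimal Python sketch (the protocol and names are hypothetical, invented for illustration): a two-party exchange over channels whose discipline – request, then reply, then close – is exactly the kind of communication protocol that session types make precise and checkable.

```python
import threading
import queue

# Two directed channels modeling a bidirectional session.
requests = queue.Queue()
replies = queue.Queue()

def server():
    """Follows the session: receive a request, send a reply, repeat;
    a None request signals the close of the session."""
    while True:
        msg = requests.get()
        if msg is None:
            break                 # session closed by client
        replies.put(msg * 2)      # reply step of the protocol

def client(n):
    """Dual role: send a request, await the reply, finally close."""
    out = []
    for i in range(n):
        requests.put(i)           # request step
        out.append(replies.get()) # reply step
    requests.put(None)            # close step
    return out

t = threading.Thread(target=server)
t.start()
result = client(5)
t.join()
assert result == [0, 2, 4, 6, 8]
```

Because neither side ever touches shared mutable state directly – all interaction flows through the channels – reasoning reduces to checking that each party follows its end of the protocol, which is the simplification session types promise for this class of programs.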

These challenges were explored in depth through participants' talks and discussions; see the list of talk abstracts below.

### References

- Amal Ahmed, Derek Dreyer, and Andreas Rossberg. State-dependent representation independence. In *POPL*, 2009.
- Josh Berdine, Cristiano Calcagno, and Peter W. O’Hearn. Smallfoot: Modular automatic assertion checking with separation logic. In *FMCO*, 2005.
- Josh Berdine, Byron Cook, and Samin Ishtiaq. SLAyer: Memory safety for systems-level code. In *CAV*, 2011.
- Robert D. Blumofe, Christopher F. Joerg, Bradley C. Kuszmaul, Charles E. Leiserson, Keith H. Randall, and Yuli Zhou. Cilk: An efficient multithreaded runtime system. *JPDC*, 37(1):55–69, August 1996.
- Luís Caires and Frank Pfenning. Session types as intuitionistic linear propositions. In *CONCUR*, 2010.
- Cristiano Calcagno, Dino Distefano, Peter O’Hearn, and Hongseok Yang. Compositional shape analysis by means of bi-abduction. *Journal of the ACM*, 58(6), 2011.
- Adam Chlipala, Gregory Malecha, Greg Morrisett, Avraham Shinnar, and Ryan Wisnesky. Effective interactive proofs for higher-order imperative programs. In *ICFP*, 2009.
- Thomas Dinsdale-Young, Mike Dodds, Philippa Gardner, Matthew Parkinson, and Viktor Vafeiadis. Concurrent abstract predicates. In *ECOOP*, June 2010.
- Derek Dreyer, Georg Neis, and Lars Birkedal. The impact of higher-order state and control effects on local relational reasoning. *Journal of Functional Programming*, 22(4&5):477–528, August 2012.
- I. Filipovic, P. W. O’Hearn, N. Rinetzky, and H. Yang. Abstraction for concurrent objects. In *ESOP*, 2009.
- Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy. Composable memory transactions. In *PPoPP*, 2005.
- Aquinas Hobor, Andrew W. Appel, and Francesco Zappa Nardelli. Oracle semantics for concurrent separation logic. In *ESOP*, 2008.
- Doug Lea. The java.util.concurrent ConcurrentHashMap.
- Ruy Ley-Wild and Aleksandar Nanevski. Subjective auxiliary state for coarse-grained concurrency. In *POPL*, 2013.
- Aleksandar Nanevski, Greg Morrisett, and Lars Birkedal. Hoare type theory, polymorphism and separation. *JFP*, 18(5-6):865–911, 2008.
- Aleksandar Nanevski, Viktor Vafeiadis, and Josh Berdine. Structuring the verification of heap-manipulating programs. In *POPL*, 2010.
- Peter W. O’Hearn. Resources, concurrency and local reasoning. *Theoretical Computer Science*, 375(1–3):271–307, May 2007.
- John H. Reppy. *Higher-order concurrency*. PhD thesis, Cornell University, 1992.
- Davide Sangiorgi, Naoki Kobayashi, and Eijiro Sumii. Environmental bisimulations for higher-order languages. In *LICS*, 2007.
- Eijiro Sumii. A complete characterization of observational equivalence in polymorphic lambda-calculus with general references. In *CSL*, 2009.
- Kasper Svendsen, Lars Birkedal, and Matthew Parkinson. Modular reasoning about separation of concurrent data structures. In *ESOP*, 2013.
- P. W. Trinder, K. Hammond, H.-W. Loidl, and S. L. Peyton Jones. Algorithm + strategy = parallelism. *JFP*, 8(1):23–60, January 1998.
- Aaron Turon, Jacob Thamsborg, Amal Ahmed, Lars Birkedal, and Derek Dreyer. Logical relations for fine-grained concurrency. In *POPL*, 2013.
- Aaron Turon. Reagents: expressing and composing fine-grained concurrency. In *PLDI*, 2012.
- Aaron Turon and Claudio Russo. Scalable join patterns. In *OOPSLA*, 2011.
- Viktor Vafeiadis. *Modular fine-grained concurrency verification*. PhD thesis, University of Cambridge, 2008.

**Summary text license**

Creative Commons BY 3.0 Unported license

Lars Birkedal, Derek Dreyer, Philippa Gardner, and Zhong Shao

## Classification

- Programming Languages / Compiler
- Semantics / Formal Methods
- Verification / Logic

## Keywords

- Concurrency
- Compositional verification
- Mutable state
- Program logics
- Semantics
- Type systems
- Weak memory models
- Higher-order programming