Dagstuhl Seminar 17502

Testing and Verification of Compilers

(Dec 10 – Dec 13, 2017)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/17502

Organizers
  • Junjie Chen (Peking University, CN)
  • Alastair F. Donaldson (Imperial College London, GB)
  • Andreas Zeller (Universität des Saarlandes, DE)
  • Hongyu Zhang (University of Newcastle, AU)

Motivation

Compilers underpin virtually all software development. A buggy compiler can introduce unintended behaviour into the programs it compiles, which may in turn cause software failures or even disasters in safety-critical domains. Furthermore, compiler defects make software hard to debug, because developers can rarely tell whether a failure is caused by the software they are developing or by the compiler they are using. Guaranteeing the quality of compilers is therefore critical.

Compiler testing plays a critical role in compiler quality assurance. Three inherent factors influence it: the test inputs given to the compiler, the test oracles used to judge its output, and the efficiency of the testing process; research on compiler testing has focused on these three factors. Compiler verification is a complementary way to guarantee the quality of compilers, and there has been tremendous progress on building verified compilers from scratch (a notable example is the CompCert project).

The aim of this Dagstuhl seminar is to bring together researchers and practitioners from various sub-disciplines of computer science to brainstorm and discuss the testing and verification of compilers. In particular, we propose the following five research questions to structure the seminar and drive discussion:

RQ1: To what extent do compiler defects pose a security and reliability threat? In principle, compiler bugs can clearly harm software reliability and pose a threat to cyber-security. However, there is a lack of empirical evidence and of investigative techniques for assessing the extent to which this is a problem in practice.

RQ2: How can we prioritise the compiler defects detected by automated testing to optimise for developer time? Investigating and fixing bug reports is time-consuming for compiler developers, so research is required both to provide concise reports (so that issues are easy to investigate) and to prioritise the reporting of serious issues (minimising the time spent understanding issues that are ultimately deemed to have very low priority).
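
The call for concise reports in RQ2 is closely related to automated test-case reduction, as performed by delta debugging and tools such as C-Reduce. The following is a minimal, greedy sketch in that spirit; the still_fails predicate and the toy "trigger" lines are assumptions for illustration only, whereas a real reducer would re-run the compiler on each candidate and check that the original failure still occurs.

    """Minimal sketch of greedy test-case reduction, in the spirit of delta
    debugging (ddmin) and C-Reduce.  The still_fails predicate below is a toy
    stand-in: a real reducer would re-run the compiler on the candidate and
    check that the original failure (crash, wrong code, ...) still occurs."""


    def reduce_lines(lines, still_fails):
        """Remove ever-smaller chunks of lines while the failure persists."""
        chunk = max(1, len(lines) // 2)
        while chunk >= 1:
            i = 0
            while i < len(lines):
                candidate = lines[:i] + lines[i + chunk:]
                if still_fails(candidate):
                    lines = candidate        # removal kept the failure: commit it
                else:
                    i += chunk               # removal lost the failure: skip this chunk
            chunk //= 2
        return lines


    if __name__ == "__main__":
        # Toy example: pretend the compiler only fails when both "trigger"
        # lines are still present in the test case.
        program = [f"line {n}" for n in range(20)] + ["trigger A", "trigger B"]

        def still_fails(candidate):
            return "trigger A" in candidate and "trigger B" in candidate

        print(reduce_lines(program, still_fails))   # -> ['trigger A', 'trigger B']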

RQ3: How do the myriad techniques for compiler testing compare in their efficiency and efficacy? Random differential testing, equivalence modulo inputs testing, bounded-exhaustive program generation and symbolic testing have all been applied successfully to reveal defects in compilers, and the program-generation strategies that fuel these techniques range from creating purely synthetic programs to assembling programs from existing code fragments. Comparing these techniques in terms of the rate of detected defects, the seriousness of those defects, and their complementarity is an important open problem that the seminar will address.
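
To make this concrete, the following is a minimal sketch of random differential testing in Python. It assumes that gcc and clang are installed and on the PATH; the toy program generator stands in for full-blown generators such as Csmith and only emits tiny, deterministic, UB-free programs, so it is unlikely to find real bugs, but it shows where the test inputs, the oracle, and the iteration budget enter the loop.

    """Minimal sketch of random differential compiler testing.

    Assumes gcc and clang are on the PATH.  The toy generator below stands in
    for real program generators such as Csmith: it only emits tiny,
    deterministic, UB-free programs, so it is unlikely to find real bugs."""

    import pathlib
    import random
    import subprocess
    import tempfile

    # Operators that are well-defined on unsigned ints (no overflow UB).
    OPS = ["+", "-", "*", "^", "|", "&"]


    def random_expr(depth=4):
        """Random arithmetic expression over small unsigned constants."""
        if depth == 0:
            return f"{random.randint(0, 0xFFFF)}u"
        op = random.choice(OPS)
        return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"


    def random_program():
        """The *test input*: a small, deterministic C program."""
        body = "\n".join(
            f'    printf("%u\\n", (unsigned)({random_expr()}));' for _ in range(5)
        )
        return f"#include <stdio.h>\nint main(void) {{\n{body}\n    return 0;\n}}\n"


    def compile_and_run(compiler, src, workdir):
        """Compile src at -O2 with the given compiler and return the output."""
        exe = workdir / f"a_{compiler}"
        subprocess.run([compiler, "-O2", "-o", str(exe), str(src)],
                       check=True, capture_output=True)
        return subprocess.run([str(exe)], capture_output=True, timeout=10).stdout


    def main(iterations=100):                    # the iteration budget bounds the cost
        with tempfile.TemporaryDirectory() as tmp:
            workdir = pathlib.Path(tmp)
            for i in range(iterations):
                src = workdir / "test.c"
                src.write_text(random_program())
                out_gcc = compile_and_run("gcc", src, workdir)
                out_clang = compile_and_run("clang", src, workdir)
                # The *oracle*: two independent compilers must agree.
                if out_gcc != out_clang:
                    print(f"Mismatch on iteration {i}:\n{src.read_text()}")


    if __name__ == "__main__":
        main()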

RQ4: Is analysis of program analysers feasible more generally? Compilers are one example of a programming-language processing tool. More generally, program analysis tools (including refactoring engines, static property checkers, and dynamic sanitisers) also need to be tested. We will discuss opportunities for generalising compiler-testing methods to this broader context.

RQ5: Can the gap between compiler testing and compiler verification be bridged? The compiler testing community focuses on automated testing of industrial-strength optimising compilers, e.g., GCC and LLVM. In parallel, there has been tremendous progress on building verified compilers from scratch, e.g., CompCert. Researchers in these two communities have tended to operate separately, since the techniques required are rather different. Nevertheless, given the overarching aim of compiler reliability that drives both efforts, we will explore new research directions for driving unreliable compilers towards a correct, and potentially verifiable, state through automated testing.

Copyright Junjie Chen, Alastair F. Donaldson, Andreas Zeller, and Hongyu Zhang

Summary

This report documents Dagstuhl Seminar 17502 "Testing and Verification of Compilers".

Compilers underpin all software development, but bugs in them can be particularly hard to notice if they result in "silent failure", where a program appears to work but is subtly miscompiled. Thus a compiled program may behave erroneously even when its source code appears entirely correct.
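
One common way to surface such silent miscompilations, complementary to cross-compiler differential testing, is to compile the same (correct) program with a single compiler at different optimization levels and compare the runs; any divergence then points at the optimizer rather than at the program. Below is a minimal sketch, assuming gcc is on the PATH and using an arbitrary illustrative program, so with a correct compiler the two outputs agree.

    """Minimal sketch: look for a silent miscompilation by comparing a single
    compiler against itself at different optimization levels.  Assumes gcc is
    on the PATH; the embedded program is an illustration only, so with a
    correct compiler the two runs agree."""

    import pathlib
    import subprocess
    import tempfile

    C_SOURCE = r"""
    #include <stdio.h>

    unsigned sum_of_squares(unsigned n) {
        unsigned total = 0;
        for (unsigned i = 1; i <= n; i++)
            total += i * i;
        return total;
    }

    int main(void) {
        printf("%u\n", sum_of_squares(1000));
        return 0;
    }
    """


    def run_at(opt_level, src, workdir):
        """Compile at the given optimization level and return the output."""
        exe = workdir / f"a_{opt_level.strip('-')}"
        subprocess.run(["gcc", opt_level, "-o", str(exe), str(src)],
                       check=True, capture_output=True)
        return subprocess.run([str(exe)], capture_output=True, timeout=10).stdout


    with tempfile.TemporaryDirectory() as tmp:
        workdir = pathlib.Path(tmp)
        src = workdir / "test.c"
        src.write_text(C_SOURCE)
        unoptimized = run_at("-O0", src, workdir)
        optimized = run_at("-O2", src, workdir)
        # The source is correct, so any divergence here would point at the
        # compiler's optimizer rather than at the program being compiled.
        print("outputs agree" if unoptimized == optimized else "MISCOMPILATION?")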

Despite the common wisdom that "it is never the compiler's fault", bugs in compilers are in fact relatively common, and finding them is a challenging and active area of research.

This seminar brought together researchers in that area with a broader group of researchers and practitioners in software testing and verification, and in compiler development itself, to share their experiences and discuss the open questions and challenges that the field presents. The goal was to brainstorm new ideas for how to approach these challenges, and to help foster longer-term collaborations between the participants.

The seminar involved a number of talks from participants about their particular areas of work and research, followed by working groups where various specific challenges were discussed. It then concluded with an open panel session on the challenges and concepts of compiler testing and verification.

This report presents the collection of abstracts associated with the participant presentations, followed by notes summarising each discussion session and the concluding panel, which we provide as a resource for researchers who are interested in understanding the state of the art and open problems in this field.

Copyright David R. MacIver

Participants
  • Edward E. Aftandilian (Google Research - Mountain View, US) [dblp]
  • Marcel Beemster (Solid Sands - Amsterdam, NL) [dblp]
  • Junjie Chen (Peking University, CN) [dblp]
  • Nathan Chong (ARM Ltd. - Cambridge, GB) [dblp]
  • Eric Eide (University of Utah - Salt Lake City, US) [dblp]
  • Hugues Evrard (Imperial College London, GB) [dblp]
  • Dan Hao (Peking University, CN) [dblp]
  • John Hughes (Chalmers University of Technology - Göteborg, SE) [dblp]
  • Dan Iorga (Imperial College London, GB) [dblp]
  • Julia Lawall (INRIA - Paris, FR) [dblp]
  • Daniel Lehmann (TU Darmstadt, DE) [dblp]
  • Thibaut Lutz (NVIDIA - Redmond, US) [dblp]
  • David MacIver (Imperial College London, GB)
  • Jessica Paquette (Apple Computer Inc. - Cupertino, US)
  • David J. Pearce (Victoria University - Wellington, NZ) [dblp]
  • Michael Pradel (TU Darmstadt, DE) [dblp]
  • John Regehr (University of Utah - Salt Lake City, US) [dblp]
  • Raimondas Sasnauskas (SES Engineering - Luxembourg, LU) [dblp]
  • Marija Selakovic (TU Darmstadt, DE) [dblp]
  • Gerson Sunyé (University of Nantes, FR) [dblp]
  • Nikolai Tillmann (Facebook - Seattle, US) [dblp]
  • Yingfei Xiong (Peking University, CN) [dblp]
  • Francesco Zappa Nardelli (INRIA - Paris, FR) [dblp]
  • Andreas Zeller (Universität des Saarlandes, DE) [dblp]
  • Hongyu Zhang (University of Newcastle, AU) [dblp]

Classification
  • programming languages / compiler
  • software engineering
  • verification / logic

Keywords
  • Compiler Testing
  • Compiler Verification
  • Program Analysis
  • Program Optimization
  • Code Generation