https://www.dagstuhl.de/17352
August 27–30, 2017, Dagstuhl Seminar 17352
Analysis and Synthesis of Floating-point Programs
Organizers
Eva Darulova (MPI-SWS – Saarbrücken, DE)
Alastair F. Donaldson (Imperial College London, GB)
Zvonimir Rakamaric (University of Utah – Salt Lake City, US)
Cindy Rubio-Gonzalez (University of California – Davis, US)
Documents
Dagstuhl Report, Volume 7, Issue 8
List of Participants
Dagstuhl's Impact: Documents available
Dagstuhl Seminar Schedule [pdf]
Press Room
- Floating-Point Workshop
Blog post by George Constantinides, posted in his Thinking blog September 1st, 2017.
Motivation
Nowadays, floating-point computation is supported on a wide range of computing platforms, ranging from smartphones and embedded devices all the way to supercomputers. However, writing correct, yet high-performance and energy-efficient floating-point software is a challenging and complex task that often requires high levels of expertise. This is in large part due to the non-intuitive nature of floating-point arithmetic: roundoff error means that many algebraic laws of the real numbers do not hold in the floating-point setting. There is a dire need to empower developers with automated techniques and tools for analysis and synthesis of floating-point programs.
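As a small illustration of the point about algebraic laws (an illustrative sketch added here, not part of the seminar text), the following C snippet shows that addition is not associative under IEEE 754 double-precision rounding: whether 1.0 is added before or after a large cancellation changes the result.

    #include <stdio.h>

    int main(void) {
        /* Associativity fails under rounding:
           (a + b) + c and a + (b + c) give different results. */
        double a = 1e20, b = -1e20, c = 1.0;
        printf("(a + b) + c = %g\n", (a + b) + c);   /* prints 1 */
        printf("a + (b + c) = %g\n", a + (b + c));   /* prints 0: c is absorbed by b */
        return 0;
    }
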
Research related to floating-point computation spans a multitude of areas of computer science, ranging from hardware design and architecture all the way to high-performance computing, machine learning, and software analysis and verification. The focus of this Dagstuhl Seminar is to connect researchers from these areas, with the goals of fostering collaboration and brainstorming new theoretical advances, practical techniques, and tools to improve the reliability and performance of floating-point computation.
There has been considerable recent interest in floating-point optimization and reasoning, including expression rewriting and instruction generation to improve precision; abstract interpretation, automated theorem proving, SMT solving, and global optimization for rigorous reasoning; and integration of lower-precision computations into floating-point programs to conserve energy while achieving acceptably stable results (important in high-performance computing and machine learning). A small example of expression rewriting is sketched below.
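To make the benefit of expression rewriting concrete (this example is ours, not taken from the seminar materials), the two algebraically equivalent expressions x*x - y*y and (x + y)*(x - y) behave very differently in single precision when x and y are close: the first form suffers from catastrophic cancellation, while the rewritten form stays close to a double-precision reference.

    #include <stdio.h>

    int main(void) {
        /* x and y are close, so x*x and y*y agree in their leading digits
           and the subtraction cancels most of the information. */
        float x = 10000.1f, y = 10000.0f;
        float naive     = x * x - y * y;       /* loses several significant digits */
        float rewritten = (x + y) * (x - y);   /* algebraically equal, numerically stable */
        double reference = (double)x * x - (double)y * y;
        printf("naive:     %.6f\n", naive);
        printf("rewritten: %.6f\n", rewritten);
        printf("reference: %.6f\n", reference);
        return 0;
    }
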
While the above approaches and techniques have certainly advanced the state of the art, they still suffer from various drawbacks, such as limited automation, precision, or scalability, which prevent them from being transferred into practical solutions. Much more work is needed, including breakthroughs in the form of completely novel techniques as well as novel combinations of existing methods.
The goal of this Dagstuhl Seminar is for the attendees to present their current research in the area of analysis, synthesis, and implementation of floating-point computations to their colleagues, many of whom will come from other areas of computer science. This will foster subsequent discussions and brainstorming sessions, with the ultimate goal that ideas flow from one area into another, that new collaborations are established, and that groundwork is laid for novel research directions.
Motivation text license: Creative Commons BY 3.0 DE
Eva Darulova, Alastair F. Donaldson, Zvonimir Rakamaric, and Cindy Rubio-Gonzalez
Classification
- Programming Languages / Compiler
- Semantics / Formal Methods
- Verification / Logic
Keywords
- Floating-point arithmetic
- Program optimization
- Rigorous compilation
- Precision allocation
- Energy-efficient computing