Dagstuhl Seminar 16111

Rethinking Experimental Methods in Computing

(Mar 13 – Mar 18, 2016)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/16111

Organizers
  • Daniel Delling
  • Camil Demetrescu
  • David S. Johnson
  • Jan Vitek

Motivation

The pervasive use of computer programs in modern society raises fundamental questions about how software should be evaluated. Many communities in computer science and engineering rely on extensive experimental investigations to validate and gain insights into properties of algorithms, programs, or entire software suites spanning several layers of complex code. However, as a discipline still in its infancy, computer science lags behind long-established fields such as the natural sciences, which have relied on the scientific method for centuries.

Conducting rigorous experimental studies involves several threats and pitfalls that are specific to computing disciplines. For example, experiments are often hard to repeat because the code has not been released, because it relies on stacks of proprietary or legacy software, or because the computer architecture on which the original experiments were conducted is outdated. Moreover, the influence of side effects stemming from hardware architectural features is often much greater than the experimenters anticipate. The rise of multi-core architectures and large-scale computing infrastructures, together with the ever-growing adoption of concurrent and parallel programming models, has made reproducibility issues even more critical. Another major problem is that many experimental studies are poorly performed, making it difficult to draw informative conclusions, misdirecting research, and curtailing creativity.
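
A common mitigation for such measurement noise is to repeat timed runs under controlled inputs and to report dispersion rather than a single figure. The following Python sketch illustrates this idea only in outline; the routine under test (the built-in sorted), the input size, the warm-up and repetition counts, and the seed are arbitrary assumptions, not part of the seminar material.

    # Illustrative sketch: time a routine over repeated runs with warm-ups,
    # a fixed (seeded) input, and report mean, standard deviation, and range.
    import random
    import statistics
    import time

    def measure(fn, data, warmups=3, repetitions=30):
        # Warm-up runs reduce one-time effects such as cold caches or lazy loading.
        for _ in range(warmups):
            fn(list(data))
        samples = []
        for _ in range(repetitions):
            payload = list(data)  # fresh copy so every run performs the same work
            start = time.perf_counter()
            fn(payload)
            samples.append(time.perf_counter() - start)
        return samples

    if __name__ == "__main__":
        random.seed(16111)  # fixed seed so the input is repeatable across runs
        data = [random.random() for _ in range(100_000)]
        samples = measure(sorted, data)
        print(f"mean  = {statistics.mean(samples):.6f} s")
        print(f"stdev = {statistics.stdev(samples):.6f} s")
        print(f"range = {min(samples):.6f} .. {max(samples):.6f} s")

Reporting the raw samples, or at least their spread, makes it easier for others to judge whether an observed difference exceeds the measurement noise.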

Surprisingly, despite all these common issues, there has been little or no cooperation on experimental methodologies between different computer science communities, which know very little of each other's efforts. The goal of this seminar is to build stronger links and collaborations between computer science sub-communities around the pivotal concept of experimental analysis of software. The seminar should also allow these communities to exchange their different views on experiments. Its main target communities are algorithm engineering, programming languages, and software engineering, but we will also invite people from other communities to share their experiences. Our overall goal is to arrive at a common foundation for how to evaluate software in general and how to reproduce results. Since computer science lags behind the natural sciences when it comes to experiments, the ultimate goal of the seminar is to take a step towards closing this gap.


Summary

This seminar is dedicated to the memory of our co-organiser and friend David Stifler Johnson, who played a major role in fostering a culture of experimental evaluation in computing and believed in the mission of this seminar. He will be deeply missed.

Despite the common issues outlined in the motivation above, there has surprisingly been little or no cooperation on experimental methodologies between different computer science communities, which know very little of each other's efforts. The goal of this seminar was to build stronger links and collaborations between computer science sub-communities around the pivotal concept of experimental analysis of software. The seminar also allowed these communities to exchange their different views on experiments. Its main target communities were algorithm engineering, programming languages, operations research, and software engineering, but people from other communities were also invited to share their experiences. Our overall goal was to arrive at a common foundation for how to evaluate software in general and how to reproduce results. Since computer science lags behind the natural sciences when it comes to experiments, the ultimate goal of the seminar was to take a step towards closing this gap. The seminar format alternated between talks intended for a broad audience, discussion panels, and working sessions in groups.

The organisers would like to thank the Dagstuhl team and all the participants for making the seminar a success. A warm acknowledgement goes to Amer Diwan, Sebastian Fischmeister, Catherine McGeoch, Matthias Hauswirth, Peter Sweeney, and Dorothea Wagner for their constant support and enthusiasm.

Copyright Daniel Delling, Camil Demetrescu, David S. Johnson, and Jan Vitek

Participants
  • Umut A. Acar (Carnegie Mellon University - Pittsburgh, US) [dblp]
  • José Nelson Amaral (University of Alberta - Edmonton, CA) [dblp]
  • David A. Bader (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Judith Bishop (Microsoft Research - Redmond, US) [dblp]
  • Ronald F. Boisvert (NIST - Gaithersburg, US) [dblp]
  • Marco Chiarandini (University of Southern Denmark - Odense, DK) [dblp]
  • Markus Chimani (Universität Osnabrück, DE) [dblp]
  • Emilio Coppa (Sapienza University of Rome, IT) [dblp]
  • Daniel Delling (Apple Inc. - Cupertino, US) [dblp]
  • Camil Demetrescu (Sapienza University of Rome, IT) [dblp]
  • Amer Diwan (Google - San Francisco, US) [dblp]
  • Dmitry Duplyakin (University of Utah - Salt Lake City, US) [dblp]
  • Eric Eide (University of Utah - Salt Lake City, US) [dblp]
  • Erik Ernst (Google - Aarhus, DK) [dblp]
  • Sebastian Fischmeister (University of Waterloo, CA) [dblp]
  • Norbert Fuhr (Universität Duisburg-Essen, DE) [dblp]
  • Paolo G. Giarrusso (Universität Tübingen, DE) [dblp]
  • Andrew V. Goldberg (Amazon.com, Inc. - Palo Alto, US) [dblp]
  • Matthias Hagen (Bauhaus-Universität Weimar, DE) [dblp]
  • Matthias Hauswirth (University of Lugano, CH) [dblp]
  • Benjamin Hiller (Konrad-Zuse-Zentrum - Berlin, DE) [dblp]
  • Richard Jones (University of Kent - Canterbury, GB) [dblp]
  • Tomas Kalibera (Northeastern University - Boston, US) [dblp]
  • Marco Lübbecke (RWTH Aachen, DE) [dblp]
  • Catherine C. McGeoch (Amherst College, US) [dblp]
  • Kurt Mehlhorn (MPI für Informatik - Saarbrücken, DE) [dblp]
  • J. Eliot B. Moss (University of Massachusetts - Amherst, US) [dblp]
  • Ian Munro (University of Waterloo, CA) [dblp]
  • Petra Mutzel (TU Dortmund, DE) [dblp]
  • Luís Paquete (University of Coimbra, PT) [dblp]
  • Mauricio Resende (Amazon.com, Inc. - Seattle, US) [dblp]
  • Peter Sanders (KIT - Karlsruher Institut für Technologie, DE) [dblp]
  • Nodari Sitchinava (University of Hawaii at Manoa - Honolulu, US) [dblp]
  • Peter F. Sweeney (IBM TJ Watson Research Center - Yorktown Heights, US) [dblp]
  • Walter F. Tichy (KIT - Karlsruher Institut für Technologie, DE) [dblp]
  • Petr Tuma (Charles University - Prague, CZ) [dblp]
  • Dorothea Wagner (KIT - Karlsruher Institut für Technologie, DE) [dblp]
  • Roger Wattenhofer (ETH Zürich, CH) [dblp]

Classification
  • data structures / algorithms / complexity
  • programming languages / compiler
  • software engineering

Keywords
  • algorithms
  • software
  • experiments
  • repeatability
  • reproducibility
  • benchmarks
  • data sets