Dagstuhl Perspectives Workshop 15452

Artifact Evaluation for Publications

(Nov 01 – Nov 04, 2015)


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/15452

Organizers
  • Bruce R. Childers (University of Pittsburgh, US)
  • Grigori Fursin (cTuning - Cachan, FR)
  • Shriram Krishnamurthi (Brown University - Providence, US)
  • Andreas Zeller (Universität des Saarlandes, DE)

Motivation

The computer systems research (CSR) community has developed numerous artifacts that encompass a rich and diverse collection of compilers, simulators, analyzers, benchmarks, data sets, and other software and data. These artifacts are used to implement research innovations, evaluate trade-offs, and analyze implications. Unfortunately, the evaluation methods used for computing systems innovation can be at odds with sound science and engineering practice. In particular, ever-increasing competitiveness and the pressure to publish results quickly impede accountability, which is key to the scientific and engineering process. Experimental results are typically not distributed with enough information for repeatability or reproducibility, which makes it hard to compare against an innovation or to build on it. Efforts in programming languages/compilers, software engineering, computer architecture, and high-performance computing are underway to address this challenge.

This Dagstuhl Perspectives Workshop brings together leaders of these efforts and senior stakeholders of CSR sub-communities to determine synergies and to identify the most promising directions and mechanisms for moving the broader community toward accountability. The workshop assesses current efforts, shares what does and does not work, identifies additional processes, incentives, and mechanisms, and determines how to coordinate and sustain the efforts. The workshop's outcome is a roadmap of actionable strategies and steps for improving accountability, leveraging the investments of multiple groups, educating the community about accountability, and sharing artifacts and experiments.


Summary

Computer systems researchers have developed numerous artifacts that encompass a broad collection of software tools, benchmarks, and data sets. These artifacts are used to prototype innovations, evaluate trade-offs, and analyze implications. Unfortunately, the methods used to evaluate computing system innovations are often at odds with sound science and engineering practice. The ever-increasing pressure to publish more and more results impedes accountability, which is a key component of the scientific and engineering process. Experimental results are not usually disseminated with sufficient metadata (e.g., software extensions, data sets, benchmarks, test cases, scripts, and parameters) to achieve repeatability or reproducibility. Without this information, trust, fairness, and the ability to build on and compare with previous ideas all become problematic. Efforts in various computer systems research sub-communities, including programming languages/compilers, computer architecture, and high-performance computing, are underway to address this challenge.
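
To make "sufficient metadata" concrete, below is a minimal sketch, in Python, of how the metadata for a single experiment could be captured and shipped alongside published results. Everything in it is an illustrative assumption rather than a format prescribed by the workshop: the collect_metadata helper, the field names, and the example command and parameters are all hypothetical.

  import json
  import platform
  import subprocess
  import sys
  from datetime import datetime, timezone

  def git_commit():
      """Best-effort lookup of the code version; 'unknown' outside a git repo."""
      try:
          out = subprocess.run(["git", "rev-parse", "HEAD"],
                               capture_output=True, text=True, check=True)
          return out.stdout.strip()
      except (OSError, subprocess.CalledProcessError):
          return "unknown"

  def collect_metadata(command, parameters, data_sets, benchmarks):
      """Gather the information another group would need to rerun the experiment."""
      return {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "command": command,               # exact invocation used
          "parameters": parameters,         # tunable knobs, including random seeds
          "data_sets": data_sets,           # input data, ideally with checksums
          "benchmarks": benchmarks,         # workloads exercised
          "code_version": git_commit(),
          "python_version": sys.version,
          "platform": platform.platform(),  # OS and hardware summary
      }

  if __name__ == "__main__":
      # Hypothetical experiment; every value here is an illustrative placeholder.
      record = collect_metadata(
          command="./run_benchmarks --opt-level 3",
          parameters={"opt_level": 3, "seed": 42},
          data_sets=["inputs/train.csv"],
          benchmarks=["matrix-multiply", "fft"],
      )
      with open("experiment-metadata.json", "w") as fh:
          json.dump(record, fh, indent=2)  # publish this file with the results

Writing such a record next to the raw results at run time is one inexpensive way to make repeatability the default rather than a reconstruction effort after the fact.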

This Dagstuhl Perspectives Workshop (PW) brought together stakeholders of associated CSR sub-communities to determine synergies and to identify the most promising directions and mechanisms to push the broader community toward accountability. The PW assessed current efforts, shared what does and doesn't work, identified additional processes, and determined possible incentives and mechanisms. The outcomes from the workshop, including recommendations to catalyze the community, are separately documented in an associated Dagstuhl Manifesto.

Copyright Bruce R. Childers, Grigori Fursin, Shriram Krishnamurthi, and Andreas Zeller

Participants
  • Bruce R. Childers (University of Pittsburgh, US) [dblp]
  • Neil Chue Hong (Software Sustainability Institute - Edinburgh, GB) [dblp]
  • Tom Crick (Cardiff Metropolitan University, GB) [dblp]
  • Jack W. Davidson (University of Virginia - Charlottesville, US) [dblp]
  • Camil Demetrescu (Sapienza University of Rome, IT) [dblp]
  • Roberto Di Cosmo (University Paris-Diderot, FR) [dblp]
  • Jens Dittrich (Universität des Saarlandes, DE) [dblp]
  • Dror Feitelson (The Hebrew University of Jerusalem, IL) [dblp]
  • Sebastian Fischmeister (University of Waterloo, CA) [dblp]
  • Grigori Fursin (cTuning - Cachan, FR) [dblp]
  • Ashish Gehani (SRI - Menlo Park, US) [dblp]
  • Matthias Hauswirth (University of Lugano, CH) [dblp]
  • Marc Herbstritt (Schloss Dagstuhl, DE) [dblp]
  • David R. Kaeli (Northeastern University - Boston, US) [dblp]
  • Shriram Krishnamurthi (Brown University - Providence, US) [dblp]
  • Anton Lokhmotov (Dividiti Ltd. - Cambridge, GB) [dblp]
  • Martin Potthast (Bauhaus-Universität Weimar, DE) [dblp]
  • Lutz Prechelt (FU Berlin, DE) [dblp]
  • Petr Tuma (Charles University - Prague, CZ) [dblp]
  • Michael Wagner (Schloss Dagstuhl, DE) [dblp]
  • Andreas Zeller (Universität des Saarlandes, DE) [dblp]

Classification
  • hardware
  • optimization / scheduling
  • software engineering

Keywords
  • Empirical Evaluation of Software Tools
  • Documentation of Research Processes
  • Artifact Evaluation
  • Experimental Reproducibility