01. – 04. November 2015, Dagstuhl Perspectives Workshop 15452
Artifact Evaluation for Publications
Computer systems researchers have developed numerous artifacts encompassing a broad collection of software tools, benchmarks, and data sets. These artifacts are used to prototype innovations, evaluate trade-offs, and analyze implications. Unfortunately, the methods used to evaluate computing systems innovations are often at odds with sound science and engineering practice. Ever-increasing pressure to publish more results poses an impediment to accountability, a key component of the scientific and engineering process. Experimental results are rarely disseminated with sufficient metadata (e.g., software extensions, data sets, benchmarks, test cases, scripts, and parameters) to achieve repeatability and/or reproducibility. Without this information, issues of trust, fairness, and building on and comparing with previous ideas become problematic. Efforts to address this challenge are underway in several computer systems research sub-communities, including programming languages/compilers, computer architecture, and high-performance computing.
This Dagstuhl Perspectives Workshop (PW) brought together stakeholders from the associated computer systems research sub-communities to determine synergies and to identify the most promising directions and mechanisms for pushing the broader community toward accountability. The PW assessed current efforts, shared what does and does not work, identified additional processes, and determined possible incentives and mechanisms. The outcomes of the workshop, including recommendations to catalyze the community, are documented separately in an associated Dagstuhl Manifesto.
Creative Commons BY 3.0 Unported license
Bruce R. Childers, Grigori Fursin, Shriram Krishnamurthi, and Andreas Zeller
- Optimization / Scheduling
- Software Engineering
- Empirical Evaluation of Software Tools
- Documentation of Research Processes
- Artifact Evaluation
- Experimental Reproducibility