Aims & Scope


24.01.16 - 29.01.16, Seminar 16041

Reproducibility of Data-Oriented Experiments in e-Science

The following text appeared on our web pages prior to the seminar, and was included as part of the invitation.

Motivation

In many subfields of computer science (CS), experiments play an important role. Beyond the theoretical properties of algorithms or methods, their effectiveness and performance can often only be validated through experimentation. In most of these cases, the experimental results depend on the input data, the settings of input parameters, and potentially on characteristics of the computational environment where the experiments were designed and run. Unfortunately, most computational experiments are specified only informally in papers, where experimental results are briefly described in figure captions; the code that produced the results is seldom available.

This has serious implications. Scientific discoveries do not happen in isolation. Important advances are often the result of sequences of smaller, less significant steps. In the absence of results that are fully documented, reproducible, and generalizable, it becomes hard to re-use and extend these results. Besides hindering the ability of others to leverage our work, and consequently limiting the impact of our field, the absence of reproducible experiments also puts our reputation at stake, since the reliability and validity of empirical results are basic scientific principles.

Reproducible results are not just beneficial to others; in fact, they bring many direct benefits to the researchers themselves. Making an experiment reproducible forces the researcher to document execution pathways. This in turn enables the pathways to be analyzed (and audited), and it also helps newcomers (e.g., new students and post-docs) to get acquainted with the problem and the tools used. Reproducibility also forces portability, which simplifies the dissemination of the results. Last but not least, there is preliminary evidence that reproducibility increases impact, visibility, and research quality.

However, attaining reproducibility for computational experiments is challenging. It is hard both for authors to assemble a compendium that encapsulates all the components (e.g., data, code, parameter settings, environment) needed to reproduce a result, and for reviewers to verify the results. There are also other barriers, ranging from practical issues, such as the use of proprietary data, software, and specialized hardware, to social ones, such as the lack of incentive for authors to spend the extra time making their experiments reproducible.

In this seminar, we will bring together experts from various subfields of CS to create a joint understanding of the problems of reproducibility of experiments, to discuss existing solutions and impediments, and to propose ways to overcome current limitations. The topics we intend to cover include, but are not limited to: reproducibility requirements, infrastructure, new CS research directions required to support reproducibility, incentives, and education. Each participant is expected to present the state of the art, requirements, and issues related to reproducibility in their subfield. Workgroups will be formed to discuss cross-cutting issues.

The expected outcome of the seminar will be a manifesto proposing guidelines, procedures, and further activities for improving reproducibility and broadening its adoption in CS.

More specifically, we will address the following key issues:

  • Understanding and requirements
  • Technical aspects of repeatability
  • Benchmarks and worksets
  • IPR, public availability of research, non-consumptive research
  • Infrastructures
  • New challenges and research directions
  • Specialization and integration across disciplines
  • Awareness, education and communication
  • Incentives for repeatability

Aims & Scope : Last Update 25.09.2017, 00:59