Dagstuhl Seminar 24462
Research Infrastructures and Tools for Collaborative Networked Systems Research
(Nov 10 – Nov 13, 2024)
Organizers
- Georg Carle (TU München - Garching, DE)
- Serge Fdida (Sorbonne University - Paris, FR)
- Kate Keahey (Argonne National Laboratory, US)
- Henning Schulzrinne (Columbia University - New York, US)
Contact
- Andreas Dolzmann (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
Research infrastructures should evolve into advanced scientific instruments that offer vital insight into the phenomena under study and improve the understanding of scientific methodologies and practices, as they reliably and precisely help scientists measure the subjects of their investigations. The Dagstuhl Seminar participants strongly agreed that large-scale research infrastructures are essential for providing scientists with access to specialized, advanced resources enabling cutting-edge experiments. As a result, the following key conclusions were drawn:
- Strategic Investment & Community Engagement: Research infrastructures represent a vital and long-term investment that demands active participation from research communities, sustained human capital development, and financial sustainability.
- Open Access & Data Sharing: While open access to shared physical infrastructure is essential, access to open research data is equally critical. Digital sharing of scientific results accelerates innovation, enhances reproducibility, and strengthens FAIR (Findable, Accessible, Interoperable, and Reusable) data sharing through metaservices.
- Amplified Impact & Network Effects: Research infrastructures inherently complement and amplify each other, creating a synergistic network effect. This interconnectedness fosters a more rigorous scientific approach and methodology, driving greater collaboration and knowledge advancement.
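To make the FAIR data-sharing conclusion concrete, the following is a minimal, hypothetical sketch of what a FAIR-style metadata record for a published dataset might look like. The field names and the `is_fair_complete` check are illustrative assumptions, not taken from any specific metadata standard.

```python
# Hypothetical sketch of a minimal FAIR-style metadata record for a dataset
# published alongside an experiment. Field names are illustrative only.
dataset_metadata = {
    "identifier": "doi:10.5555/example",             # Findable: persistent, globally unique ID
    "title": "Packet latency traces, testbed run 42",
    "access_url": "https://example.org/data/run42",  # Accessible: retrievable via a standard protocol
    "format": "text/csv",                            # Interoperable: open, well-known format
    "license": "CC-BY-4.0",                          # Reusable: clear usage terms
    "provenance": {"testbed": "example-testbed", "date": "2024-11-12"},
}

def is_fair_complete(record: dict) -> bool:
    """Check that the record carries a field for each FAIR principle."""
    required = {"identifier", "access_url", "format", "license", "provenance"}
    return required <= record.keys()
```

A metaservice aggregating such records across infrastructures could run checks like `is_fair_complete` automatically, which is the kind of automated support the EasyFAIR recommendation below envisions.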
The results of the seminar include the following key recommendations:
Key Recommendations
- Define clear scientific objectives: Research infrastructures must explicitly articulate their scientific goals and establish a well-defined set of research questions to address.
- Foster a strong scientific community: The success of research infrastructures depends on strong community engagement. Support measures are essential to strengthen and sustain the scientific community. The effort of their support teams should be better recognized.
- Implement EasyFAIR principles: Adopting an EasyFAIR framework – offering comprehensive and automated support for researchers – is crucial to ensuring and leveraging the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) of digital assets. Additionally, open research data and reproducibility should be mandated by funding agencies and scientific societies. Scientists making an effort to share their research data should be rewarded.
- Enhance reproducibility: Reproducibility is a critical priority. Concrete methodologies must be established to ensure comparability of experimental results across different research infrastructures.
- Multi-year investment strategy: Research infrastructures should be properly articulated, designed and supported according to a longer-term roadmap and a sustained investment strategy.
- Establish common abstractions: Standardized models should be widely adopted for describing experiments and associated frameworks, including information models, data models, and ontologies.
- Improve findability and accessibility: The discovery and accessibility of research infrastructures and testbeds should be enhanced through comprehensive catalogs detailing available hardware and functionalities. It is also essential to assess how new and planned infrastructures contribute to scientific diversity.
- Define standardized evaluation criteria: A clear and well-defined set of evaluation criteria is necessary to assess the relevance and impact of research infrastructures. The outcomes of the “Testbed Evaluation” World Café (cf. Section 7 of the full report) provide a valuable basis for these criteria. Standardized assessment frameworks should be established for different categories of testbeds.
- Optimize user experience: Usability for researchers must be a priority. An innovative metric, Time to First Experiment (TTFE), can be used to measure the efficiency of infrastructures in enabling rapid experimentation and adoption. Education and training should be a strong component of research infrastructures.
- Ensure interoperability and openness: Strong support for interoperability between testbeds, together with the use of open components wherever possible, is crucial, as it allows experiments to be ported easily across different infrastructures.
- Promote flexibility and adaptability: Testbeds should provision ready-to-use experimental platforms. Instead of providing merely the fundamental resources of an experiment, the testbed should provide experimental templates that researchers can use to perform their own research. Such a template, or “blueprint”, can be used by researchers to answer specific research questions. Blueprints are not static but malleable, i.e., researchers are encouraged to adapt and extend them to fit their needs. The concept of malleability includes facilitating the modification of software artifacts and supporting composability.
- Support sustainable development goals (SDGs): Large-scale research infrastructures contribute directly to the SDGs by optimizing the efficiency of hardware resource usage and improving workflows from experiment design to result dissemination. Additionally, insights gained from research findings can enhance global technical infrastructure, further supporting SDG objectives.
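The blueprint concept from the flexibility recommendation above can be sketched in code. The following is a hypothetical illustration, not an API of any existing testbed: the class name, fields, and the `adapt`/`compose` operations are assumptions chosen to show malleability (deriving a modified blueprint) and composability (combining two blueprints).

```python
from dataclasses import dataclass, replace
from typing import Dict, List

# Hypothetical "blueprint": a declarative description of an experiment that a
# testbed could provision ready-to-use and researchers can adapt or combine.

@dataclass(frozen=True)
class Blueprint:
    name: str
    resources: Dict[str, int]   # e.g. {"hosts": 2, "switches": 1}
    software: List[str]         # artifacts to deploy on the testbed
    workflow: List[str]         # ordered experiment steps

    def adapt(self, **changes) -> "Blueprint":
        """Malleability: derive a modified blueprint, leaving the original intact."""
        return replace(self, **changes)

    def compose(self, other: "Blueprint") -> "Blueprint":
        """Composability: combine two blueprints into a larger experiment."""
        merged = {k: max(self.resources.get(k, 0), other.resources.get(k, 0))
                  for k in {*self.resources, *other.resources}}
        return Blueprint(
            name=f"{self.name}+{other.name}",
            resources=merged,
            software=self.software + [s for s in other.software if s not in self.software],
            workflow=self.workflow + other.workflow,
        )

# A template provided by the testbed for a simple latency measurement ...
latency = Blueprint(
    name="latency-baseline",
    resources={"hosts": 2},
    software=["moongen"],
    workflow=["deploy", "generate-traffic", "collect-results"],
)

# ... which a researcher adapts to a larger topology (malleability) ...
scaled = latency.adapt(name="latency-scaled", resources={"hosts": 8, "switches": 2})

# ... or composes with a throughput template (composability).
throughput = Blueprint("throughput", {"hosts": 2}, ["iperf3"],
                       ["deploy", "run-iperf", "collect-results"])
combined = latency.compose(throughput)
```

Keeping blueprints immutable and deriving variants via `adapt` means the original template stays available as a shared, reproducible baseline while researchers build their own variations on top of it.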

Experimental research on networked systems requires a suitable research infrastructure to perform experiments. For many years, the dominating approach of research groups was to create a specific experimental setup tailored to the needs of specific experiments, e.g., in the context of a PhD thesis. The community is well aware of the obvious shortcomings of this approach. Setting up the needed infrastructure demands considerable time and effort, and because the setups are created independently, the work is duplicated unnecessarily. The resulting heterogeneity, with details frequently left undocumented in publications, makes it difficult to reproduce the experiments of other scientists.
The networked systems community has built large testbed research infrastructures, which help overcome such challenges and provide valuable resources for experimental research on networked systems. However, there appears to be a gap between the services that the community operating research infrastructures provides and what scientists performing cutting-edge experimental research actually need. One area of possible improvement is experiment control. So far, scientists who use a large testbed for specific experiments must work out for themselves how to orchestrate their experiments, how to collect, process, and store the data the experiment produces, and how to add metadata that supports other scientists. Consequently, the artifacts of specific experiments are highly heterogeneous. In the typical workflow required when a paper with artifacts (software and data with metadata) is evaluated by an artifact evaluation committee, this heterogeneity and the lack of generally available tools for the experiment workflow create a high effort for those who prepare the artifacts, for those who review them as part of the committee's work, and also for those who want to use them, e.g., when comparing their own artifacts with previously published ones.
This Dagstuhl Seminar aims to increase the value of these research infrastructures by identifying improved ways of collaborating through testbeds and software tools for experiment workflows, so that experiments and their evaluation software become easy to modify, extend, combine, and port to other testbed environments. By identifying improved frameworks for collaboration, with tools in support of experiment workflows, the seminar aims to help the research community increase the quality, reproducibility, and reuse of research results, and to identify innovative toolchains that increase the productivity of scientists and allow for more efficient use of research infrastructure resources.
In the seminar, we want to identify approaches that combine a set of measures suitable to overcome the identified problems, by addressing all phases of experiment design, execution, data and metadata generation, processing, evaluation, and publication.

Participants
- Tom Barbette (UC Louvain, BE) [dblp]
- Terry Benzel (USC - Marina del Rey, US) [dblp]
- Georg Carle (TU München - Garching, DE) [dblp]
- Hakima Chaouchi (IMT - Palaiseau, FR) [dblp]
- Walid Dabbous (INRIA - Sophia Antipolis, FR) [dblp]
- Yuri Demchenko (University of Amsterdam, NL) [dblp]
- Serge Fdida (Sorbonne University - Paris, FR) [dblp]
- Sebastian Gallenmüller (TU München - Garching, DE) [dblp]
- Jorge Gasos (European Commission - Brussels, BE)
- Michael Goedicke (Universität Duisburg - Essen, DE) [dblp]
- Cheikh Ahmadou Bamba Gueye (Université Cheikh Anta Diop de Dakar, SN) [dblp]
- Tobias Hoßfeld (Universität Würzburg, DE) [dblp]
- Kate Keahey (Argonne National Laboratory, US) [dblp]
- Wolfgang Kellerer (TU München, DE) [dblp]
- Raymond Knopp (EURECOM - Biot, FR) [dblp]
- Jim Kurose (University of Massachusetts Amherst, US) [dblp]
- Deep Medhi (NSF - Alexandria, US) [dblp]
- Jelena Mirkovic (USC - Marina del Rey, US) [dblp]
- Andrew W. Moore (University of Cambridge, GB) [dblp]
- Paul Michael Ruth (RENCI - Chapel Hill, US)
- Damien Saucez (INRIA - Sophia Antipolis, FR) [dblp]
- Björn Scheuermann (TU Darmstadt, DE) [dblp]
- Henning Schulzrinne (Columbia University - New York, US) [dblp]
- Jörg Widmer (IMDEA Networks Institute - Madrid, ES) [dblp]
- Walter Willinger (Niksun - Princeton, US) [dblp]
- Adam Wolisz (TU Berlin, DE) [dblp]
- Ellen Zegura (Georgia Tech - Atlanta US & NSF - Alexandria, US) [dblp]
- Martina Zitterbart (KIT - Karlsruher Institut für Technologie, DE) [dblp]
Classification
- Networking and Internet Architecture
Keywords
- Research Infrastructures
- Reproducibility
- FAIR: Findability, Accessibility, Interoperability, and Reuse of digital assets
- Optimizing reuse of data, infrastructure usage and sharing