Software pervades our society and daily life, and we are increasingly dependent on it. This applies to software of all kinds: software in safety-critical systems such as airplanes, in consumer products, in mobile phones, in telecom switches, in pacemakers, in process control systems, in financial systems, in administration systems, etc. Consequently, the quality of software is an issue of increasing importance and growing concern.
Systematic testing is one of the most important and widely used techniques for checking the quality of software. Testing, however, is often a manual and laborious process without effective automation, which makes it error-prone, time-consuming, and very costly. Estimates are that testing consumes between 30 and 50% of total software development costs. Moreover, the majority of testing activities take place in the most critical phase of software development, viz. at the end of the project, just before software delivery.
The tendency is that the effort spent on testing keeps increasing, due to the continuing quest for better software quality and the ever-growing size and complexity of systems. The situation is aggravated by the fact that the complexity of testing tends to grow faster than the complexity of the systems being tested, in the worst case even exponentially. Whereas development and construction methods for software allow ever larger and more complex systems to be built, there is a real danger that testing methods cannot keep pace with construction, so that these new systems can no longer be tested sufficiently fast and thoroughly. This may seriously hamper the development of future generations of software systems.
Model-based testing
One of the new technologies to meet the challenges imposed on software testing is model-based testing. In model-based testing, a model of the desired behaviour of the system under test (SUT) is the starting point for testing. Model-based testing has recently gained attention with the popularization of modelling itself, both in academia and in industry. The main virtue of model-based testing is that it allows test automation that goes well beyond the mere automatic execution of manually crafted test cases: it allows the algorithmic generation of large numbers of test cases, including test oracles, completely automatically from the model of required behaviour. If this model is valid, i.e., expresses precisely what the system under test should do, then all these tests are also provably valid. Moreover, these models can, in principle, also be used to define, e.g., specification coverage metrics and test selection with mathematical rigour, so that quantifiable confidence is obtained that a product faithfully conforms to its specification.
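As a concrete illustration of this generation scheme, the following Python sketch derives test cases, including their oracles, by exhaustively walking a toy Mealy-machine model of a turnstile, and then runs them against a hand-written implementation. The turnstile model, the state and input names, and the `Turnstile` class are all hypothetical examples chosen for illustration, not taken from any particular model-based testing tool.

```python
from itertools import product

# A hypothetical Mealy-machine model of the desired behaviour:
# MODEL[(state, input)] = (next_state, expected_output)
MODEL = {
    ("locked", "coin"): ("unlocked", "ok"),
    ("locked", "push"): ("locked", "blocked"),
    ("unlocked", "coin"): ("unlocked", "ok"),
    ("unlocked", "push"): ("locked", "pass"),
}
INITIAL = "locked"
INPUTS = ["coin", "push"]

def generate_tests(depth):
    """Enumerate every input sequence up to `depth` and derive the
    expected outputs (the test oracle) by walking the model."""
    tests = []
    for n in range(1, depth + 1):
        for seq in product(INPUTS, repeat=n):
            state, expected = INITIAL, []
            for inp in seq:
                state, out = MODEL[(state, inp)]
                expected.append(out)
            tests.append((list(seq), expected))
    return tests

class Turnstile:
    """A hand-written implementation playing the role of the SUT."""
    def __init__(self):
        self.locked = True

    def step(self, inp):
        if inp == "coin":
            self.locked = False
            return "ok"
        if self.locked:          # a push against a locked turnstile
            return "blocked"
        self.locked = True       # a push lets one person through
        return "pass"

def run_tests(depth=3):
    """Execute all generated tests against a fresh SUT instance and
    collect any sequences whose observed outputs differ from the oracle."""
    failures = []
    for seq, expected in generate_tests(depth):
        sut = Turnstile()
        actual = [sut.step(inp) for inp in seq]
        if actual != expected:
            failures.append((seq, expected, actual))
    return failures
```

Even this toy generator shows the appeal stated above: the test cases and their verdicts come entirely from the model, so a richer model (or a deeper bound) yields more tests at no extra manual effort, and an invalid implementation is flagged by a concrete failing input sequence. Real model-based testing tools replace the exhaustive enumeration with coverage-driven or on-the-fly selection strategies.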
From an industrial perspective, model-based testing is a promising technique to improve the quality and effectiveness of testing, and to reduce its cost. The current state of practice is that test automation mainly concentrates on the automatic execution of tests. For this, a multitude of commercial test execution tools is available, but these tools do not address the problem of test generation. Model-based testing aims at automatically generating high-quality test suites from models, thus complementing automatic test execution.
From an academic perspective, model-based testing is a natural extension of formal methods and verification techniques, where many of the formal techniques can be reused. Formal verification and model-based testing serve complementary goals. Formal verification intends to show that a system has some desired properties by proving that a model of that system satisfies these properties. Thus, any verification is only as good as the validity of the model on which it is based. Model-based testing starts with a (verified) model, and then intends to show that the real, physical implementation of the system behaves in compliance with this model. Due to the inherent limitations of testing, such as the limited number of tests that can be performed, testing can never be complete: testing can only show the presence of errors, not their absence.
The interest in model-based testing from both industry and academia provides perspectives for academic-industrial cooperation in this area. This is also reflected in the relatively high industrial participation in the seminar, with researchers from Siemens, DaimlerChrysler, IBM, France Telecom, and Microsoft attending, and even co-organizing.
The aim of the seminar Perspectives of Model-Based Testing was to bring together researchers and practitioners from industry and academia to discuss the state of the art in theory, methods, tools, applications, and industrialization of model-based testing, and to identify the important open issues and challenges.
The presentations at the seminar gave a good insight into what has been achieved in the area of model-based testing and, even more importantly, clear indications of what has to be done before we can expect widespread industrial use of model-based testing.
Compared with the 1998 seminar, the area of model-based testing has certainly matured, with expanding interest also from industry. The feasibility of model-based testing has been demonstrated, more groups are involved, more theories are supported by tools, some of them close to industrial-strength tools, and there are successful applications. Software testing has become a respectable research area.
The prospects for model-based testing to improve the quality and to reduce the cost of software testing are positive, but still more effort is needed, both in developing new theories and in making the existing methods and theories applicable, e.g., by providing better tool support.
The general opinion was that the seminar was successful, and it was felt that, apart from the usual publication and presentation fora (e.g., FATES, ISSTA, MBT, TACAS, CAV, TestCom, ...), another Dagstuhl meeting on model-based testing should be organized in the future.
- Axel Belinfante (University of Twente, NL) [dblp]
- Henrik Bohnenkamp (University of Twente, NL)
- Laura Brandán Briones (University of Twente, NL)
- Ed Brinksma (Embedded Systems Institute - Eindhoven, NL)
- Simon Burton (Daimler AG - Sindelfingen, DE) [dblp]
- Colin Campbell (Microsoft Research - Redmond, US)
- Mirko Conrad (Daimler Research - Berlin, DE) [dblp]
- René de Vries (Radboud University Nijmegen, NL)
- Winfried Dulz (Universität Erlangen-Nürnberg, DE)
- Bernd Finkbeiner (Universität des Saarlandes, DE) [dblp]
- Lars Frantzen (Radboud University Nijmegen, NL)
- Marie-Claude Gaudel (Université Paris Sud, FR)
- Wolfgang Grieskamp (Microsoft Research - Redmond, US) [dblp]
- Yuri Gurevich (Microsoft Research - Redmond, US) [dblp]
- Alan Hartman (IBM - Haifa, IL) [dblp]
- Jiale Huo (McGill University, CA)
- Sarfraz Khurshid (University of Texas - Austin, US) [dblp]
- Pieter Koopman (Radboud University Nijmegen, NL)
- Victor Kuliamin (Academy of Sciences - Moscow, RU)
- Mass Soldal Lund (University of Oslo, NO) [dblp]
- Brian Nielsen (Aalborg University, DK) [dblp]
- Doron A. Peled (University of Warwick - Coventry, GB) [dblp]
- Alexandre Petrenko (CRIM - Montreal, CA)
- Stacy Prowell (University of Tennessee, US) [dblp]
- Yves-Marie Quemener (France Télécom R&D - Lanion, FR)
- John Rushby (SRI - Menlo Park, US) [dblp]
- Vlad Rusu (CAPS entreprise - Rennes, FR)
- Holger Schlingloff (Fraunhofer Institut - Berlin, DE) [dblp]
- Dirk Seifert (TU Berlin, DE)
- Dehla Sokenou (TU Berlin, DE)
- Marielle Stoelinga (University of Twente, NL) [dblp]
- Nikolai Tillmann (Microsoft Research - Redmond, US) [dblp]
- Jan Tretmans (Radboud University Nijmegen, NL) [dblp]
- Aliki Tsiolakis (Universität Bremen, DE)
- Andreas Ulrich (Siemens AG - München, DE)
- Machiel van der Bijl (University of Twente, NL)
- Margus Veanes (Microsoft Research - Redmond, US) [dblp]
- Burkhart Wolff (ETH Zürich, CH) [dblp]
- Dagstuhl Seminar 10421: Model-Based Testing in Practice (2010-10-17 - 2010-10-22)