https://www.dagstuhl.de/04371

September 5 – 10, 2004, Dagstuhl Seminar 04371

Perspectives of Model-Based Testing

Organizers

Ed Brinksma (Embedded Systems Institute – Eindhoven, NL)
Wolfgang Grieskamp (Microsoft Research – Redmond, US)
Jan Tretmans (Radboud University Nijmegen, NL)

Documents

Dagstuhl Seminar Proceedings DROPS
List of Participants

Summary

Software testing

Software pervades our society and daily life, and we are increasingly dependent on it. This applies to all kinds of software: software in safety-critical systems such as airplanes, in consumer products, in mobile phones, in telecom switches, in pacemakers, in process control systems, in financial systems, in administration systems, etc. Consequently, the quality of software is an issue of increasing importance and growing concern.

Systematic testing is one of the most important and widely used techniques to check the quality of software. Testing, however, is often a manual and laborious process without effective automation, which makes it error-prone, time-consuming, and very costly. Estimates are that testing consumes between 30% and 50% of the total software development costs. Moreover, the majority of the testing activities take place in the most critical phase of software development, viz. at the end of the project, just before software delivery.

The tendency is that the effort spent on testing is still increasing, due to the continuing quest for better software quality and the ever growing size and complexity of systems. The situation is aggravated by the fact that the complexity of testing tends to grow faster than the complexity of the systems being tested, in the worst case even exponentially. Whereas development and construction methods for software allow the building of ever larger and more complex systems, there is a real danger that testing methods cannot keep pace with construction, so that these new systems can no longer be tested sufficiently quickly and thoroughly. This may seriously hamper the development of future generations of software systems.

Model-based testing

One of the new technologies to meet the challenges imposed on software testing is model-based testing. In model-based testing, a model of the desired behaviour of the system under test (SUT) is the starting point for testing. Model-based testing has recently gained attention with the popularization of modeling itself, both in academia and in industry. The main virtue of model-based testing is that it allows test automation that goes well beyond the mere automatic execution of manually crafted test cases. It allows for the algorithmic generation of large amounts of test cases, including test oracles, completely automatically from the model of required behaviour. If this model is valid, i.e. expresses precisely what the system under test should do, all these tests are also provably valid. Moreover, these models can, in principle, also be used for defining, e.g., specification coverage metrics and test selection with mathematical rigour, so that quantifiable confidence is obtained that a product faithfully conforms to its specification.
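The idea can be illustrated with a minimal sketch: given a behavioral model of the SUT (here an assumed toy Mealy machine for a light switch — all names and the toggle-switch example are illustrative, not from the seminar), test sequences and their expected outputs (the oracles) are derived mechanically from the model and replayed against an implementation.

```python
# A minimal sketch of model-based testing with a Mealy-machine model.
# The toggle-switch model and all names here are illustrative assumptions.
from itertools import product

# Model: for each (state, input), the next state and the expected output.
MODEL = {
    ("off", "press"): ("on", "light_on"),
    ("on",  "press"): ("off", "light_off"),
    ("off", "lock"):  ("off", "ignored"),
    ("on",  "lock"):  ("on",  "ignored"),
}

def generate_tests(model, initial="off", depth=3):
    """Enumerate all input sequences up to `depth` and compute the
    expected output sequence (the oracle) for each from the model."""
    inputs = sorted({i for (_, i) in model})
    tests = []
    for n in range(1, depth + 1):
        for seq in product(inputs, repeat=n):
            state, expected = initial, []
            for i in seq:
                state, out = model[(state, i)]
                expected.append(out)
            tests.append((seq, expected))
    return tests

def run_tests(sut_factory, tests):
    """Execute each generated test on a fresh SUT instance and compare
    the observed outputs against the model-derived oracle."""
    failures = []
    for seq, expected in tests:
        sut = sut_factory()
        actual = [sut.step(i) for i in seq]
        if actual != expected:
            failures.append((seq, expected, actual))
    return failures

class GoodSwitch:
    """A conforming implementation of the modeled switch (for the demo)."""
    def __init__(self):
        self.on = False
    def step(self, i):
        if i == "press":
            self.on = not self.on
            return "light_on" if self.on else "light_off"
        return "ignored"

tests = generate_tests(MODEL)
print(len(tests), "tests generated;",
      len(run_tests(GoodSwitch, tests)), "failures")
```

A real model-based testing tool would replace the exhaustive enumeration with coverage-driven or on-the-fly test selection, but the division of labour is the same: the model supplies both the stimuli and the verdicts.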

From an industrial perspective, model-based testing is a promising technique to improve the quality and effectiveness of testing, and to reduce its cost. The current state of practice is that test automation mainly concentrates on the automatic execution of tests. For this, a multitude of commercial test execution tools is available, but these tools do not address the problem of test generation. Model-based testing aims at automatically generating high-quality test suites from models, thus complementing automatic test execution.

From an academic perspective, model-based testing is a natural extension of formal methods and verification techniques, where many of the formal techniques can be reused. Formal verification and model-based testing serve complementary goals. Formal verification intends to show that a system has some desired properties by proving that a model of that system satisfies these properties. Thus, any verification is only as good as the validity of the model on which it is based. Model-based testing starts with a (verified) model, and then intends to show that the real, physical implementation of the system behaves in compliance with this model. Due to the inherent limitations of testing, such as the limited number of tests that can be performed, testing can never be complete: testing can only show the presence of errors, not their absence.

The interest in model-based testing from both industry and academia provides perspectives for academic-industrial cooperation in this area. This is also reflected in the relatively high industrial participation in the seminar, with researchers from Siemens, DaimlerChrysler, IBM, France Telecom, and Microsoft attending, and even co-organizing.

The Seminar

The aim of the seminar Perspectives of Model-Based Testing was to bring together researchers and practitioners from industry and academia to discuss the state of the art in theory, methods, tools, applications, and industrialization of model-based testing, and to identify the important open issues and challenges.

Conclusion

The presentations at the seminar gave a good insight into what has been achieved in the area of model-based testing, and, even more importantly, they gave clear indications of what has to be done before we can expect widespread industrial use of model-based testing.

Compared with the 1998 seminar, the area of model-based testing has certainly matured, with expanding interest also from industry. The feasibility of model-based testing has been shown, more groups are involved, more theories are supported by tools, some of them close to industrial-strength tools, and there are successful applications. Software testing has become a respectable research area.

The prospects for model-based testing to improve the quality and to reduce the cost of software testing are positive, but still more effort is needed, both in developing new theories and in making the existing methods and theories applicable, e.g., by providing better tool support.

The general opinion was that the seminar was successful, and it was felt that, apart from the usual publication and presentation fora (e.g., FATES, ISSTA, MBT, TACAS, CAV, TestCom, ...), another Dagstuhl meeting on model-based testing should be organized in the future.
