GI-Dagstuhl Seminar 09492

Model-Driven Quality Prediction

(Nov 29 – Dec 02, 2009)


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/09492

Organizers



Motivation

Early design-time prediction of extra-functional quality attributes of software architectures, such as performance or reliability, is the subject of intense research. The motivation for early quality predictions is the assumption that corrective measures are easy to realise at low cost during the design phase. Furthermore, such predictions allow an engineering approach to software development: software architects create models of the system under development, analyse their extra-functional properties, redesign them if necessary, and implement the system only once the requirements are met. Since models assume a central role in this process, it is usually called model-based software design in the literature. Model-based software design also helps when there are multiple design alternatives that fulfil the requirements. In this case, systematic trade-off analyses become available in which software architects can, for example, maximise the utility-to-cost ratio in collaboration with the software's stakeholders.

Recent research extends these earlier model-based methods (as surveyed for performance by Balsamo et al.) into model-driven quality prediction techniques. Model-driven software analysis and prediction aims to automate the analysis of the information available in the model, thus replacing the manual analysis that dominates model-based approaches. To put model-driven quality prediction into practice, the software architect first creates a (semi-)formal model of the system's architecture, including aspects of the system's static structure, dynamic behaviour, and deployment on hardware nodes. Afterwards, the software architect refines the model with quality annotations relevant to the quality attributes of interest. For example, for performance evaluation, the architect annotates steps in an activity diagram with the amount of CPU, hard disk, or network resources needed to perform individual actions. The following steps then execute automatically.

  • First, the created model (called the software design model) is transformed into a prediction model specific to the quality attribute under evaluation. For example, for performance, the transformation can generate queueing networks, layered queueing networks, stochastic timed Petri nets, or stochastic process algebras (SPAs).
  • Next, the resulting prediction model is solved by standard solvers relying on analytical or simulation-based methods. The solution contains values for the metrics relevant to the respective analysis method, which have to be transformed back into annotations on the original software design model (see the sketch after this list).
  • Finally, the software architect uses these values as the foundation for design decisions.
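To make these automated steps more concrete, the following minimal Python sketch turns the CPU and disk annotations of a hypothetical activity diagram into a simple open queueing network with one M/M/1 station per resource, solves it analytically, and prints the metrics that would be fed back into the design model. All component names and demand values are invented; this illustrates the principle only and is not the transformation performed by any particular prediction tool.

```python
# Minimal sketch: aggregate resource-demand annotations from a design model
# into an open queueing network (one M/M/1 station per resource) and solve
# it analytically. Names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AnnotatedAction:
    name: str                # step in a hypothetical activity diagram
    resource: str            # e.g. "cpu", "disk"
    service_demand_s: float  # annotated demand per request, in seconds

actions = [
    AnnotatedAction("parse request", "cpu", 0.002),
    AnnotatedAction("query database", "disk", 0.005),
    AnnotatedAction("render response", "cpu", 0.003),
]

arrival_rate = 80.0  # requests per second (assumed workload)

# Step 1: transform the annotated design model into a prediction model
# by aggregating demands per resource.
demands = {}
for a in actions:
    demands[a.resource] = demands.get(a.resource, 0.0) + a.service_demand_s

# Step 2: solve each station analytically as an M/M/1 queue.
results = {}
for resource, demand in demands.items():
    utilization = arrival_rate * demand
    if utilization >= 1.0:
        results[resource] = (utilization, float("inf"))  # saturated station
    else:
        residence_time = demand / (1.0 - utilization)
        results[resource] = (utilization, residence_time)

# Step 3: report the metrics that would be annotated back onto the model.
for resource, (util, rt) in results.items():
    print(f"{resource}: utilization={util:.2f}, residence time={rt * 1000:.1f} ms")
print(f"predicted response time: {sum(rt for _, rt in results.values()) * 1000:.1f} ms")
```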

While the whole process has demonstrated its applicability in several case studies, and while the use of model-driven techniques has made it much more efficient than purely model-based approaches, creating the model itself remains a time-consuming task. This is especially true for the annotated software design model: even if we assume that the design model has to be created in the course of the normal software development process anyway (as is the case, for example, in the Rational Unified Process), this model still needs to be annotated with reliable estimates of the quality annotations. Examples of such annotations include resource demands, loop iteration counts, branching probabilities, and failure rates. Estimating values even for simple quality annotations is a difficult task. The estimates become more complex still when the annotations support parametrisation, as introduced by Koziolek. In these cases, the model explicitly addresses the fact that many annotations also depend on the data processed by the system (e.g. processing 1 kB of data takes less time than processing 1 MB). However, as the data is usually supplied by the system's users, it is unknown at the time when the specification of individual components or subsystems is created, even though specifying components at this stage is desirable in order to build reusable models. Typically, such estimates are also too difficult to be made by guessing with a rule-of-thumb strategy. In such cases, annotations are based on measurements taken on existing components of the system, such as pre-existing application components, middleware platforms, database systems, or operating system services. Taking these measurements usually involves deploying existing components in test environments and measuring them in prototypes, so that the software architect can take the collected measurements and convert them into the annotations needed for the software design model. Today, most parts of this process require manual interaction.
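As an illustration of such a parametrised annotation, the following sketch expresses the CPU demand of a hypothetical component service as a function of the size of the processed data. The fixed overhead and per-byte cost are invented values, not measurements, and the linear form is only an assumption chosen for simplicity.

```python
# Illustrative sketch of a parametrised resource-demand annotation: the CPU
# demand of a component service depends on an input parameter (data size)
# rather than being a constant. The coefficients are invented.

def cpu_demand_seconds(data_bytes: int,
                       fixed_overhead_s: float = 0.0005,
                       per_byte_s: float = 2e-8) -> float:
    """Parametric annotation: demand = fixed overhead + per-byte cost."""
    return fixed_overhead_s + per_byte_s * data_bytes

# The same component specification yields different predictions once the
# usage profile (here: the data size supplied by the users) is known.
for size in (1 * 1024, 1 * 1024 * 1024):  # 1 kB vs 1 MB
    print(f"{size:>8} bytes -> {cpu_demand_seconds(size) * 1000:.2f} ms")
```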

As explained in the previous paragraph, measurements of existing components are complicated by their dependency on the characteristics of the test environment. Obviously, a component measured in an isolated test environment performs differently from a component integrated into an application, where it shares resources with other components. Component benchmarking therefore aims to capture the dependency of the annotations on the shared resources in a parametric expression of the measurement results. Coupled with a model of resource sharing, which can be built from the models of the application architecture and deployment, such annotations compensate for the dependency of the measurement results on the test environment. The complexity of creating such highly parameterised models obviously hinders a widespread application of model-driven prediction methods. Today, a lot of expertise and experience is required to design and conduct the experiments. To lower the effort needed, we therefore also need model-driven methods that collect and interpret data and transform it into annotations to be added to the software design model.
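The following sketch hints at how a parametric expression could be derived from measurements: it fits invented measurement data with an ordinary least-squares line so that the resulting coefficients could serve as a parametrised resource-demand annotation. It is a deliberate simplification that considers only one parameter (input size) and ignores resource sharing with other components.

```python
# Minimal sketch of turning raw measurements into a parametric annotation:
# fit measured service times against input size with least squares, so the
# coefficients can be fed back into the design model as a parametrised
# resource demand. All measurement values below are invented.

import numpy as np

# Hypothetical measurements from a test deployment of an existing component:
# (input size in bytes, measured CPU time in seconds)
sizes = np.array([1e3, 1e4, 1e5, 1e6, 2e6])
cpu_times = np.array([0.0006, 0.0008, 0.0025, 0.0210, 0.0415])

# Fit cpu_time ~ per_byte * size + fixed_overhead
per_byte, fixed_overhead = np.polyfit(sizes, cpu_times, deg=1)

print(f"fixed overhead ~ {fixed_overhead * 1000:.3f} ms")
print(f"per-byte cost  ~ {per_byte * 1e9:.2f} ns/byte")

# The fitted coefficients become the quality annotation, e.g.
# demand(bytes) = fixed_overhead + per_byte * bytes
```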

Although several ideas exist in this area, no approach has yet been integrated into a model-driven tool chain. There are some approaches that tackle the issue of collecting performance model input from generated prototypes. For example, Woodside and Schramm use prototypes generated from Layered Queueing Networks (LQNs) for this purpose. Similar approaches based on different source models were presented by Zhou, Gorton, and Liu, by Grundy, Cai et al., and by Becker et al. The question of how to automatically interpret measurements to obtain performance model inputs was recently tackled by Woodside using statistical models and by Krogmann et al. using genetic algorithms. However, none of the mentioned approaches integrates explicitly into a model-driven performance prediction process. From the summarised state of the art, it can be concluded that there are two largely disjoint communities working on predictable software development. One group researches model-driven software analysis methods for quality attributes like performance and reliability. The other group focuses on benchmarking and measuring existing software systems or prototype implementations. In cases where the model creation should not be based on manual estimates, model-driven software analysis approaches try to use results from the measurement community. To foster research in both areas, but especially in the intersection of these research topics, the proposed Dagstuhl seminar is expected to produce a state-of-the-art survey of both fields and their intersection. This survey should provide a solid foundation for further research in the area and aims to close the gap by identifying the missing elements.
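To illustrate the general idea of search-based interpretation of measurements, the following toy sketch evolves a single unknown service demand until a simple M/M/1 model reproduces invented response-time observations. It is emphatically not the approach of Krogmann et al. or Woodside; it only conveys how measurement interpretation and prediction models can be coupled in an automated loop.

```python
# Toy illustration of search-based inference of performance-model inputs from
# measurements. All numbers are invented; the (1+1)-style evolutionary search
# and the M/M/1 model are stand-ins for the far richer published approaches.

import random

# Observed mean response times (seconds) at different arrival rates (1/s),
# e.g. taken from a prototype or an existing component in a test environment.
observations = [(20.0, 0.0068), (50.0, 0.0086), (80.0, 0.0115)]

def predicted_response_time(demand_s: float, arrival_rate: float) -> float:
    """M/M/1 prediction used as the (fixed) performance model."""
    utilization = arrival_rate * demand_s
    return float("inf") if utilization >= 1.0 else demand_s / (1.0 - utilization)

def fitness(demand_s: float) -> float:
    """Sum of squared errors between model predictions and measurements."""
    return sum((predicted_response_time(demand_s, lam) - rt) ** 2
               for lam, rt in observations)

# Evolve the unknown demand by mutation, keeping improvements only.
random.seed(1)
best = random.uniform(0.001, 0.02)
for _ in range(200):
    candidate = abs(best + random.gauss(0.0, 0.001))
    if fitness(candidate) < fitness(best):
        best = candidate

print(f"inferred service demand: {best * 1000:.2f} ms per request")
```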


Participants
  • Vlastimil Babka (Charles University - Prague, CZ)
  • Nick Baltas (Imperial College London, GB)
  • Steffen Becker (Universität Paderborn, DE) [dblp]
  • Fabian Brosig (KIT - Karlsruher Institut für Technologie, DE)
  • Radu Calinescu (University of York, GB) [dblp]
  • Aida Causevic (Mälardalen University - Västerås, SE)
  • Andreas Dittrich (HU Berlin, DE)
  • Mauro Luigi Drago (Polytechnic University of Milan, IT)
  • Antonio Filieri (Polytechnic University of Milan, IT) [dblp]
  • Tobias Goldschmidt (HU Berlin, DE)
  • Vincenzo Grassi (University of Rome "Tor Vergata", IT) [dblp]
  • Michael Hauck (FZI - Karlsruhe, DE)
  • Nikolaus Huber (KIT - Karlsruher Institut für Technologie, DE)
  • Peter Libic (Charles University - Prague, CZ)
  • Raffaela Mirandola (Polytechnic University of Milan, IT) [dblp]
  • Amir Molzam Sharifloo (Polytechnic University of Milan, IT)
  • Diego Perez (University of Zaragoza, ES)
  • Enrico Randazzo (University of Rome "Tor Vergata", IT)
  • Ralf H. Reussner (KIT - Karlsruher Institut für Technologie, DE) [dblp]
  • Séverine Sentilles (Mälardalen University - Västerås, SE) [dblp]
  • Catia Trubiani (University of L'Aquila, IT) [dblp]
  • Petr Tuma (Charles University - Prague, CZ) [dblp]
  • André van Hoorn (Universität Kiel, DE) [dblp]
  • Aneta Vulgarakis (Mälardalen University - Västerås, SE)
  • Dennis Westermann (SAP SE - Karlsruhe, DE)
  • Dmitrijs Zaparanuks (University of Lugano, CH)

Classification
  • early quality prediction
  • model-driven analysis
  • software analysis