Dagstuhl-Seminar 9719

Social Science Microsimulation: Tools for Modeling, Parameter Optimization, and Sensitivity Analysis

(May 5 – May 9, 1997)


Organizers:

  • K. Troitzsch (Koblenz)
  • N. Gilbert (Guildford)
  • R. Suleiman (Haifa)
  • U. Mueller (Marburg)

  • Tools and techniques for social science simulation - Suleiman, Ramzi; Troitzsch, Klaus G.; Gilbert, Nigel - Heidelberg: Physica-Verlag, 2000. VIII, 387 pp. ISBN: 3-7908-1265-X.


The Dagstuhl Seminar 9518 on Social Science Microsimulation: A Challenge for Computer Science (which all participants regarded as a very successful first attempt to bring together social scientists engaged in simulation and computer scientists interested in social science applications) resulted, among other things, in unanimity on two points: that there is still a lack of modeling tools adapted to the practice of social science modeling, and that there is a lack of experimental tools for testing simulation results against real-world data, for parameter optimization, and for sensitivity analysis. This is why we would like to organize another seminar especially devoted to modelers' activities before and after simulation proper.

In our opinion, user-friendly tools for designing simulation models (e.g. in a graphical user interface) and for evaluating results from a large number of simulation runs with varying parameters, initializations, and/or random number generator seeds are still a challenge for computer science.
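To make the second half of this concrete, the following is a deliberately minimal sketch (in Python, chosen for illustration only; the toy random-walk model, its parameter `p`, and the harness are all hypothetical, not part of any existing toolkit) of what "evaluating results from many runs with varying parameters and seeds" amounts to:

```python
import random
import statistics

def toy_simulation(p, seed):
    """Placeholder model: a 100-step random walk whose drift is the parameter p."""
    rng = random.Random(seed)          # per-run generator, so runs are reproducible
    x = 0.0
    for _ in range(100):
        x += p + rng.gauss(0.0, 1.0)
    return x

# Sweep every combination of parameter value and random seed,
# keeping all outcomes so they can be aggregated afterwards.
params = [0.0, 0.1, 0.2]
seeds = range(10)
results = {p: [toy_simulation(p, s) for s in seeds] for p in params}

for p, outcomes in results.items():
    print(f"p={p}: mean={statistics.mean(outcomes):.2f}, "
          f"stdev={statistics.stdev(outcomes):.2f}")
```

A real experimentation tool would add exactly what the text calls for on top of such a loop: run management, storage of all outputs, and interactive inspection of the aggregated statistics.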

The dynamic nature of simulation and the complexity of the models relevant to the social sciences means that difficult problems arise for the design of appropriate human-computer interfaces, especially since some users will be rather inexperienced with the use of software tools. The user interface will need to be configurable by the user, provide easy to use diagnostics, and be capable of reacting in real or near real time. The design of graphical user interfaces with these characteristics poses a significant challenge for HCI. While a similar challenge has been successfully met with simulators for laboratory experiments, social science simulators raise a host of new problems, including the need to devise new interface metaphors.

Modeling Tools

The advantages of using a special simulation toolkit are that less skill and less effort are required to build simulations; that graphical interfaces are provided for the builder, avoiding the difficult and time-consuming effort of reconstructing common output displays such as graphs; and that the resulting code can be easier for others to understand. However, all such environments limit the range of models which they can be used to build. At present, the limitations which these environments impose on researchers seem to be such that most do not use them; those that do tend to be the people who created the toolkits in the first place. However, this may be a temporary state of affairs, as better toolkits are developed, made more powerful and more flexible, and spread through the research community.

A prerequisite for making simulation toolkits more attractive for researchers in the social sciences is that they lend themselves to easy, intuitive, maybe even informal input of the model structure, which can best be achieved by a graphical user interface. While tools of this kind have long been available for the classical System Dynamics approach (and for discrete event simulation, which does not seem to be the main field of social science simulationists), they are not available for the modelling and simulation approaches now prevailing among social science simulationists: multilevel modelling and DAI modelling, where models still have to be designed using high-level formal languages like MIMOSE or Smalltalk. These languages in most cases do not support reuse of modules, so that new models or testbeds have to be programmed from scratch each time.

Because most users of these toolkits will be social scientists, not computer scientists, it is important to develop modelling languages which express the social science issues cleanly. This will involve the development of new formal languages, ones which focus primarily on the interaction between objects rather than on the properties of the objects themselves. Moreover, it would be desirable if these languages could be expressed visually (thus leading to a visual language readily understandable by social scientists) and had constraints 'built in' (e.g. through metaphor) to hinder the user from making syntactic and, possibly, semantic mistakes. The design of such languages is at the research edge of today's computer science. Research on their design would benefit the social science community, more generally those involved in building complex simulations in other disciplines, and those in the computer science community for whom the challenge would be a helpful spur to new ideas on the design of high-level languages.
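The contrast between object-centred and interaction-centred modelling can be sketched in a few lines. The opinion-averaging model below is a hypothetical illustration (Python again, standing in for the kind of formal language discussed above): note that the behavioural rule lives in the `interact` function defined on a *pair* of agents, not inside the agent class:

```python
import random

class Agent:
    """An agent carries state only; behaviour is defined on interactions."""
    def __init__(self, opinion):
        self.opinion = opinion

def interact(a, b, weight=0.3):
    """Interaction rule on the pair: both opinions move toward their midpoint."""
    mid = (a.opinion + b.opinion) / 2
    a.opinion += weight * (mid - a.opinion)
    b.opinion += weight * (mid - b.opinion)

rng = random.Random(42)
agents = [Agent(rng.uniform(-1, 1)) for _ in range(50)]

# Repeatedly pick a random pair and apply the interaction rule.
for _ in range(1000):
    a, b = rng.sample(agents, 2)
    interact(a, b)

spread = max(ag.opinion for ag in agents) - min(ag.opinion for ag in agents)
print(f"opinion spread after 1000 interactions: {spread:.4f}")
```

An interaction-focused language would let the modeller state rules like `interact` directly (ideally visually), with the bookkeeping of agents and scheduling generated automatically.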

The seminar should give both social science simulationists and computer scientists an opportunity to learn about each other's needs and achievements. Participants should discuss which experience from computer science simulation (and from simulation in other realms of science that are more successful, or less demanding, in this respect) can be used for designing more powerful user interfaces for modelling needs in the social sciences.

Model Validation

Most simulations generate very large quantities of output as they are run, and social science simulations (which may be simulating the actions of hundreds or thousands of agents) are no exception. New tools for displaying such output in ways which reveal 'interesting' patterns are urgently needed. Existing visualisation tools fall into two classes: those which are able to display large quantities of highly clustered or correlated data (e.g. those used to visualise the output from simulations in the physical sciences) and those which can be used for the visualisation of small quantities of data with a high error component (e.g. psychological data about a few individuals). Social science data tends to fit into neither of these categories, typically consisting of very large quantities of data that also have a large 'error' component. New ideas are needed to deal with such data.
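A minimal sketch of the starting point such tools must improve upon: even the crudest aggregation (here a text histogram over one simulated output variable, with the noisy mixture data invented purely for illustration) compresses a large output into something inspectable, though it hides exactly the structure-plus-noise patterns the text argues social science data exhibits:

```python
import random

rng = random.Random(0)
# Stand-in for a large, noisy simulation output: 100,000 observations from a
# mixture of a central tendency and occasional large shifts ("error" component).
data = [rng.gauss(0.0, 1.0) + rng.choice([-3, 0, 0, 0, 3]) for _ in range(100_000)]

# Crude text histogram: bin the data, scale bar lengths to the largest bin.
bins = 9
lo, hi = min(data), max(data)
width = (hi - lo) / bins
counts = [0] * bins
for x in data:
    counts[min(int((x - lo) / width), bins - 1)] += 1   # clamp x == hi into last bin

peak = max(counts)
for i, c in enumerate(counts):
    print(f"{lo + i * width:7.2f} | {'#' * round(40 * c / peak)}")
```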

All three levels of model validation (finding optimal parameter values for a given model, finding the optimal model specification for a given set of parameters, and finding the optimal set of parameters) require decision procedures and algorithms which, even for the most basic models that can be solved analytically, are in very limited supply.

The validation problem is that, once a simulation has been designed and run (producing a large amount of output), one needs to know whether:

  • the parameters and starting configuration of the simulation are the best ones;
  • the output from the simulation resembles what has been or could be observed in the world;
  • the same or similar results would be obtained if the simulation were run again with slightly different parameters;
  • the model is the simplest that yields the desired output and is therefore to be preferred over other, more complex models.

These are all difficult questions. One cannot look to other forms of social science research for their solution, because the questions have not been completely answered there either. One partial solution to determining the extent to which the output depends on the particular values of the model parameters is to engage in a sensitivity analysis. However, the number of parameters in most models and the need to check all or most combinations of values of these parameters mean that a thorough sensitivity analysis is usually impracticable.
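The combinatorial obstacle is easy to quantify. In this hypothetical illustration (the parameter names and the choice of 6 parameters at 5 levels each are invented), a full factorial sensitivity design already requires thousands of simulation runs:

```python
import itertools
import math

# Hypothetical design: 6 parameters, each examined at 5 levels.
levels_per_param = {f"p{i}": [0.0, 0.25, 0.5, 0.75, 1.0] for i in range(6)}

# A full factorial design enumerates every combination of levels,
# so the number of runs is the product of the level counts: 5**6.
n_runs = math.prod(len(v) for v in levels_per_param.values())
print(f"full factorial design: {n_runs} runs")   # 5**6 = 15625

# Enumerating the grid lazily; show the first few combinations.
grid = itertools.product(*levels_per_param.values())
for combo in itertools.islice(grid, 3):
    print(dict(zip(levels_per_param, combo)))
```

Each added parameter multiplies the run count by its number of levels, which is why the text calls a thorough sensitivity analysis "usually impracticable" and why fractional designs or optimization heuristics are needed instead.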

Lest it be thought that these methodological problems undermine the idea of simulation as a research method, it must be emphasised that other approaches have to deal with similar problems; it is just that simulation brings them to the fore more clearly. One strategy to overcome the methodological problems is to re-orientate the objective of simulation from developing models which in some sense 'fit' the social world to using simulation to develop conjectures and ideas that could then be applied (perhaps using some other research methodology) to the study of human societies. But if this strategy is not, or cannot be, taken, it is worthwhile to develop tools which support both sensitivity analysis and parameter optimization; some such tools have already been incorporated into simulation toolkits, but only for applications outside the social sciences. The social sciences, however, face data problems which are more difficult to solve than in most other sciences. Thus, it would be necessary to discuss the experience from the cooperation between computer science and other sciences with similar problems, and to find out which achievements can be translated from successful applications in their fields into social science applications with even more complex models and with a prevailing interest in emerging structures.