Dagstuhl Seminar 13061

Fault Prediction, Localization, and Repair

(February 3 – 8, 2013)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/13061

Organizers
  • Mary Jean Harrold (Georgia Institute of Technology - Atlanta, US)
  • Friedrich Steimann (FernUniversität in Hagen, DE)
  • Frank Tip (University of Waterloo, CA)
  • Andreas Zeller (Universität des Saarlandes, DE)


Motivation

Even today, an unpredictable part of the total effort devoted to software development is spent on debugging, i.e., on finding and fixing bugs. This is despite the fact that powerful static checkers are routinely employed and find many bugs before a program is first executed, and despite the fact that modern software is often assembled from pieces (libraries, frameworks, etc.) that have already stood the test of time. In fact, while experienced developers are usually quick to find and fix their own bugs, they too spend too much time fixing the interplay of components that have never been used in combination before, or simply debugging other people's code. Better automated support for predicting, locating, and repairing bugs is therefore still required.

Due to the omnipresence of bugs on the one hand and their vastly varying nature on the other, the problems of fault prediction, localization, and repair have attracted research from many different communities, each relying on its individual strengths. Often enough, however, localizing a bug resembles solving a criminal case: no single procedure or piece of evidence is sufficient to identify the culprit unambiguously. It is therefore reasonable to expect that the best results can only be obtained by combining the (individually insufficient) evidence gathered by different, and ideally independent, procedures. One main goal of this seminar is therefore to connect the many different strands of research on fault prediction, localization, and repair.
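
To illustrate what combining such evidence could look like in practice, here is a minimal Python sketch (not a technique from the seminar itself; the function and data names are hypothetical) that merges per-statement suspiciousness scores produced by independent localizers into a single ranking by weighted averaging:

    # Hypothetical illustration: merging evidence from independent fault localizers.
    def combine_evidence(score_maps, weights=None):
        """Combine per-statement suspiciousness scores (each in [0, 1]) from
        several independent procedures into one ranked list."""
        weights = weights or [1.0] * len(score_maps)
        statements = set()
        for m in score_maps:
            statements.update(m)
        combined = {}
        for stmt in statements:
            # A statement missing from one score map contributes no evidence there.
            total = sum(w * m.get(stmt, 0.0) for m, w in zip(score_maps, weights))
            combined[stmt] = total / sum(weights)
        return sorted(combined.items(), key=lambda item: item[1], reverse=True)

    # Example: evidence from a spectrum-based localizer and from a static checker.
    spectrum_scores = {"s1": 0.5, "s2": 0.5, "s3": 1.0}
    checker_scores = {"s3": 0.8, "s4": 0.6}
    print(combine_evidence([spectrum_scores, checker_scores]))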

For researchers, it is not always obvious how debugging is embedded in the software production process. For instance, while ranking suspicious program statements by their likelihood of being faulty may seem sensible from a research perspective, programmers may not be willing to look at more than a handful of such locations when they have their own inkling of where a bug might be. Conversely, commercial programmers may not be aware of how inefficient their own debugging practices are, even though academics have developed promising alternatives. Bringing these two perspectives together is another goal of this seminar.
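
To make the ranking idea concrete, the following minimal Python sketch implements spectrum-based fault localization with the Ochiai metric, one common way of scoring statements by how strongly their execution correlates with failing tests; the input format (per-test coverage sets and pass/fail outcomes) is assumed purely for illustration and is not prescribed by the seminar material:

    # Minimal sketch of spectrum-based fault localization (Ochiai metric).
    from math import sqrt

    def rank_statements(coverage, passed):
        """coverage: test name -> set of statements executed by that test;
        passed: test name -> True if the test passed.
        Returns statements ordered from most to least suspicious."""
        total_failed = sum(1 for t in passed if not passed[t])
        statements = set().union(*coverage.values())
        scores = {}
        for stmt in statements:
            ef = sum(1 for t in coverage if stmt in coverage[t] and not passed[t])  # executed and failed
            ep = sum(1 for t in coverage if stmt in coverage[t] and passed[t])      # executed and passed
            denom = sqrt(total_failed * (ef + ep))
            scores[stmt] = ef / denom if denom else 0.0
        return sorted(scores, key=scores.get, reverse=True)

    # Example: s3 is executed by both failing tests and by no passing test, so it
    # tops the ranking; in practice a developer would inspect only the first few entries.
    coverage = {"t1": {"s1", "s2"}, "t2": {"s2", "s3"}, "t3": {"s1", "s3"}}
    passed = {"t1": True, "t2": False, "t3": False}
    print(rank_statements(coverage, passed))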

Last but not least, the growing body of open-source software, and with it the public availability of large regression test suites, provides unprecedented opportunities for researchers to evaluate their approaches on industrial-quality benchmarks. While standard benchmarks such as the so-called Siemens test suite still pervade the scientific literature on debugging, generalizing from experimental results obtained on such a small basis is more than questionable. Other disciplines, such as the model checking and theorem proving communities, have long established competitions based on open benchmarks to which anyone can submit their problems. With such benchmarks, progress would become objectively measurable and advances in research more visible. It is another goal of this seminar to establish a common understanding of the need for such benchmarks and to initiate the standards necessary for establishing them.


Copyright Mary Jean Harrold, Friedrich Steimann, Frank Tip, and Andreas Zeller

Participants
  • Rui Abreu (University of Porto, PT) [dblp]
  • Shay Artzi (IBM - Littleton, US) [dblp]
  • George K. Baah (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Benoit Baudry (INRIA Rennes - Bretagne Atlantique, FR) [dblp]
  • Margaret M. Burnett (Oregon State University, US) [dblp]
  • Satish Chandra (IBM India - Bangalore, IN) [dblp]
  • Jake Cobb (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Julian Dolby (IBM TJ Watson Research Center - Yorktown Heights, US) [dblp]
  • Marcus Frenkel (FernUniversität in Hagen, DE) [dblp]
  • Vijay Ganesh (University of Waterloo, CA) [dblp]
  • Milos Gligoric (University of Illinois - Urbana-Champaign, US) [dblp]
  • Alessandra Gorla (Universität des Saarlandes, DE) [dblp]
  • Mangala Gowri Nanda (IBM India Research Lab. - New Delhi, IN) [dblp]
  • Christian Hammer (Universität des Saarlandes, DE) [dblp]
  • Mary Jean Harrold (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Jens Krinke (University College London, GB) [dblp]
  • Ben Liblit (University of Wisconsin - Madison, US) [dblp]
  • Rupak Majumdar (MPI-SWS - Kaiserslautern, DE) [dblp]
  • Martin Monperrus (INRIA - University of Lille 1, FR) [dblp]
  • Alessandro Orso (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Marco Pistoia (IBM TJ Watson Research Center - Yorktown Heights, US) [dblp]
  • Andy H. Podgurski (Case Western Reserve University - Cleveland, US) [dblp]
  • Jeremias Rößler (Universität des Saarlandes, DE) [dblp]
  • Abhik Roychoudhury (National University of Singapore, SG) [dblp]
  • Barbara G. Ryder (Virginia Polytechnic Institute - Blacksburg, US) [dblp]
  • Hesam Samimi (UCLA, US) [dblp]
  • Friedrich Steimann (FernUniversität in Hagen, DE) [dblp]
  • Lin Tan (University of Waterloo, CA) [dblp]
  • Frank Tip (University of Waterloo, CA) [dblp]
  • Emina Torlak (University of California - Berkeley, US) [dblp]
  • Cemal Yilmaz (Sabanci University - Istanbul, TR) [dblp]
  • Andreas Zeller (Universität des Saarlandes, DE) [dblp]
  • Xiangyu Zhang (Purdue University - West Lafayette, US) [dblp]
  • Thomas Zimmermann (Microsoft Corporation - Redmond, US) [dblp]

Classification
  • Programming languages/compiler
  • Software engineering
  • Verification/logic

Keywords
  • Program analysis
  • Automated debugging
  • Fault prediction
  • Fault repair
  • Fault localization
  • Statistical debugging
  • Change impact analysis