

Dagstuhl Seminar 9328

"Average-Case"-Analysis of Algorithms

( Jul 12 – Jul 16, 1993 )

Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/9328

Organizers
  • H. Prodinger
  • Ph. Flajolet
  • R. Kemp



Summary

Analysis of algorithms aims at precisely characterizing the performance of the major data structures and algorithms of computer science. The emphasis of the meeting was on average-case analysis. The field was founded by Knuth some 31 years ago, when he first showed (amongst many other things) that most basic sorting and searching methods can be precisely analyzed and organized into a complexity scale with respect to expected performance.

This meeting was the first ever to be dedicated exclusively to the analysis of algorithms. The number of invited participants was 37, of whom 30 gave presentations of recent results summarized in the Dagstuhl-Seminar-Report. The talks can be grouped roughly as dealing with Methods or Applications, the two aspects often being closely intertwined.

Methods concern combinatorial models of random structures that are relevant to major algorithms and data structures. Recent years have seen important developments in this area where key parameters of trees, permutations, strings, and so on, are beginning to be organized into coherent theoretical frameworks. Current approaches largely revolve around generating functions as the basic object both for combinatorial enumerations and for asymptotic analysis.
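As a small illustration (not taken from the report) of generating functions serving as the basic object for combinatorial enumeration, consider binary trees: the functional equation B(z) = 1 + z·B(z)² translates directly into the Catalan convolution recurrence for the coefficients b_n = [zⁿ] B(z). A minimal Python sketch:

```python
# Illustrative sketch, not from the seminar report: counting binary trees
# by reading off the coefficient recurrence from B(z) = 1 + z*B(z)^2.

def binary_tree_counts(n_max):
    """Return [b_0, ..., b_{n_max}], where b_n is the number of binary
    trees with n internal nodes (the Catalan numbers)."""
    b = [0] * (n_max + 1)
    b[0] = 1  # the empty tree
    for n in range(1, n_max + 1):
        # b_n = sum_{k=0}^{n-1} b_k * b_{n-1-k}, the convolution
        # coming from the z*B(z)^2 term of the functional equation
        b[n] = sum(b[k] * b[n - 1 - k] for k in range(n))
    return b

print(binary_tree_counts(6))  # → [1, 1, 2, 5, 14, 42, 132]
```

The same coefficients also drive asymptotic analysis: singularity analysis of B(z) yields the familiar b_n ~ 4ⁿ/(√π · n^{3/2}) growth, which is the kind of translation from generating function to asymptotics that the frameworks above systematize.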

Presentations related to Methods started with the talk of Don Knuth, who showed how generating function techniques make it possible to quantify the evolution of random graphs. Flajolet surveyed the use of Mellin transforms for asymptotics, with applications to digital trees, protocols, sorting networks, etc.; Kirschenhofer revealed deep connections between some of Ramanujan's modular identities and digital trees; Sprugnoli presented a coherent framework for the asymptotics of classes of generating functions; Kemp and Trier developed generating function methods for multidimensional tree models, while Bergeron showed how to systematically analyze increasing trees in this way. Another facet was the connection between average-case analysis and probabilistic methods, examples being presented by Louchard (stochastic processes), Gutjahr (random trees and branching processes), and Drmota (limit distributions and complex multivariate asymptotics).

Applications by now extend far beyond sorting and searching, although the classical problems still provide a considerable source of inspiration and open questions. For instance, it is unknown to this day whether the asymptotic distribution of the costs of Quicksort is expressible in terms of standard special functions, though many characteristics of the distribution have recently become available (tails, higher moments, etc.). A 20-year-old problem was solved by Sedgewick, who finally settled the question of the average-case complexity of heapsort. Compton showed how the Ramanujan-Knuth function is essential to understanding deadlock effects in computer systems; Salvy used classical hypergeometric theory and integral representations to analyze quadtrees of all dimensions; Vallée demonstrated how the lattice reduction algorithm of Gauß can be subjected to precise average-case analysis; finally, Panny showed that even a simple algorithm like Mergesort has a hidden fractal behaviour.
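To make the Quicksort example concrete, here is an illustrative sketch (not from the report) of its classical average-case recurrence under one common cost model that charges n − 1 comparisons per partitioning stage: C_n = (n − 1) + (2/n) Σ_{k<n} C_k, whose well-known closed form is C_n = 2(n+1)H_n − 4n. The distribution itself, as noted above, is far harder; only the expectation is this explicit.

```python
# Illustrative sketch, not from the seminar report: expected Quicksort
# comparisons via the recurrence and its harmonic-number closed form.
from fractions import Fraction

def quicksort_expected_comparisons(n_max):
    """Return [C_0, ..., C_{n_max}] computed exactly from
    C_n = (n - 1) + (2/n) * sum_{k=0}^{n-1} C_k (random pivot model)."""
    c = [Fraction(0)]          # C_0 = 0: nothing to sort
    running = Fraction(0)      # running sum of c[0..n-1]
    for n in range(1, n_max + 1):
        c.append(Fraction(n - 1) + Fraction(2, n) * running)
        running += c[-1]
    return c

def closed_form(n):
    """C_n = 2(n+1)H_n - 4n, with H_n the n-th harmonic number."""
    h_n = sum(Fraction(1, k) for k in range(1, n + 1))
    return 2 * (n + 1) * h_n - 4 * n

print(quicksort_expected_comparisons(5)[5], closed_form(5))  # → 37/5 37/5
```

Since H_n ~ ln n, the closed form immediately gives the familiar ~2n ln n expected comparison count, a prototypical average-case result in the spirit of this paragraph.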

Analysis also makes tangible progress when confronted with new classes of problems. Thus, the behaviour of data compression algorithms is by now better understood (Jacquet, Szpankowski, Vitter); it is becoming possible to precisely quantify the complexity of string matching algorithms (Régnier), with possible implications for DNA sequencing (Gonnet). Skip lists are a provably efficient alternative to balanced trees (Prodinger), and methods originally directed towards random tree models also turn out to be useful in understanding real-time systems (Schmid). Indeed, efficient simulation algorithms may well result from rather theoretical frameworks (Zimmermann). Exotic applications are even to be found in fault tolerance (Sipala) and numerical integration (Tichy).

At the same time, we are witnessing progress in probabilistic analysis whose scope goes beyond average-case estimates. Direct probabilistic methods in the style of the Hungarian school (Erdős et al.) apply well to random graph and combinatorial optimization problems. Some clear cases are the knapsack problem (Marchetti), merging algorithms (De La Vega), interconnection networks (Spirakis), and hypergraph algorithms (Althöfer). Perhaps even more cross-fertilization is to be expected in future years. The already mentioned talks of Knuth, Drmota, Louchard, and Gutjahr, as well as Mahmoud's work (see his recent book on random search trees), point towards interesting convergences with classical analytic frameworks.

Across the diversity of applications, the emergence of general methods that make it possible to analyze what one could call complex algorithmic systems is becoming more and more visible. We can only hope that this small booklet of abstracts conveys some of the participants' enthusiasm for a maturing, 31-year-old subject.


Participants
  • H. Prodinger
  • Ph. Flajolet
  • R. Kemp