https://www.dagstuhl.de/01111

March 11 – 16, 2001, Dagstuhl Seminar 01111

Methodology of Evaluation in Computational Medical Imaging

Organizers

Kevin W. Bowyer (University of South Florida, US)
Murray H. Loew (George Washington University, US)
H. Siegfried Stiehl (Universität Hamburg, DE)
Max Viergever (Utrecht University, NL)

For information about this Dagstuhl Seminar, please contact

Dagstuhl Service Team

Documents

List of Participants
Dagstuhl-Seminar-Report 301

Motivation

About one decade ago, Yannis Aloimonos complained that "Unfortunately, there is a disconcerting lack of visual systems which perform well in real-world environments, particularly when compared to the amount of mathematical theory published on the subject", a complaint which holds not only for computational vision in general but in particular for the safety-critical case of computational medical imaging (a terminus technicus which commonly subsumes medical image formation, processing, analysis, interpretation, and visualization). One reason for this unfortunate situation is clearly that the experimental basis of computational vision as a scientific discipline is still rather weak. As a down-to-earth consequence, it is by no means clear to an industrial system designer on which grounds she/he should rely on a particular algorithm, method, or proposed tool when faced with the problem of putting academic research to work. Nor does it seem clear to a clinician what kind, degree, or quality of support she/he can expect in routine work from proffered computational medical imaging (CMI) tools that claim to provide it. In other words, CMI seen as a coin has a shiny and scientifically rewarding theory side but a rather rusty, not to say puny, practice side.

Meanwhile, a growing awareness can be observed in the CMI community that evaluation aiming at performance characterization is a critical issue. Complementarily, there is a strong need from both clinical and industrial actors to tackle the theoretical as well as experimental problems associated with these issues, since the dissemination of theoretical advances into practical settings requires a deep understanding of the assets, limitations, application scope, etc. of CMI algorithms, methods, and tools. Moreover, it is safe to state that without such a deep understanding, gained from a scientific approach, the design of interactive CMI systems will be severely hampered, since efficient human-centered interaction should take place on the basis of results of computational processes that are trustworthy, ideally results that are consistent with the theoretical proofs of a computational theory. In contrast to other domains drawing upon visual data, CMI stands out for reasons of required safety, accuracy, robustness, ergonomics, etc. Apart from that, CMI is also seen as a major future high-tech market, hence the development of successful products strongly depends on bridging the gap between theory, experiment, and practice. Obviously, solutions to these problems reside in a space composed of multiple dimensions, to name a few: CMI theory, CMI practice (incl. the design of algorithms and visual data structures), clinical requirement analysis, and industrial platform constraints.

Given the lack of well-grounded, internationally accepted, and standardized methods for evaluation, and given the specificity of CMI as briefly sketched above, it is high time to bring together leading experts from the CMI community in the inspiring atmosphere of Schloss Dagstuhl to discuss the state of the art and technology as well as routes to be jointly taken in the near future. After the successful first seminar on more general issues of performance characterization in computational vision in 1998, and given the most recent publications on domain-unspecific topics of evaluation in computational vision (see e.g. "Empirical Evaluation Techniques in Computer Vision", IEEE Computer Society Press, 1998, edited by K.W. Bowyer and J. Phillips, as well as "Performance Characterization and Evaluation of Computer Vision Algorithms", Kluwer Academic Publishers, 2000, edited by R. Klette, H.S. Stiehl, M.A. Viergever and K.L. Vincken), this seminar will focus on particular domain-specific issues related to medical imagery, e.g. performance characterization of computational processes for segmentation, analysis, registration, and real-time visualization of multi-dimensional and multi-modal images.

In terms of priority, the focus will be on the following concrete topics:

  1. identification of clinical, methodical, and technical desiderata (related to accuracy, precision, real-time performance, degree of automation/interaction, etc. incl. requirement analysis/specification w.r.t. classes of tasks from different clinical domains)
  2. analysis of algorithms w.r.t. resource consumption, complexity, convergence, stability, range of admissible input data, etc.
  3. validation of accuracy, robustness, etc. of algorithms for interactive/semi-automated/automatic segmentation, analysis, registration, and visualization of medical imagery
  4. theoretical/methodological issues such as definition of ground truth and gold standards, value of phantoms, imaging simulators, and synthetic test data, issue of certification of algorithms
  5. selection of a representative set of clinical routine images related to specific domains and tasks (certified clinical reference cases and test image data base)
  6. definition of test beds, experimental strategies, performance measures, etc. (see the illustrative sketch following this list)
  7. definition of internationally standardized benchmarks
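
As an illustration of the kind of performance measure referred to in item 6, the following minimal sketch (Python with NumPy; the mask data and function names are purely illustrative and not part of the seminar material) computes two overlap measures commonly used when validating a segmentation result against a ground-truth mask, the Dice coefficient and the Jaccard index:

  # Minimal sketch: region-overlap measures for validating a binary
  # segmentation against a ground-truth mask. Illustrative only;
  # assumes binary masks of equal shape.
  import numpy as np

  def dice_coefficient(prediction, ground_truth):
      """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
      pred = prediction.astype(bool)
      gt = ground_truth.astype(bool)
      intersection = np.logical_and(pred, gt).sum()
      denom = pred.sum() + gt.sum()
      return 2.0 * intersection / denom if denom > 0 else 1.0

  def jaccard_index(prediction, ground_truth):
      """Jaccard index: |A ∩ B| / |A ∪ B|, in [0, 1]."""
      pred = prediction.astype(bool)
      gt = ground_truth.astype(bool)
      intersection = np.logical_and(pred, gt).sum()
      union = np.logical_or(pred, gt).sum()
      return intersection / union if union > 0 else 1.0

  if __name__ == "__main__":
      # Toy example: a square "segmentation" shifted by one column
      # relative to the ground truth.
      gt = np.zeros((8, 8), dtype=bool)
      gt[2:6, 2:6] = True
      pred = np.zeros((8, 8), dtype=bool)
      pred[2:6, 3:7] = True
      print(f"Dice:    {dice_coefficient(pred, gt):.3f}")   # 0.750
      print(f"Jaccard: {jaccard_index(pred, gt):.3f}")      # 0.600

Such simple region-overlap measures are of course only one facet of item 6; distance-based, statistical, and task-specific measures would complement them.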

One of the main goals of the seminar is to contribute towards a more seamless methodology of validation, evaluation, and performance characterization across the various levels, and thus also to help bridge the gap between CMI theory and the end user. As a complement to the scientific point of view, both the industrial and the clinical perspectives will also be presented, which will certainly provoke fruitful discussions beyond the ivory tower.

Documentation

All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the seminar's Collector, the organizers compile a report that summarizes the authors' contributions and adds an overall summary.

 

Download overview flyer (PDF).

Dagstuhl's Impact

Please inform us if a publication arises from your seminar. Such publications are listed separately under Dagstuhl's Impact and presented on the ground floor of the library.

Publications

There is furthermore the option of publishing a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.