

Dagstuhl-Seminar 13431

Real-World Visual Computing

(October 20 – 25, 2013)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/13431

Organizers

Contact



Program

Motivation

Over the last decade, the tremendous increase in the computational power of graphics hardware, in conjunction with equally improved rendering algorithms, has led to a situation where real-time visual realism is computationally attainable on almost any PC, provided the digital models to be rendered are sufficiently detailed and realistic.

With rapidly advancing rendering capabilities, the modeling process has become the limiting factor in realistic computer graphics applications. Following the traditional rendering paradigm, higher visual realism can be attained only by providing more detailed and accurate scene descriptions. However, building realistic digital scene descriptions consisting of 3D geometry and object texture, surface reflectance characteristics and scene illumination, character motion and emotion is a highly labor-intensive, tedious process.

The goal of this seminar is to find new ways to overcome the looming stalemate in realistic rendering caused by traditional, time-consuming modeling. One promising alternative is to create digital models from real-world examples, provided ways can be found to endow reconstructed models with the flexibility customary in computer graphics. The trend towards model capture from real-world examples is bolstered by new sensor technologies becoming available at mass-market prices, such as Microsoft’s Kinect and time-of-flight 2D depth imagers, or Lytro’s Light Field camera. Also, the pervasiveness of smartphones containing camera, GPS, and orientation sensors enables new paradigms for capturing real-world events based on a swarm of networked smartphones. With the advent of these exciting new acquisition technologies, investigating how to best integrate these novel capture modalities into the digital modeling pipeline, or how to alter traditional modeling to make optimal use of new capture technologies, has become a top priority in visual computing research.

To address these challenges, interdisciplinary approaches are called for that encompass computer graphics, computer vision, and visual media production. Seminar participants will include scientists and practitioners from all these fields, experts as well as young researchers, from academia and industry. The overall goal of the seminar is to form a lasting, interdisciplinary research community which jointly identifies and addresses the challenges in modeling from the real world and determines which research avenues will be the most promising ones to pursue over the course of the next years.


Summary

Dagstuhl seminar 13431 "Real-World Visual Computing" took place October 20-25, 2013. 45 researchers from North America, Asia, and Europe discussed the state of the art, contemporary challenges, and promising future research directions in the areas of acquiring, modeling, editing, and rendering complex natural scenes and events. The seminar encompassed an introductory and a closing session, nine scientific presentation sessions, two book organization sessions, as well as one special session on the Uncanny Valley problem. The seminar brought together junior and senior researchers from computer graphics, computer vision, 3D animation, and visual special effects, both from academia and industry, to address the challenges in real-world visual computing. Participants included international experts from Kyoto University, Tsinghua University, University of British Columbia, University of Alberta, University of North Carolina, University of Kentucky, Yale University, Technion - Haifa, Filmakademie Baden-Wuerttemberg, Hochschule der Medien Stuttgart, Disney Research Zurich, BBC Research & Development, Intel Visual Computing Institute, Nvidia Corp., Adobe Systems Inc., metaio GmbH, as well as many more research institutions and high-tech companies.

Motivating this seminar was the observation that digital models of real-world entities have become an essential part of innumerable computer graphics applications today. As graphics hardware and software capabilities keep increasing, however, so does the demand for ever more realistic and detailed models. Because the traditional, labor-intensive process of creating digital models by hand threatens to stall further progress in computer graphics, conventional manual modeling approaches are giving way to new approaches that aim at capturing complex digital models directly from the real world. The seminar picked up on recent trends in acquisition hardware for real-world events (e.g., Microsoft Kinect, Lytro light field camera, swarms of smartphone sensors, ...) as well as in visual computing applications (e.g., 3D movies, Streetview, digital mock-ups, free-viewpoint systems, ...). It brought together experts from academia and industry working on contemporary challenges in image-based techniques, geometry modeling, computational photography and videography, BRDF acquisition, 3D reconstruction, 3D video, motion and performance capture, etc. Collectively we fathomed the full potential of real world-based modeling approaches in computer graphics and visual computing.

Over the past decade, computer graphics has evolved into a mainstream area of computer science. Its economic impact and societal reach range from professional training simulators to interactive entertainment, from movie production to trauma therapy, from geographic information systems to Google Earth. As a result, expectations of computer graphics performance are rising continuously. In fact, thanks to the progress in graphics hardware as well as rendering algorithms, visual realism is today within easy reach of off-the-shelf PCs, laptops, and even handheld devices. With rapidly advancing rendering capabilities, however, in many application areas of computer graphics the modeling process is becoming the limiting factor. Higher visual realism can be achieved only from more detailed and accurate scene descriptions. So far, however, digitally modeling 3D geometry and object texture, surface reflectance characteristics and scene illumination, motion and emotion is a labor-intensive, tedious process performed by highly trained animation specialists. The cost of conventionally creating models of sufficient complexity to engage the full potential of modern GPUs increasingly threatens to stall progress in computer graphics.

To overcome this bottleneck, an increasing number of researchers and engineers worldwide are investigating alternative approaches to create realistic digital models directly from real-world objects and scenes: Google and Microsoft already digitize entire cities using panorama video footage, 3D scanners, and GPS; RTT AG in Munich creates highly realistic digital mock-ups for the car industry from CAD data and measured surface reflectance characteristics of car paint; at Disney Research, algorithms are being developed to create stereoscopic movies from monocular input; and BBC R&D has developed various 3D sports visualization methods based on analyzing live-broadcast footage.

In recent years, special effects in movies and computer games have reached a new level of complexity. In their aim to construct convincing virtual environments or even virtual actors, VFX companies rely more and more on techniques to capture models from the real world. Currently available reconstruction tools, however, are still in their infancy, and much time is still spent on manual post-processing and modeling. The research community has responded to this trend by investigating new image- and video-based scene reconstruction approaches that can capture richer and more complex models. Examples are performance capture methods that estimate more detailed shape and motion models of dynamic scenes than commercially available systems do. Similar methods for reconstructing entire sets are also currently being investigated, but many algorithmic problems remain to be solved.

The trend towards model capture from real-world examples is additionally bolstered by new sensor technologies becoming available at mass-market prices, such as Microsoft's Kinect, time-of-flight 2D depth imagers, or Lytro's Light Field camera. Also, the pervasiveness of smartphones containing a camera, GPS, and orientation sensors enables new paradigms for capturing real-world events based on a swarm of networked handheld devices. With the advent of these exciting novel acquisition technologies, investigating how to best integrate these new capture modalities into the computer graphics modeling pipeline, or how to alter traditional modeling to make optimal use of the new capture approaches, has become a top priority in visual computing research.

Researchers working on all of these problems from different directions came together at the seminar to share their experiences and discuss the scientific challenges. The questions discussed were both theoretical and practical in nature. The seminar participants discussed the contemporary scientific challenges in modeling from the real world and determined which research avenues are likely to be the most promising and interesting ones to pursue over the course of the next years.

Among the questions and issues addressed in the seminar were:
  • how to capitalize on new capture sensors (computational cameras, light field cameras, time-of-flight sensors, Kinect, omni-visual systems, ...),
  • how to capture different object/scene aspects (geometry, reflectance, texture, material/fabric, illumination, dynamics, ...),
  • how to digitally represent real-world objects and scenes (meshes, voxels, image-based representations, animation data, ...),
  • how to convincingly and intuitively manipulate real-world models (relighting, motion editing, constrained manipulation, sketch-based and example-based editing, ...),
  • how to realistically compose/augment new scenes and render them in real time (F/X, movie post-production, games, perceptual issues, ...), and
  • how to exploit the immense amount of community image and video data captured with handheld devices to build detailed models of the world (buildings, acting/dancing performances, sports events, fish tanks, ...).
Also, the challenges arising from the large data sets of real-world models were addressed. A special session on perceptual issues in animation (the Uncanny Valley problem) set out to identify the most important factors that still make computer animation appear unrealistic. Facial animation was identified as the single most important area, and some research directions for improvement were discussed.

The overall goal of the seminar, to form a lasting, interdisciplinary research community, was impressively underlined by the willingness of many seminar participants to work together on an edited book on the topic of the seminar. The book will be published by CRC Press. Completion of the manuscript is scheduled for August 2014.

Copyright Oliver Grau, Marcus A. Magnor, Olga Sorkine-Hornung, and Christian Theobalt

Participants
  • Philippe Bekaert (Hasselt University - Diepenbeek, BE) [dblp]
  • Tamy Boubekeur (Télécom ParisTech, FR) [dblp]
  • Edmond Boyer (INRIA - Grenoble, FR) [dblp]
  • Gabriel Brostow (University College London, GB) [dblp]
  • Darren Cosker (University of Bath, GB) [dblp]
  • Carsten Dachsbacher (KIT - Karlsruher Institut für Technologie, DE) [dblp]
  • Robert Dawes (BBC - London, GB) [dblp]
  • Jean-Michel Dischler (University of Strasbourg, FR) [dblp]
  • Bernd Eberhardt (Hochschule der Medien - Stuttgart, DE) [dblp]
  • Peter Eisert (Fraunhofer-Institut - Berlin, DE) [dblp]
  • Paolo Favaro (Universität Bern, CH) [dblp]
  • Dieter W. Fellner (TU Darmstadt, DE) [dblp]
  • Jan-Michael Frahm (University of North Carolina at Chapel Hill, US) [dblp]
  • Martin Fuchs (Universität Stuttgart, DE) [dblp]
  • Bastian Goldlücke (Universität Heidelberg, DE) [dblp]
  • Oliver Grau (Intel Visual Computing Institute - Saarbrücken, DE) [dblp]
  • Volker Helzle (Filmakademie Baden-Württemberg - Ludwigsburg, DE) [dblp]
  • Anna Hilsmann (HU Berlin, DE) [dblp]
  • Adrian Hilton (University of Surrey, GB) [dblp]
  • Martin Jagersand (University of Alberta - Edmonton, CA) [dblp]
  • Jan Kautz (University College London, GB) [dblp]
  • Oliver Klehm (MPI für Informatik - Saarbrücken, DE) [dblp]
  • Felix Klose (TU Braunschweig, DE) [dblp]
  • Andreas Kolb (Universität Siegen, DE) [dblp]
  • Hendrik P. A. Lensch (Universität Tübingen, DE) [dblp]
  • Christian Lipski (metaio GmbH - München, DE) [dblp]
  • Yebin Liu (Tsinghua University - Beijing, CN) [dblp]
  • Céline Loscos (Universite de Reims, FR) [dblp]
  • Marcus A. Magnor (TU Braunschweig, DE) [dblp]
  • Shohei Nobuhara (Kyoto University, JP) [dblp]
  • Sylvain Paris (Adobe Systems Inc. - Cambridge, US) [dblp]
  • Fabrizio Pece (University College London, GB) [dblp]
  • Kari Pulli (NVIDIA Corp. - Santa Clara, US) [dblp]
  • Bodo Rosenhahn (Leibniz Universität Hannover, DE) [dblp]
  • Holly E. Rushmeier (Yale University, US) [dblp]
  • Hans-Peter Seidel (MPI für Informatik - Saarbrücken, DE) [dblp]
  • Alla Sheffer (University of British Columbia - Vancouver, CA) [dblp]
  • Philipp Slusallek (DFKI - Saarbrücken, DE) [dblp]
  • Alexander Sorkine-Hornung (Disney Research - Zürich, CH) [dblp]
  • Olga Sorkine-Hornung (ETH Zürich, CH) [dblp]
  • Ayellet Tal (Technion - Haifa, IL) [dblp]
  • Christian Theobalt (MPI für Informatik - Saarbrücken, DE) [dblp]
  • Stefanie Wuhrer (Universität des Saarlandes, DE) [dblp]
  • Ruigang Yang (University of Kentucky - Lexington, US) [dblp]
  • Remo Ziegler (LiberoVision AG - Zürich, CH) [dblp]

Classification
  • computer graphics / computer vision

Keywords
  • Appearance-based modeling
  • Computational photography
  • Image-based modeling and rendering
  • Geometry processing
  • Motion capture