http://www.dagstuhl.de/13431

October 20 – 25, 2013, Dagstuhl Seminar 13431

Real-World Visual Computing

Organizers

Oliver Grau (Intel Visual Computing Institute – Saarbrücken, DE)
Marcus A. Magnor (TU Braunschweig, DE)
Olga Sorkine-Hornung (ETH Zürich, CH)
Christian Theobalt (MPI für Informatik – Saarbrücken, DE)

Documents

  • Dagstuhl Report, Volume 3, Issue 10
  • Aims & Scope
  • List of Participants
  • Shared Documents
  • Dagstuhl's Impact: Documents available
  • Dagstuhl Seminar Schedule [pdf]

Summary

Dagstuhl Seminar 13431 "Real-World Visual Computing" took place October 20–25, 2013. 45 researchers from North America, Asia, and Europe discussed the state of the art, contemporary challenges, and promising future research directions in acquiring, modeling, editing, and rendering complex natural scenes and events. The seminar encompassed an introductory and a closing session, nine scientific presentation sessions, two organizational sessions on a planned book, as well as one special session on the Uncanny Valley problem. It brought together junior and senior researchers from computer graphics, computer vision, 3D animation, and visual special effects, from both academia and industry, to address the challenges in real-world visual computing. Participants included international experts from Kyoto University, Tsinghua University, University of British Columbia, University of Alberta, University of North Carolina, University of Kentucky, Yale University, Technion - Haifa, Filmakademie Baden-Wuerttemberg, Hochschule der Medien Stuttgart, Disney Research Zurich, BBC Research & Development, Intel Visual Computing Institute, Nvidia Corp., Adobe Systems Inc., and metaio GmbH, as well as many more research institutions and high-tech companies.

Motivating this seminar was the observation that digital models of real-world entities have become an essential part of innumerable computer graphics applications today. As graphics hardware and software capabilities keep increasing, however, so does the demand for ever more realistically detailed models. Because the traditional, labor-intensive process of creating digital models by hand threatens to stall further progress in computer graphics, conventional manual modeling approaches are giving way to new approaches that aim at capturing complex digital models directly from the real world. The seminar picked up on recent trends in acquisition hardware for real-world events (e.g., Microsoft Kinect, Lytro light field camera, swarms of smartphone sensors, ...) as well as in visual computing applications (e.g., 3D movies, Streetview, digital mock-ups, free-viewpoint systems, ...). It brought together experts from academia and industry working on contemporary challenges in image-based techniques, geometry modeling, computational photography and videography, BRDF acquisition, 3D reconstruction, 3D video, motion and performance capture, etc. Collectively, we fathomed the full potential of real-world-based modeling approaches in computer graphics and visual computing.

Over the past decade, computer graphics has evolved into a mainstream area of computer science. Its economic impact and societal reach range from professional training simulators to interactive entertainment, from movie production to trauma therapy, from geographic information systems to Google Earth. As a result, expectations of computer graphics performance are rising continuously. In fact, thanks to progress in graphics hardware as well as rendering algorithms, visual realism is today within easy reach of off-the-shelf PCs, laptops, and even handheld devices. With rapidly advancing rendering capabilities, however, the modeling process is becoming the limiting factor in many application areas of computer graphics. Higher visual realism can be achieved only from more detailed and accurate scene descriptions. So far, however, digitally modeling 3D geometry and object texture, surface reflectance characteristics and scene illumination, motion and emotion is a labor-intensive, tedious process performed by highly trained animation specialists. The cost of conventionally creating models of sufficient complexity to engage the full potential of modern GPUs increasingly threatens to stall progress in computer graphics.

To overcome this bottleneck, an increasing number of researchers and engineers worldwide are investigating alternative approaches to create realistic digital models directly from real-world objects and scenes: Google and Microsoft already digitize entire cities using panorama video footage, 3D scanners, and GPS; RTT AG in Munich creates highly realistic digital mock-ups for the car industry from CAD data and measured surface reflectance characteristics of car paint; at Disney Research, algorithms are being developed to create stereoscopic movies from monocular input; and BBC R&D has developed various 3D sports visualization methods based on analyzing live-broadcast footage.

In recent years, special effects in movies and computer games have reached a new level of complexity. In their aim to construct convincing virtual environments or even virtual actors, VFX companies rely more and more on techniques that capture models from the real world. Currently available reconstruction tools, however, are still in their infancy, and a lot of time is still spent on manual post-processing and modeling. The research community has responded to this trend by investigating new image- and video-based scene reconstruction approaches that can capture richer and more complex models. Examples are performance capture methods that estimate more detailed shape and motion models of dynamic scenes than commercially available systems do. Similar methods for reconstructing entire sets are also being investigated, but many algorithmic problems remain to be solved.

The trend towards model capture from real-world examples is further bolstered by new sensor technologies becoming available at mass-market prices, such as Microsoft's Kinect, time-of-flight depth cameras, or Lytro's light field camera. The pervasiveness of smartphones containing a camera, GPS, and orientation sensors also allows for new paradigms for capturing real-world events with swarms of networked handheld devices. With the advent of these exciting new acquisition technologies, investigating how to best integrate these capture modalities into the computer graphics modeling pipeline, or how to alter traditional modeling to make optimal use of the new capture approaches, has become a top priority in visual computing research.

Researchers working on all of these problems from different directions came together at the seminar to share their experiences and discuss the scientific challenges. The questions discussed were both theoretical and practical in nature. The seminar participants discussed the contemporary scientific challenges in modeling from the real world and determined which research avenues are likely to be the most promising and interesting to pursue over the next years.

Among the questions and issues addressed at the seminar were: how to capitalize on new sensors for capture (computational cameras, light field cameras, time-of-flight sensors, Kinect, omni-visual systems, ...); how to capture different object/scene aspects (geometry, reflectance, texture, material/fabric, illumination, dynamics, ...); how to digitally represent real-world objects/scenes (meshes, voxels, image-based, animation data, ...); how to convincingly and intuitively manipulate real-world models (relighting, motion editing, constrained manipulation, sketch-based, example-based, ...); how to realistically compose, augment, and render new scenes in real time (F/X, movie post-production, games, perceptual issues, ...); and how to exploit the immense amount of community image and video data captured with handheld devices to build detailed models of the world (buildings, acting/dancing performances, sports events, fish tanks, ...). The challenges arising from the large data sets of real-world models were also addressed. A special session on perceptual issues in animation (the Uncanny Valley problem) set out to identify the most important factors that still make computer animation look unrealistic. Facial animation was identified as the single most important area, and some research directions for improvement were discussed.

The overall goal of the seminar, to form a lasting, interdisciplinary research community, was impressively underlined by the willingness of many seminar participants to work together on an edited book on the topic of the seminar. The book will be published by CRC Press. Completion of the manuscript is scheduled for August 2014.

License
  Creative Commons BY 3.0 Unported license
  Oliver Grau, Marcus A. Magnor, Olga Sorkine-Hornung, and Christian Theobalt

Classification

  • Computer Graphics / Computer Vision

Keywords

  • Appearance-based modeling
  • Computational Photography
  • Image-based modeling and rendering
  • Geometry processing
  • Motion capture

Book exhibition

Books from the participants of the current seminar

Book exhibition in the library, ground floor, during the seminar week.

Documentation

Each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented in the series Dagstuhl Reports. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.


Publications

Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us if a publication is published as a result of your seminar. These publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.

NSF young researcher support