Dagstuhl Seminar 19272

Real VR – Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays

(June 30 – July 3, 2019)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/19272


Motivation

Motivated by the advent of mass-market VR headsets, this Dagstuhl Seminar addresses the scientific and engineering challenges that must be overcome in order to experience omni-directional video recordings of the real world with the sense of stereoscopic, full-parallax immersion that today's head-mounted displays can provide.

Since the times of the Lumière brothers, the way we watch movies hasn’t fundamentally changed: Whether in movie theaters, on mobile devices, or on TV at home, we still experience movies as outside observers, watching the action through a “peephole” whose size is defined by the angular extent of the screen. As soon as we look away from the screen or turn around, we are immediately reminded that we are only “voyeurs”. With modern full-field-of-view, head-mounted and tracked VR displays, this outside-observer paradigm of visual entertainment is quickly giving way to a fully immersive experience. Now, the action fully encompasses the viewer, drawing us in much more than was possible before.

For the time being, however, current endeavors towards immersive visual entertainment are based almost entirely on 3D graphics-generated content, limiting application scenarios to purely digital, virtual worlds. The reason is that in order to provide stereo vision and ego-motion parallax, both essential for a genuine sense of visual immersion, the scene must be rendered in real time from arbitrary vantage points. While this is easily accomplished with 3D graphics via standard GPU rendering, it is not at all straightforward to do the same from conventional video footage of real-world events.

Another challenge is that consumer-grade VR headsets feature spatial resolutions that are still considerably below foveal acuity, yielding a pixelated, subpar immersive viewing experience. At the same time, the visual perception characteristics of our fovea are decidedly different from our peripheral vision (as regards spatial and temporal resolution, color, contrast, clutter disambiguation etc.). So far, computer graphics research has focused almost entirely on foveal perception, even though our peripheral vision accounts for 99% of our field of view. To optimize perceived visual quality of head-mounted immersive displays, and to make optimal use of available computational resources, advanced VR rendering algorithms need to simultaneously account for our foveal and peripheral vision characteristics.
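To see how much headroom peripheral vision offers, consider a minimal sketch assuming the common linear acuity-falloff model; the parameter values below are illustrative figures from the perception literature, not measurements from the seminar:

    def min_angle_of_resolution(ecc_deg, mar0_deg=1.0 / 60.0, e2_deg=2.5):
        # Linear acuity-falloff model: MAR(e) = MAR0 * (1 + e / E2).
        # MAR0 = 1 arcmin corresponds to 20/20 foveal acuity; E2 ~ 2.5 deg
        # (the eccentricity at which acuity has halved) is a typical
        # literature value, used here purely for illustration.
        return mar0_deg * (1.0 + ecc_deg / e2_deg)

    def tolerable_spacing_factor(ecc_deg):
        # How much sparser (per axis) a foveated renderer may shade at a
        # given eccentricity while staying below the acuity threshold.
        return min_angle_of_resolution(ecc_deg) / min_angle_of_resolution(0.0)

    for ecc in (0, 10, 30, 60):
        print(f"{ecc:2d} deg eccentricity: {tolerable_spacing_factor(ecc):4.1f}x shading-sample spacing")

Even under this simple model, tolerable sample spacing grows by an order of magnitude within the first 30 degrees of eccentricity (and the factor squares per unit area), which is precisely the budget that foveated rendering exploits.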

The aim of the seminar is to collectively fathom what needs to be done to facilitate truly immersive viewing of real-world recordings, and how to enhance the immersive viewing experience by taking perceptual aspects into account. The topic touches on research aspects from various fields, ranging from digital imaging, video processing, and computer vision to computer graphics, virtual reality, and visual perception. The seminar brings together scientists, engineers, and practitioners from industry and academia to form a lasting, interdisciplinary research community that sets out to jointly address the challenges of Real VR.

Copyright Marcus A. Magnor and Alexander Sorkine-Hornung

Summary

The Dagstuhl Seminar brought together 27 researchers and practitioners from academia and industry to discuss the state of the art, current challenges, and promising future research directions in Real VR. Real VR, as defined by the seminar participants, pursues two overarching goals: facilitating the import of real-world scenes into head-mounted displays (HMDs), and attaining perceptual realism in HMDs. The vision of Real VR is to enable experiencing movies, concerts, even live sports events in HMDs with the sense of immersion of really "being there", which is unattainable by today's technologies.

In the welcome and overview session, the participants collectively decided on the seminar program for the following days. In total, the seminar program included the overview session, three research presentation sessions, two breakout sessions including a demo track, two sessions for one-on-one discussions and individual exchange, one session for writing up the results, plus the summary and closing session.

To kick off the seminar, Alexander Sorkine-Hornung from Oculus VR presented the latest developments from an industrial perspective. He gave insights from the development of the just-released Oculus Quest and Oculus Rift S HMDs. In the research presentation sessions, 21 participants gave talks on their work. Participants also met in smaller groups in the breakout sessions to discuss the specific challenges of these fields in more detail. In due course, it became apparent that Real VR concerns research challenges in a number of different fields:

  • Capture
  • Reconstruction & modeling
  • Rendering & perception
  • Display technologies
  • Interaction & virtual avatars
  • Production & applications

Some exemplary results of the seminar on these topics were:

The persistent lack of consumer-market, i.e. affordable, mid- to high-resolution 360-degree video cameras to capture dynamic real-world scenes omnidirectionally still hampers research and development in Real VR. So far, research groups have largely built their own custom-designed omnidirectional video cameras. Prominent examples include the omnidirectional camera designs by the group of Philippe Bekaert from Hasselt University, Belgium, and the top-of-the-line Manifold camera presented by Brian Cabral from Facebook. Besides novel devices, simpler recording methods are also sought, e.g. by Tobias Bertel and Christian Richardt at Bath, in order to capture real-world content more casually.

On scene reconstruction and representation, the jury is still out on whether omnidirectional video should be considered to represent sparse light field data with dense depth/disparity as side information, or whether panoramic footage should (and could) be processed into full 3D geometry representations of the scene. As pointed out by Atanas Gotchev from Tampere University of Technology, Marco Volino from the University of Surrey, and Christian Richardt from the University of Bath, both forms of representation have their respective advantages and drawbacks, e.g. when aiming to augment the real scene with additional virtual content. Memory and real-time streaming bandwidth requirements are challenging in either case.
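A back-of-the-envelope estimate illustrates the scale of the bandwidth challenge; the resolution, frame rate, and bit depths below are assumptions chosen for illustration, not figures discussed at the seminar:

    # Uncompressed bandwidth of an "8K" equirectangular RGB-D video stream.
    # All figures are illustrative assumptions, not data from the seminar.
    width, height = 7680, 3840          # 8K x 4K equirectangular panorama
    fps = 60
    bytes_per_px = 3 + 2                # 8-bit RGB plus 16-bit depth/disparity

    bits_per_s = width * height * bytes_per_px * fps * 8
    print(f"{bits_per_s / 1e9:.1f} Gbit/s uncompressed")             # ~70.8 Gbit/s
    print(f"{bits_per_s / 100 / 1e9:.2f} Gbit/s at an optimistic 100:1 codec")

Even with an optimistic 100:1 compression ratio, a single such stream occupies roughly 0.7 Gbit/s, which explains why representation and streaming were recurring themes.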

The form of scene representation also determines which rendering approaches are viable. For 3D rendering, Dieter Schmalstieg from TU Graz presented his Shading Atlas Streaming approach to efficiently divide shading and rendering computation between server and client. To exploit visual perception characteristics in wide field-of-view HMDs, on the other hand, foveated rendering approaches, e.g. based on hardware ray tracing and accelerated machine learning, as presented by Anjul Patney from NVIDIA, have great potential. As shown by Qi Sun from Adobe, perceptual methods such as saccade-aware rendering can also be used to enable walking through huge virtual worlds without actually leaving the confines of one's living room. To render from dense depth-annotated 360-degree video, in contrast, advanced image-based warping methods and hole-filling approaches are needed, as was convincingly outlined by Tobias Bertel from the University of Bath.
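The core operation behind such depth-based warping can be sketched compactly. The following minimal forward warp uses a pinhole camera model rather than a full 360-degree panorama for brevity; K, R, and t denote the usual camera intrinsics and relative pose, and the nearest-neighbor splatting is a deliberate simplification:

    import numpy as np

    def forward_warp(rgb, depth, K, R, t):
        # rgb: (H, W, 3), depth: (H, W) metric depth along the optical axis,
        # K: (3, 3) intrinsics, R/t: rotation and translation taking points
        # from the source to the target camera frame.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T    # (3, N)

        pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)  # unproject
        pts = R @ pts + t.reshape(3, 1)                        # move to target frame
        proj = K @ pts                                         # reproject
        z = proj[2]

        ok = z > 1e-6                                          # keep points in front
        u2 = np.round(proj[0, ok] / z[ok]).astype(int)
        v2 = np.round(proj[1, ok] / z[ok]).astype(int)
        inb = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)

        out = np.zeros_like(rgb)
        out[v2[inb], u2[inb]] = rgb.reshape(-1, 3)[ok][inb]    # naive nearest splat
        return out

Pixels left unwritten in the output are precisely the disocclusion holes that the hole-filling approaches mentioned above must inpaint; production renderers additionally z-buffer and blend the splats.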

Gordon Wetzstein from Stanford University presented how future HMDs can become even more realistic by overcoming current limitations of near-eye displays, in particular the vergence-accommodation conflict. Along similar lines, Hansung Kim from the University of Surrey showed how spatial audio further enhances perceived VR realism.
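To see why the vergence-accommodation conflict arises: the eyes converge on the virtual object's depth while accommodating to the HMD's fixed focal plane. A small sketch quantifying the mismatch, assuming a typical 63 mm interpupillary distance and a 1.5 m focal plane (both illustrative values):

    import math

    def vergence_angle_deg(distance_m, ipd_m=0.063):
        # Angle between the two eyes' lines of sight when fixating an
        # object at distance_m; 63 mm interpupillary distance assumed.
        return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

    focal_plane_m = 1.5    # assumed fixed focal plane of the HMD optics
    for d in (0.3, 0.5, 1.5, 10.0):
        conflict = abs(1 / d - 1 / focal_plane_m)    # mismatch in diopters
        print(f"object at {d:4.1f} m: vergence {vergence_angle_deg(d):4.1f} deg, "
              f"conflict {conflict:.2f} D")

The mismatch grows quickly for near objects, which is why overcoming this limitation of current near-eye optics matters most for close-range interaction.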

Social interaction in the virtual world requires having digital doubles available. The elaborate steps needed to create convincing human avatars of real-world people were outlined by Feng Xu from Tsinghua University, Darren Cosker from the University of Bath, Christian Theobalt from MPII, and Peter Eisert from the Fraunhofer Institute in Berlin, covering the full range of human face, hand, and body capture, reconstruction, and modeling. To interact with objects in virtual space, on the other hand, Erroll Wood from Microsoft Cambridge described how hand motion and gestures can be reliably tracked and identified in real time by the upcoming HoloLens 2 device. Also based on real-time tracking, Li-Yi Wei from Adobe presented a system that enables presenters to augment a live presentation by interacting with the shown content in real time using mere hand gestures and body postures.

Regarding content production and applications, Christian Lipski from Apple presented the ARKit software framework developed for creating captivating augmented reality experiences. James Tompkin from Brown University presented work on multi-view camera editing of Real VR content during post-production. Johanna Pirker from TU Graz showed how virtual reality can be paired with human-computer interaction to enhance learning experiences in the physics classroom. Production aspects and cinematic VR experiences were also considered prominent drivers of contemporary Real VR research by other presenters, e.g. Marco Volino, Darren Cosker, Philippe Bekaert, Peter Eisert and Brian Cabral.

Hands-on experience with the new, tetherless Oculus Quest that Alexander Sorkine-Hornung brought along for the demonstration track made impressively clear how much free, unrestricted user motion, enabled by the HMD's pass-through view feature, extends the usability and acceptance of VR.

Finally, in the coming months, a number of seminar participants will compile an edited book volume on the state of the art in Real VR, which Springer has already agreed to publish as part of its well-known Lecture Notes in Computer Science (LNCS) survey series.

Copyright Marcus A. Magnor and Alexander Sorkine-Hornung

Participants
  • Philippe Bekaert (Hasselt University - Diepenbeek, BE) [dblp]
  • Tobias Bertel (University of Bath, GB) [dblp]
  • Brian Cabral (Facebook - Menlo Park, US) [dblp]
  • Susana Castillo Alejandre (TU Braunschweig, DE) [dblp]
  • Darren Cosker (University of Bath, GB) [dblp]
  • Douglas Cunningham (BTU Cottbus-Senftenberg, DE) [dblp]
  • Peter Eisert (Fraunhofer-Institut - Berlin, DE) [dblp]
  • Atanas Gotchev (Tampere University of Technology, FI)
  • Adrian Hilton (University of Surrey - Guildford, GB) [dblp]
  • Moritz Kappel (TU Braunschweig, DE) [dblp]
  • Hansung Kim (University of Surrey - Guildford, GB) [dblp]
  • Christian Lipski (Apple Computer Inc. - Cupertino, US) [dblp]
  • Marcus A. Magnor (TU Braunschweig, DE) [dblp]
  • Anjul Patney (Facebook - Redmond, US) [dblp]
  • Johanna Pirker (TU Graz, AT) [dblp]
  • Christian Richardt (University of Bath, GB) [dblp]
  • Dieter Schmalstieg (TU Graz, AT) [dblp]
  • Alexander Sorkine-Hornung (Oculus VR - Zürich, CH) [dblp]
  • Frank Steinicke (Universität Hamburg, DE) [dblp]
  • Qi Sun (Adobe Inc. - San José, US) [dblp]
  • Christian Theobalt (MPI für Informatik - Saarbrücken, DE) [dblp]
  • James Tompkin (Brown University - Providence, US) [dblp]
  • Marco Volino (University of Surrey, GB) [dblp]
  • Li-Yi Wei (Adobe Inc. - San José, US) [dblp]
  • Gordon Wetzstein (Stanford University, US) [dblp]
  • Erroll Wood (Microsoft Research - Cambridge, GB) [dblp]
  • Feng Xu (Tsinghua University Beijing, CN) [dblp]

Classification
  • computer graphics / computer vision

Keywords
  • Real-world Virtual Reality
  • Immersive Digital Reality
  • Perception in VR