June 30 – July 3, 2019, Dagstuhl Seminar 19272

Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays


Marcus A. Magnor (TU Braunschweig, DE)
Alexander Sorkine-Hornung (Oculus VR – Zürich, CH)

Dagstuhl Report, Volume 9, Issue 6

The Dagstuhl seminar brought together 27 researchers and practitioners from academia and industry to discuss the state of the art, current challenges, and promising future research directions in Real VR. Real VR, as defined by the seminar participants, pursues two overarching goals: facilitating the import of real-world scenes into head-mounted displays (HMDs), and attaining perceptual realism in HMDs. The vision of Real VR is to enable experiencing movies, concerts, even live sports events in HMDs with the immersive sense of really "being there" that is unattainable with today's technologies.

In the welcome and overview session, the participants collectively decided on the seminar program for the following days. In total, the seminar program included the overview session, three research presentation sessions, two breakout sessions including a demo track, two sessions for one-on-one discussions and individual exchange, one session for writing up the results, plus the summary and closing session.

To kick off the seminar, Alexander Sorkine-Hornung from Oculus VR presented the latest developments from an industrial perspective, sharing insights from the development of the just-released Oculus Quest and Oculus Rift S HMDs. In the research presentation sessions, 21 participants gave talks on their work. Participants also met in smaller groups in the breakout sessions to discuss the specific challenges of these fields in more detail. Over the course of the seminar, it became apparent that Real VR concerns research challenges in a number of different fields:

  • Capture
  • Reconstruction & modeling
  • Rendering & perception
  • Display technologies
  • Interaction & virtual avatars
  • Production & applications

Exemplary results of the seminar on these topics included the following:

The persistent lack of consumer-market, i.e. affordable, mid- to high-resolution 360-degree video cameras for capturing dynamic real-world scenes omnidirectionally still hampers research and development in Real VR. So far, research groups have largely built their own custom-designed omnidirectional video cameras. Prominent examples include the omnidirectional camera designs by the group of Philippe Bekaert from Hasselt University, Belgium, and the top-of-the-line Manifold camera presented by Brian Cabral from Facebook. Besides novel devices, simpler recording methods are also being sought, e.g. by Tobias Bertel and Christian Richardt at Bath, in order to capture real-world content more casually.

On scene reconstruction and representation, the jury is still out on whether omnidirectional video should be considered sparse light field data with dense depth/disparity as side information, or whether panoramic footage should (and could) be processed into full 3D geometry representations of the scene. As pointed out by Atanas Gotchev from TU Tampere, Marco Volino from the University of Surrey, and Christian Richardt from the University of Bath, both forms of representation have their respective advantages and drawbacks, e.g. when aiming to augment the real scene with additional virtual content. Memory requirements and real-time streaming bandwidth requirements are challenging in either case.

The form of scene representation also determines which rendering approaches are viable. For 3D rendering, Dieter Schmalstieg from Graz presented his Shading Atlas Streaming approach, which efficiently divides shading and rendering computation between server and client. To exploit the characteristics of visual perception in wide field-of-view HMDs, on the other hand, foveated rendering approaches, e.g. based on hardware ray tracing and accelerated machine learning, as presented by Anjul Patney from NVIDIA, hold great potential. As shown by Qi Sun from Adobe, perceptual methods such as saccade-aware rendering can also enable walking through huge virtual worlds without ever leaving the confines of one's living room. To render from dense depth-annotated 360-degree video, in contrast, advanced image-based warping and hole-filling approaches are needed, as was convincingly outlined by Tobias Bertel from the University of Bath.
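To loosely illustrate the core idea behind foveated rendering (this is a generic sketch, not any presenter's actual system), shading effort can be reduced with angular eccentricity from the tracked gaze point. The eccentricity thresholds and the `pixels_per_degree` default below are made-up round numbers for illustration only:

```python
import math

def shading_rate(pixel, gaze, pixels_per_degree=20.0):
    """Illustrative foveated-rendering heuristic: shade every pixel near
    the gaze point, fall back to coarser shading blocks in the periphery.
    All thresholds are hypothetical, not taken from any cited system."""
    dx = pixel[0] - gaze[0]
    dy = pixel[1] - gaze[1]
    # Approximate angular eccentricity from on-screen distance.
    eccentricity_deg = math.hypot(dx, dy) / pixels_per_degree
    if eccentricity_deg < 5.0:      # fovea: full resolution
        return 1
    elif eccentricity_deg < 15.0:   # near periphery: 2x2 shading blocks
        return 2
    else:                           # far periphery: 4x4 shading blocks
        return 4
```

A real implementation would map such per-region rates onto GPU variable-rate shading or a multi-resolution render-target layout rather than branching per pixel.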

Gordon Wetzstein from Stanford University presented how future HMDs will become even more realistic by overcoming current limitations of near-eye displays, in particular the vergence-accommodation conflict. Along similar lines, Hansung Kim from the University of Surrey showed how spatial audio enhances perceived VR realism even more.
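The vergence-accommodation conflict mentioned above can be quantified in diopters: the eyes accommodate to the HMD's fixed focal plane while converging on the rendered object's virtual distance. A minimal sketch of that mismatch (the example distances and the comfort threshold are illustrative rules of thumb, not figures from the talk):

```python
def va_conflict_diopters(display_focal_m, virtual_object_m):
    """Mismatch between accommodation distance (the HMD's fixed focal
    plane) and vergence distance (the rendered object), in diopters
    (1/m). Mismatches beyond roughly 0.5-1 D are commonly reported as
    uncomfortable; that range is a rule of thumb, not from the source."""
    return abs(1.0 / display_focal_m - 1.0 / virtual_object_m)

# e.g. a 2 m focal plane and an object rendered at 0.5 m:
# va_conflict_diopters(2.0, 0.5) -> |0.5 - 2.0| = 1.5 D
```

Varifocal or multifocal near-eye displays, of the kind discussed in this line of research, aim to drive this mismatch toward zero by moving the focal plane to match vergence.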

Social interaction in the virtual world requires having digital doubles available. The elaborate steps needed to create convincing human avatars from real-world people were outlined by Feng Xu from Tsinghua University, Darren Cosker from the University of Bath, Christian Theobalt from MPII, and Peter Eisert from TU Berlin, covering the full range of human face, hand, and body capture, reconstruction, and modeling. To interact with objects in virtual space, on the other hand, Erroll Wood from Microsoft Cambridge described how hand motion and gestures can be reliably tracked and identified in real time by the upcoming HoloLens 2 device. Also based on real-time tracking, Li-Yi Wei from Adobe presented a system that enables presenters to augment their live presentation by interacting with the shown content in real time using only hand gestures and body postures.

Regarding content production and applications, Christian Lipski from Apple presented the ARKit software framework developed for creating captivating augmented reality experiences. James Tompkin from Brown University presented work on multi-view camera editing of Real VR content during post-production. Johanna Pirker from TU Graz showed how virtual reality can be paired with human-computer interaction to enhance learning experiences in the physics classroom. Production aspects and cinematic VR experiences were also considered prominent drivers of contemporary Real VR research by other presenters, e.g. Marco Volino, Darren Cosker, Philippe Bekaert, Peter Eisert and Brian Cabral.

Hands-on experience with the new, tetherless Oculus Quest that Alexander Sorkine-Hornung brought along to the demonstration track made impressively clear how much free, unrestricted user motion, enabled by the HMD's pass-through view feature, extends the usability and acceptance of VR.

Finally, in the coming months, a number of seminar participants will compile an edited book volume on the state of the art in Real VR, which Springer has already agreed to publish as part of its well-known Lecture Notes in Computer Science (LNCS) survey series.

Summary text license
  Creative Commons BY 3.0 Unported license
  Marcus A. Magnor and Alexander Sorkine-Hornung


Classification
  • Computer Graphics / Computer Vision

Keywords
  • Real-world Virtual Reality
  • Immersive Digital Reality
  • Perception in VR


In the series Dagstuhl Reports each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.


