24.01.16 - 29.01.16, Seminar 16042
Eyewear Computing – Augmenting the Human with Head-mounted Wearable Assistants
This seminar description was published on our websites before the seminar and was used in the invitation to the seminar.
This Dagstuhl Seminar aims to bring together researchers interested in exploring how smart eyewear can fundamentally change the way we interact with each other as well as with computing systems on our body and in the world around us. Smart eyewear, such as head-mounted displays, head-worn eye trackers, and egocentric vision devices, has recently emerged as a promising platform for a range of research fields as well as commercial applications. The first generation of smart eyewear was expensive and too bulky to be worn regularly in daily life, and its limited battery lifetime severely restricted use to short durations. The latest generation, such as Google Glass and the recently announced Microsoft HoloLens, is lightweight, allows for long-term use, and looks more and more like normal glasses. These devices represent a new class of wearable systems with the potential to transform all aspects of our lives – the workplace, family life, education, and wellbeing.
This seminar will bring together researchers from a wide range of disciplines, including mobile and ubiquitous computing, mobile eye tracking, computer vision, human-computer interaction, optics, human vision and perception, privacy and security, usability, as well as systems research. We aim to discuss recent advances in eyewear computing and to explore the research challenges of this emerging technology, as well as to develop a research agenda of smart eyewear as a new paradigm for human augmentation. In particular, the seminar will focus on three core aspects of smart eyewear:
- Sensing and feedback: Smart eyewear poses significant challenges for sensing users, their interactions with other people, and the (visual) world around them. Which sensing modalities are the most interesting or promising for capturing user behavior from the user's point of view? Which other technologies, such as haptic feedback, audio, displays, and additional sensors, would be interesting to integrate into smart eyewear?
- Analysis: Smart eyewear enables long-term recordings in daily-life environments. How can we efficiently analyze the abundance of information collected from the user's perspective? What are the core analysis challenges, and which advances in computational methods are required to address them? Which aspects of human behavior are most important or promising to study? How can different modalities be analyzed and modelled jointly?
- Applications: While eyewear computing is a promising technology, it still lacks compelling use cases and application scenarios (think "killer applications"). Which applications are the most promising? What impact could smart eyewear have on other fields, such as psychology, sociology, or cognitive science? What are the social implications and privacy concerns of smart eyewear being used ever more widely?