- Smooth eye movement interaction using EOG glasses : article in Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI 2016), pp. 307-311 - Dhuliawala, Murtaza; Lee, Juyoung; Shimizu, Junichi; Kunze, Kai; Starner, Thad; Woo, Woontack; Bulling, Andreas - New York : ACM, 2016.
- Solar system : smooth pursuit interactions using EOG glasses : article in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct (UbiComp '16), pp. 369-372 - Shimizu, Junichi; Lee, Juyoung; Dhuliawala, Murtaza; Bulling, Andreas; Starner, Thad; Kunze, Kai; Woo, Woontack - New York : ACM, 2016.
- Solfege hand sign recognition with smart glasses : paper presented at the First International Workshop on Egocentric Perception, Interaction, and Computing (EPIC 2016), Amsterdam, Oct 9, 2016 - Sörös, Gábor; Giger, Julia; Song, Jie - Zürich : ETH Zürich, 2016. - 3 pp.
This Dagstuhl Seminar aims to bring together researchers interested in exploring how smart eyewear can fundamentally change the way we interact with each other as well as with computing systems on our body and in the world around us. Smart eyewear, such as head-mounted displays, head-worn eye trackers, and egocentric vision devices, has recently emerged as a promising platform for a range of research fields as well as commercial applications. The first generation of smart eyewear was too bulky and expensive to be worn regularly in daily life, and its battery lifetime restricted use to short periods of time. The latest generation, such as Google Glass and the recently announced Microsoft HoloLens, is lightweight, allows for long-term use, and looks more and more like ordinary glasses. These devices represent a new class of wearable systems with the potential to transform all aspects of our life: the workplace, family life, education, and wellbeing.
This seminar will bring together researchers from a wide range of disciplines, including mobile and ubiquitous computing, mobile eye tracking, computer vision, human-computer interaction, optics, human vision and perception, privacy and security, usability, and systems research. We aim to discuss recent advances in eyewear computing, to explore the research challenges of this emerging technology, and to develop a research agenda for smart eyewear as a new paradigm for human augmentation. In particular, the seminar will focus on three core aspects of smart eyewear:
- Sensing and feedback: Smart eyewear poses significant challenges for sensing users, their interactions with other people, and the (visual) world around them. Which sensing modalities are the most interesting or promising for capturing user behaviour from the user's point of view? Which other technologies, such as haptics, audio, displays, and additional sensors, would be worth integrating into smart eyewear?
- Analysis: Smart eyewear enables long-term recordings in daily-life environments. How can we efficiently analyze the abundance of information collected from the user's perspective? What are the core analysis challenges, and which advances in computational methods are required to address them? Which aspects of human behaviour are the most important or promising to study? How can different modalities be analyzed and modelled jointly?
- Applications: While eyewear computing is a promising technology, it still lacks use cases and application scenarios (think "killer applications"). Which applications are the most promising? What impact could smart eyewear have on other fields, such as psychology, sociology, or cognitive science? What are the social implications and privacy concerns of smart eyewear being used ever more widely?
Computing devices worn on the human body have a long history in academic and industrial research, most importantly in wearable computing, mobile eye tracking, and mobile mixed and augmented reality. In contrast to traditional systems, body-worn devices are always with the user and therefore have the potential to perceive the world and reason about it from the user's point of view. At the same time, because on-body computing is subject to ever-changing usage conditions, it also poses unique research challenges.
This is particularly true for devices worn on the head. As humans receive most of their sensory input via the head, it is a particularly interesting body location for simultaneous sensing and interaction as well as for cognitive assistance. Early egocentric vision devices were bulky and expensive, and their battery lifetime restricted use to short periods of time. Building on existing work in wearable computing, recent commercial egocentric vision devices and mobile eye trackers, such as Google Glass, PUPIL, and J!NS MEME, pave the way for a new generation of "smart eyewear" that is lightweight, low-power, convenient to use, and increasingly looks like ordinary glasses. This last characteristic is particularly important, as it makes these devices attractive to the general public and thereby holds the potential to provide a research and product platform of unprecedented scale, quality, and flexibility.
While hearing aids and mobile headsets have become widely accepted as head-worn devices, users in public spaces often consider novel head-attached sensors and devices uncomfortable, irritating, or stigmatising. Yet given advances in the following technologies, we believe eyewear computing will become a very prominent research field:
- Increases in storage capacity, battery capacity, and computational power allow users to run eyewear computers continuously for more than a day (charging overnight), gathering data that enables new types of life-logging applications.
- Miniaturization and integration of sensing, processing, and interaction functionality can enable a wide array of applications focusing on micro-interactions and intelligent assistance.
- Recent advances in real-life tracking of cognitive activities (e.g. reading, detection of fatigue or concentration) are additional enablers of new application fields towards a quantified self for the mind. Smart eyewear and the recognition of cognitive states go hand in hand, as most research in this field naturally requires sensors worn close to the eyes and head.
- Cognitive scientists and psychologists now have a better understanding of user behaviour and of what induces behaviour change. Smart eyewear could therefore help users achieve behaviour change towards their long-term goals.
Eyewear computing has the potential to fundamentally transform the way machines perceive and understand the world around us, and to assist humans in measurable and significant ways. The seminar brought together researchers from a wide range of computing disciplines, such as mobile and ubiquitous computing, head-mounted eye tracking, optics, computer vision, human vision and perception, privacy and security, usability, and systems research. Attendees discussed how smart eyewear can change existing research and how it may open up new research opportunities. For example, future research in this area could fundamentally change our understanding of how people interact with the world around them and how to augment these interactions, and may have a transformational impact on all spheres of life: the workplace, family life, education, and psychological well-being.
- Andreas Bulling (MPI für Informatik - Saarbrücken, DE) [dblp]
- Ozan Cakmakci (Google Inc. - Mountain View, US) [dblp]
- Rita Cucchiara (University of Modena, IT) [dblp]
- Steven K. Feiner (Columbia University, US) [dblp]
- Kristen Grauman (University of Texas - Austin, US) [dblp]
- Scott Greenwald (MIT - Cambridge, US) [dblp]
- Sabrina Hoppe (MPI für Informatik - Saarbrücken, DE) [dblp]
- Masahiko Inami (University of Tokyo, JP) [dblp]
- Shoya Ishimaru (Osaka Prefecture University, JP) [dblp]
- Moritz Kassner (Pupil Labs - Berlin, DE) [dblp]
- Koichi Kise (Osaka Prefecture University, JP) [dblp]
- Kiyoshi Kiyokawa (Osaka University, JP) [dblp]
- Kai Kunze (Keio University - Yokohama, JP) [dblp]
- Paul Lukowicz (DFKI - Kaiserslautern, DE) [dblp]
- Päivi Majaranta (University of Tampere, FI) [dblp]
- Walterio W. Mayol-Cuevas (University of Bristol, GB) [dblp]
- René Mayrhofer (Universität Linz, AT) [dblp]
- Masashi Nakatani (University of Tokyo, JP) [dblp]
- Will Patera (Pupil Labs - Berlin, DE) [dblp]
- Thies Pfeiffer (Universität Bielefeld, DE) [dblp]
- James M. Rehg (Georgia Institute of Technology - Atlanta, US) [dblp]
- Philipp Scholl (Universität Freiburg, DE) [dblp]
- Linda B. Smith (Indiana University - Bloomington, US) [dblp]
- Gábor Sörös (ETH Zürich, CH) [dblp]
- Thad Starner (Georgia Institute of Technology - Atlanta, US) [dblp]
- Julian Steil (MPI für Informatik - Saarbrücken, DE) [dblp]
- Yusuke Sugano (MPI für Informatik - Saarbrücken, DE) [dblp]
- Yuji Uema (J!NS - Tokyo, JP) [dblp]
- computer graphics / computer vision
- mobile computing
- society / human-computer interaction
- Augmented Human
- Cognition-Aware Computing
- Ubiquitous Computing
- Egocentric Computer Vision
- Eye Tracking