Press Review (German only)
How mobile phones are supposed to become personal assistants
Interview by Wolfgang Back of Computerclub Zwei with Prof. Dr. Bernt Schiele (TU Darmstadt); episode 67, 10 September 2007;
- Download: 32 kbit/s (~7 MB)
Press Release: How mobile phones become personal assistants (German only)
The ubiquity and miniaturization of mobile sensing devices, together with the growing computational power of mobile and handheld devices, have led to a surge of research, algorithms, and applications in this area. In the near future, mobile imaging technology will become ubiquitous, for example in camera phones, vision-enhanced handhelds, and wearable cameras. Beyond passive data display and transmission, future information technologies will provide smart mobile services capable of real-time analysis, purposeful selection, and interpretation of enormous quantities of sensed and retrieved data. Image understanding for mobile multimodal interfaces would enable new approaches to object recognition, context awareness, and augmented reality, aiming at application scenarios such as personal assistance, mobile work, and assistive services. The research challenges are not only efficiency, speed, and low-complexity algorithms, but also additional demands on the robustness of interpretation, owing to the mobility of the devices and the impact of dynamically changing, noisy conditions in urban environments. Since multi-sensor information analysis affords cueing and indexing into today's huge information spaces, the sensed data have to be processed intelligently to provide "in time" delivery of the requested relevant information. Knowledge about what needs to be attended to, when, and what to do in a meaningful sequence has to be applied intelligently, in correspondence with multi-sensor feedback.
Mobile interfaces relate these non-trivial system aspects to the dimensions of human presence, coupling interaction patterns in human behaviour with the system requirements of the multimodal interface. This will bind future mobile technologies to the emerging science of artificial cognitive systems, which directs research towards technology that can interpret information and act purposefully and autonomously towards achieving goals. This science already borrows insights from the bio-sciences and artificial intelligence, and provides a new framework with revolutionary insights into perception, understanding, interaction, learning, and knowledge representation. As we rely more and more on complex systems for mobile interaction with both real and virtual environments, we will need cognitive technologies that cope with this complexity in a focused and structured way.
The goal of this seminar is to provide an interdisciplinary forum for researchers from Ubiquitous Computing, Artificial Intelligence, Artificial Cognitive Systems, and Cognitive Sciences to communicate and discuss the scientific challenges in designing mobile cognitive technologies for urban scenarios.
The key questions to be discussed are:
- What are efficient methodologies to represent contextual knowledge from multi-sensor information?
- How is contextual knowledge applied in mobile interface technology?
- What design strategies are successful in mobile attentive interfaces?
- What is the contribution of machine learning to prediction-based mobile services?
- Will existing models of spatial cognition enhance mobile interface technologies?
- What kind of knowledge representation do we use to support dynamic user interaction?
- Which level of abstraction should be selected for filtering the incoming stream of multimodal information?
- What is the human style of interaction in typical urban scenarios involving mobile interfaces?
- What are the specific challenges for computer vision on mobile imagery and how can AI contribute to corresponding system solutions?
Given the highly interdisciplinary background of the theme, the structure of the program should, on the one hand, keep participants fully informed about the key scientific viewpoints through overview lectures. We envisage daily keynote talks (50 minutes), each followed by shorter presentations of individual, complementary views (30 minutes), together making up a lecture session on a specific theme. On the other hand, small discussion and working groups will exchange views on the presented concepts and work in an interactive way. Finally, poster sessions will be offered to reinforce the understanding of individual viewpoints in an even more personal way.
- Katrin Amlacher (Joanneum Research - Graz, AT)
- Michael Beigl (TU Braunschweig, DE) [dblp]
- Ulf Blanke (TU Darmstadt, DE) [dblp]
- Trevor Darrell (MIT - Cambridge, US) [dblp]
- Andrew Davison (Imperial College London, GB)
- Matthias Deller (DFKI - Kaiserslautern, DE)
- Jan-Olof Eklundh (KTH Royal Institute of Technology, SE)
- Dieter Fox (University of Washington - Seattle, US) [dblp]
- Christian Freksa (Universität Bremen, DE) [dblp]
- Tam Huynh (TU Darmstadt, DE)
- Gudrun Klinker (TU München, DE) [dblp]
- Ales Leonardis (University of Ljubljana, SI) [dblp]
- Vincent Lepetit (EPFL - Lausanne, CH)
- Rainer Malaka (Universität Bremen, DE)
- Nuria Oliver (Microsoft Research - Redmond, US) [dblp]
- Dusan Omercevic (University of Ljubljana, SI)
- Lucas Paletta (Joanneum Research - Graz, AT)
- Roland Perko (University of Ljubljana, SI)
- Bernt Schiele (TU Darmstadt, DE) [dblp]
- Dieter Schmalstieg (TU Graz, AT) [dblp]
- Ted Selker (MIT - Cambridge, US)
- Kristy Sim (KTH Royal Institute of Technology, SE)
- Ulrich Steinhoff (TU Darmstadt, DE)
- Lukasz Taborowski (Tele Atlas - Poland, PL)
- Kristof Van Laerhoven (TU Darmstadt, DE) [dblp]
- Linde Vande Velde (Tele Atlas - Ghent, BE)
- Christian Wallraven (MPI für biologische Kybernetik - Tübingen, DE)
- Diedrich Wolter (Universität Bremen, DE) [dblp]
- artificial intelligence / robotics
- computer graphics / computer vision
- mobile computing
- multimodal interfaces
- cognitive systems
- context awareness
- mobile imaging
- recognition of places