September 9–14, 2007, Dagstuhl Seminar 07371

Mobile Interfaces Meet Cognitive Technologies


Jan-Olof Eklundh (KTH Royal Institute of Technology, SE)
Ales Leonardis (University of Ljubljana, SI)
Lucas Paletta (Joanneum Research – Graz, AT)
Bernt Schiele (TU Darmstadt, DE)


Press Release

How Mobile Phones Become Personal Assistants ("Wie aus Handys persönliche Assistenten werden")
September 5, 2007 (German only)


The ubiquity and miniaturization of mobile sensing devices, together with the increasing computational power of mobile and handheld devices, have led to a surge of research, algorithms, and applications in this area. In the near future, mobile imaging technology will become ubiquitous in camera phones, vision-enhanced handhelds, and wearable cameras. Beyond passive data display and transmission, future information technologies will provide smart mobile services capable of real-time analysis, purposeful selection, and interpretation of enormous quantities of sensed and retrieved data. Image understanding for mobile multimodal interfaces would enable new approaches to object recognition, context awareness, and augmented reality, aiming at application scenarios in personal assistance, mobile work, and assistive services. The research challenges are not only efficiency, speed, and low-complexity algorithms, but also additional demands on the robustness of interpretation, arising from the mobility of the devices and the dynamically changing and noisy conditions of urban environments. Since multi-sensor information analysis requires cueing and indexing into today's huge information spaces, the sensed data have to be processed intelligently to deliver the requested relevant information in time. Knowledge about what needs to be attended to, and when, and what to do in a meaningful sequence, must be applied intelligently, in correspondence with multi-sensor feedback.

Mobile interfaces relate these non-trivial system aspects to the dimensions of human presence, coupling interaction patterns in human behaviour with the system requirements of the multimodal interface. This will bind future mobile technologies to the emerging science of artificial cognitive systems, which directs research towards technology that can interpret information and act purposefully and autonomously towards achieving goals. This science already borrows insights from the biosciences and artificial intelligence, and provides a new framework with revolutionary insights into perception, understanding, interaction, learning, and knowledge representation. As we rely more and more on complex systems for mobile interaction with both real and virtual environments, we will need cognitive technologies to cope with this complexity in a focused and structured way.

Goals and Content of the Seminar

The goal of this seminar is to provide an interdisciplinary forum for researchers from Ubiquitous Computing, Artificial Intelligence, Artificial Cognitive Systems, and Cognitive Sciences to communicate and discuss the scientific challenges in designing mobile cognitive technologies for urban scenarios.

The key questions to be discussed are:

  • What are efficient methodologies to represent contextual knowledge from multi-sensor information?
  • How is contextual knowledge applied in mobile interface technology?
  • What design strategies are successful in mobile attentive interfaces?
  • What is the contribution of machine learning to prediction-based mobile services?
  • Will existing models of spatial cognition enhance mobile interface technologies?
  • What kind of knowledge representation do we use to support dynamic user interaction?
  • Which level of abstraction should be selected for the filtering of the incoming stream of multimodal information?
  • What is the human style of interaction in typical urban scenarios involving mobile interfaces?
  • What are the specific challenges for computer vision on mobile imagery and how can AI contribute to corresponding system solutions?

Given the highly interdisciplinary background of the theme, the structure of the program should, on the one hand, enable the participants to become fully informed about the key scientific viewpoints via overview lectures. We envisage daily keynote talks (50 minutes), each followed by shorter presentations of individual, complementary views (30 minutes), together making up a lecture session on a specific theme. On the other hand, small discussion and working groups will exchange views on the presented concepts and work in an interactive way. Finally, poster sessions will be offered to reinforce the understanding of the individual viewpoints in an even more personal way.


Classification

  • Artificial Intelligence / Robotics
  • Computer Graphics / Computer Vision
  • Mobile Computing


Keywords

  • Multimodal interfaces
  • Cognitive systems
  • Context awareness
  • Situatedness
  • Mobile imaging
  • Localisation
  • Recognition of places


All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the series Dagstuhl Reports. Together with the collector of the seminar, the organizers compile a report that summarizes the authors' contributions and adds an overall summary.


Download the overview flyer (PDF).

Dagstuhl's Impact

Please let us know if a publication arises from your seminar. Such publications are listed separately in the Dagstuhl's Impact section and displayed on the ground floor of the library.


In addition, a comprehensive collection of peer-reviewed papers can be published in the series Dagstuhl Follow-Ups.