Dagstuhl Seminar 12491

Interpreting Observed Action

(Dec 02 – Dec 07, 2012)


Permalink
Please use the following short URL when linking to this page: https://www.dagstuhl.de/12491


Summary

For many applications of smart embedded software systems, the system should sense the footprint of a human or humans acting in the system's environment, interpret the sensor data in terms of some semantic model of what the human is doing, and respond appropriately in real time. Examples of such applications include smart homes, human-machine or human-robot interaction, assistance, surveillance, and tutoring systems; given the current trend towards ambient intelligence, ubiquitous computing, and sensor networks, the number of systems in these categories can certainly be expected to rise in the next ten years or so.

The problem shares many features with classical object recognition and scene reconstruction from sensor data in terms of a static scene model. Interpreting sensor data from the environment in semantic terms has a long tradition in AI - arguably, it was one of the original core problems put forth by AI's founding fathers. However, the problem of interpreting observed action in the sense of the seminar differs in some respects from what state-of-the-art AI or engineering approaches can tackle routinely:

  • Events in space-time rather than static objects need to be characterized. This necessarily involves some representation and model of temporal and spatial data (e.g., the human put a saucepan on the cooker and then turned the cooker on; a minimal representation sketch follows this list).
  • Real-time processing of the sensor data or percepts is required to keep track of what is happening. In fact, "real time" here is the pace of human action, i.e., relatively slow compared to CPU clock ticks. However, given a potentially rich stream of sensor data and a potentially large body of background knowledge, even this pace is demanding for the respective knowledge processing methods.
  • Willed human action, be it planned, intended, or customary, is the domain of interpretation. In Knowledge Representation, this is a relatively unexplored area, compared to, say, upper ontologies of household items, red wine, or pizza varieties.
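To make the representational point in the first item concrete, here is a minimal sketch in Python. It is purely illustrative and not a formalism discussed at the seminar: the Event fields, the Allen-style "before" relation, and the "start_cooking" label are assumptions chosen only to encode the saucepan example from the list above.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Event:
        """A primitive action event with a semantic label and a time interval."""
        actor: str
        action: str
        obj: str
        start: float  # seconds since start of observation
        end: float

    def before(a: Event, b: Event) -> bool:
        """Allen's 'before' relation: a ends strictly before b starts."""
        return a.end < b.start

    # The saucepan example from the text, encoded as two observed events.
    put_pan = Event("human", "put_on", "saucepan->cooker", start=2.0, end=5.0)
    switch_on = Event("human", "turn_on", "cooker", start=6.0, end=7.0)

    # A toy interpretation rule: these two events, in this temporal order,
    # are read as the beginning of a 'cooking' activity.
    if before(put_pan, switch_on):
        print("interpretation: start_cooking")

Even this toy encoding shows why the problem goes beyond static object recognition: the interpretation depends on the temporal relation between events, not on any single snapshot of the scene.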

Contemplating the three words that make up the title of this seminar ("interpret", "observe", and "act"), it becomes clear that there are a number of issues that need to be addressed in this context. Firstly, any interpretation is to some degree subjective and uses a particular repertoire of basic actions in its language. Secondly, an observation uses a particular type of sensor data and is often not possible without simultaneous interpretation. Thirdly, there are issues around what actions are to be considered:

  • Are only willed and physical actions to be considered?
  • Is avoidance an action?
  • What constitutes an action in the first place?
  • When does a particular action end?
  • Is an unsuccessful action an action?

In summary, what precisely is observed action interpretation, and what would constitute benchmark data for it?

To find an answer to this question, the participants of the seminar immersed themselves in a variety of activities: technical talks, working groups, plenary discussions, and a number of informal discussions. In the rest of this report, some of these activities and their results are discussed in more detail.


Participants
  • Sven Albrecht (Universität Osnabrück, DE)
  • Mehul Bhatt (Universität Bremen, DE) [dblp]
  • Susanne Biundo-Stephan (Universität Ulm, DE)
  • Martin V. Butz (Universität Tübingen, DE) [dblp]
  • Amedeo Cesta (CNR - Rome, IT) [dblp]
  • Krishna Sandeep Reddy Dubba (University of Leeds, GB) [dblp]
  • Tom Duckett (University of Lincoln, GB)
  • Frank Dylla (Universität Bremen, DE)
  • Simone Frintrop (Universität Bonn, DE)
  • Martin A. Giese (Universitätsklinikum Tübingen, DE)
  • Hans Werner Guesgen (Massey University, NZ) [dblp]
  • Verena V. Hafner (HU Berlin, DE) [dblp]
  • Joachim Hertzberg (Universität Osnabrück, DE) [dblp]
  • Alexandra Kirsch (Universität Tübingen, DE) [dblp]
  • Geert-Jan M. Kruijff (DFKI - Saarbrücken, DE)
  • Stephen R. Marsland (Massey University, NZ)
  • Bernd Neumann (Universität Hamburg, DE) [dblp]
  • Heiko Neumann (Universität Ulm, DE)
  • Lucas Paletta (Joanneum Research - Graz, AT)
  • Bernd Schattenberg (Universität Ulm, DE)
  • Aryana Tavanai (University of Leeds, GB) [dblp]
  • Sabine Timpf (Universität Augsburg, DE) [dblp]
  • Thomas Wiemann (Universität Osnabrück, DE) [dblp]
  • Diedrich Wolter (Universität Bremen, DE) [dblp]

Classification
  • artificial intelligence / robotics
  • society / HCI

Keywords
  • action
  • knowledge representation
  • plan recognition
  • symbol grounding
  • perception
  • behavior interpretation