Dagstuhl Seminar 12491

Interpreting Observed Action

(Dec 02 – Dec 07, 2012)


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/12491

Summary

For many applications of smart embedded software systems, the system should sense the footprint of a human or humans acting in its environment, interpret the sensor data in terms of a semantic model of what the human is doing, and respond appropriately in real time. Examples of such applications include smart homes, human-machine or human-robot interaction, assistance, surveillance, and tutoring systems; given the current trend towards ambient intelligence, ubiquitous computing, and sensor networks, the number of systems in these categories can certainly be expected to rise in the next ten years or so.

The problem shares many features with classical object recognition and scene reconstruction, i.e., interpreting sensor data in terms of a static scene model. Interpreting sensor data from the environment in semantic terms has a long tradition in AI - arguably, it was one of the original core problems put forth by AI's founding fathers. However, the problem of interpreting observed action in the sense of this seminar differs in several respects from what state-of-the-art AI or engineering approaches can handle routinely:

  • Events in space-time rather than static objects need to be characterized. This necessarily involves some representation and model of temporal and spatial data (e.g., the human put a saucepan on the cooker and then turned the cooker on); a minimal representational sketch of such an event sequence is given after this list.
  • Real-time processing of the sensor data or percepts is required to keep track of what is happening. In fact, "real time" here is the pace of human action, i.e., relatively slow compared to CPU clock ticks. However, given a potentially rich stream of sensor data and a potentially large body of background knowledge, even this pace is demanding for the respective knowledge processing methods.
  • Willed human action, be it planned, intended, or customary, is the domain of interpretation. In Knowledge Representation, this is a relatively unexplored area, compared to, say, upper ontologies of household items, red wine, or pizza varieties.
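
To make the first of these points concrete, the following sketch shows one minimal way such a spatio-temporal event sequence could be encoded, using a simple interval-based representation in Python. The names (Event, before) and the time values are illustrative assumptions only and do not correspond to any particular system discussed at the seminar.

    from dataclasses import dataclass

    # Hypothetical minimal representation of an event located in time;
    # spatial grounding (e.g., object poses) is omitted for brevity.
    @dataclass
    class Event:
        action: str      # e.g., "put" or "turn_on"
        agent: str       # who acted
        objects: tuple   # objects involved
        start: float     # start time in seconds
        end: float       # end time in seconds

    def before(a: Event, b: Event) -> bool:
        """Allen's 'before' relation: a ends strictly before b starts."""
        return a.end < b.start

    # The saucepan example from the text as a two-event sequence.
    put = Event("put", "human", ("saucepan", "cooker"), start=10.0, end=13.5)
    turn_on = Event("turn_on", "human", ("cooker",), start=14.0, end=15.0)
    assert before(put, turn_on)  # put the saucepan on, then turned the cooker on

Qualitative relations of this kind (Allen's interval algebra for time, analogous calculi for space) are one common way to connect quantitative sensor timestamps to the semantic level at which observed action is interpreted.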

Contemplating the three words that make up the title of this seminar ("interpret", "observe", and "act"), it becomes clear that a number of issues need to be addressed in this context. Firstly, any interpretation is to some degree subjective and uses a particular repertoire of basic actions in its language. Secondly, an observation relies on a particular type of sensor data and is often not possible without simultaneous interpretation. Thirdly, there are issues around which actions are to be considered:

  • Are only willed and physical actions to be considered?
  • Is avoidance an action?
  • What constitutes an action in the first place?
  • When does a particular action end?
  • Is an unsuccessful action an action?

In summary, what precisely is observed action interpretation and what would be benchmark data for it?

To find an answer to this question, the participants of the seminar immersed themselves in a variety of activities: technical talks, working groups, plenary discussions, and a number of informal discussions. In the rest of this report, some of these activities and their results are discussed in more detail.


Participants
  • Sven Albrecht (Universität Osnabrück, DE)
  • Mehul Bhatt (Universität Bremen, DE) [dblp]
  • Susanne Biundo-Stephan (Universität Ulm, DE)
  • Martin V. Butz (Universität Tübingen, DE) [dblp]
  • Amedeo Cesta (CNR - Rome, IT) [dblp]
  • Krishna Sandeep Reddy Dubba (University of Leeds, GB) [dblp]
  • Tom Duckett (University of Lincoln, GB)
  • Frank Dylla (Universität Bremen, DE)
  • Simone Frintrop (Universität Bonn, DE)
  • Martin A. Giese (Universitätsklinikum Tübingen, DE)
  • Hans Werner Guesgen (Massey University, NZ) [dblp]
  • Verena V. Hafner (HU Berlin, DE) [dblp]
  • Joachim Hertzberg (Universität Osnabrück, DE) [dblp]
  • Alexandra Kirsch (Universität Tübingen, DE) [dblp]
  • Geert-Jan M. Kruijff (DFKI - Saarbrücken, DE)
  • Stephen R. Marsland (Massey University, NZ)
  • Bernd Neumann (Universität Hamburg, DE) [dblp]
  • Heiko Neumann (Universität Ulm, DE)
  • Lucas Paletta (Joanneum Research - Graz, AT)
  • Bernd Schattenberg (Universität Ulm, DE)
  • Aryana Tavanai (University of Leeds, GB) [dblp]
  • Sabine Timpf (Universität Augsburg, DE) [dblp]
  • Thomas Wiemann (Universität Osnabrück, DE) [dblp]
  • Diedrich Wolter (Universität Bremen, DE) [dblp]

Classification
  • artificial intelligence / robotics
  • society / HCI

Keywords
  • action
  • knowledge representation
  • plan recognition
  • symbol grounding
  • perception
  • behavior interpretation