Dagstuhl Seminar 09431

From Form to Function

( Oct 18 – Oct 23, 2009 )

Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/09431

Organizers



Summary

We are on the verge of a new era in which technical systems expand from typical industrial applications with pre-programmed, hard-wired behaviors into everyday situations where they have to deal with complex and unpredictable events. The increasing demand for robotic applications in dynamic and unstructured environments motivates the need for novel robot sensing and adaptable robot grasping abilities. The robot needs to cope with the wide variety of tasks and objects encountered in open environments. Since humans seem to have no difficulty estimating the rough function of an object and planning its grasping solely from visual input, robot vision plays a key role in the perception side of a manipulation system.

Our hypothesis is that the form and shape of an object are a key factor in deciding which actions can be performed with it. Psychophysical studies with humans confirm that the affordance of grasping includes information about object orientation, size, shape/form, and specific grasping points. Affordances are discussed as one ingredient for closing the loop from perception to potential actions.

The aim of this seminar is to bring together researchers from different fields around the goal of advancing our understanding of human and machine perception of form and function. We set out to explore findings from different disciplines in order to build more comprehensive and complete models and methods. Neuroscientists and experimental psychologists will provide initial conceptual findings on the selective nature of sensory processing and on how action-relevant information is extracted. Cognitive scientists will tackle the modeling of knowledge about object function and task relations. Computer vision scientists are challenged to develop procedures for context-driven attention and the targeted detection of relevant form features. All participants will profit from the ideas and findings of the related disciplines and contribute towards a comprehensive understanding of the brain and computing processes that extract object function from form features.

  • Computer vision and perception need to detect relevant features and structures to build up the shape/form of objects, determine their orientation and size, and define good grasping points. Appearance has been used successfully for recognizing objects, and codebooks of features assist in object categorization. Our goal is to move the data abstraction higher and to derive object function from the perception of edges, contours, surface properties, and other structural features, which remains far less explored (a minimal contour-based sketch of grasp-point selection follows this list). A main task of the workshop is to bring the key experts together to discuss how to advance the state of the art.
  • Attention is the mechanism that enables fast, real-time responses in humans. Studies with humans show that grasping can be performed independently of object recognition. Hence it is timely to investigate how this direct link, or affordance, can be modeled and replicated for exploitation in robotic and cognitive systems.
  • Prediction and the integration of bottom-up and top-down data flow are often discussed. Primate vision has largely been studied by passively recording neural responses while patterns are observed (the bottom-up stream). Only recently has the importance of top-down triggers been shown more clearly. For example, 85% of the axons in visual cortex do not come from the retina but from other brain areas, including what are thought to be higher brain regions. Recent neuroscientific findings suggest that prediction is a primary function of these connections. This indicates that the human brain uses predictions to focus attention and to exploit task and context knowledge, and hence to narrow an otherwise too wide space of inputs. For example, a prediction indicates how a shape will be perceived when a certain action is executed on the target object. The task will be to identify what the relevant information is and how it can be computed in a machine vision system (a toy sketch of task-weighted attention also follows this list).
  • Finally, humans seem to build up extensive knowledge about the typical shapes and forms of whatever is seen in daily life. Seeing a partly occluded object often immediately triggers the respective model to complete the shape. In grasping, likewise, the grasping point on the back side of an object is typically invisible, yet it is inferred from a symmetry assumption (see the symmetry-completion sketch below). The search for objects (say, cups) focuses on horizontal surfaces and exploits knowledge about the object category to look in a kitchen rather than in the garage. Recent work has created first databases and ontologies to describe such knowledge, yet it remains open to fuse these developments with the results listed above.
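
As an illustration of the first point, the following is a minimal, hypothetical sketch (not a method from the seminar) of selecting grasp-point candidates purely from object form: pairs of points on a 2D contour whose outward normals are roughly opposed, i.e. antipodal contacts a parallel-jaw gripper could close on. The contour representation, thresholds, and function names are assumptions.

```python
import numpy as np

def contour_normals(contour):
    """Unit normals along an ordered (N, 2) contour, from finite differences."""
    tangents = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    normals = np.stack([tangents[:, 1], -tangents[:, 0]], axis=1)
    return normals / (np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9)

def antipodal_grasp_pairs(contour, max_width, angle_tol=0.2):
    """Score contour point pairs (i, j) as grasp candidates: their normals
    should be roughly opposed and aligned with the line joining the points."""
    normals = contour_normals(contour)
    candidates = []
    for i in range(len(contour)):
        for j in range(i + 1, len(contour)):
            axis = contour[j] - contour[i]
            width = np.linalg.norm(axis)
            if width == 0 or width > max_width:   # too wide for the gripper
                continue
            axis = axis / width
            # Antipodal condition: the normal at i opposes the i->j axis and
            # the normal at j points along it (insensitive to contour order).
            score = (-(normals[i] @ axis)) * (normals[j] @ axis)
            if score > 1.0 - angle_tol:
                candidates.append((i, j, float(score)))
    return sorted(candidates, key=lambda c: -c[2])
```

A real system would of course add surface properties, 3D structure, and learned priors; the point is only that contour form alone already constrains where a grasp can go.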
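
For the prediction point, here is a toy sketch of how task knowledge can act top-down on bottom-up feature maps, in the spirit of guided-search or saliency models; the feature names and weights are purely illustrative assumptions, not a model discussed at the seminar.

```python
import numpy as np

def top_down_saliency(feature_maps, task_weights):
    """Combine normalized bottom-up feature maps, each gained by a
    task-dependent weight (the top-down prediction of what is relevant)."""
    saliency = None
    for name, fmap in feature_maps.items():
        norm = (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-9)
        weighted = task_weights.get(name, 0.0) * norm
        saliency = weighted if saliency is None else saliency + weighted
    return saliency

def focus_of_attention(saliency):
    """Row/column of the most salient location, i.e. where to look next."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)

# Illustrative use: when the task is "find something elongated and graspable",
# edge/orientation energy is weighted up and plain intensity is weighted down.
# maps = {"edges": edge_map, "intensity": gray_image}
# fixation = focus_of_attention(top_down_saliency(maps, {"edges": 1.0}))
```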
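
And for the symmetry assumption in the last point, a minimal sketch of completing a partially observed point cloud by mirroring it about an assumed symmetry plane, so that a grasp point on the invisible back side can be hypothesized; the plane is supplied by the caller here rather than estimated, and all names are assumptions.

```python
import numpy as np

def mirror_about_plane(points, plane_point, plane_normal):
    """Reflect an (N, 3) point cloud about the plane given by a point and normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = (points - plane_point) @ n           # signed distance to plane
    return points - 2.0 * signed_dist[:, None] * n     # mirrored points

def complete_by_symmetry(visible_points, plane_point, plane_normal):
    """Visible points plus their mirror image: a symmetry-completed shape from
    which back-side grasp contacts can be proposed."""
    mirrored = mirror_about_plane(visible_points, plane_point, plane_normal)
    return np.vstack([visible_points, mirrored])
```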

In summary, the seminar brought together scientists from disciplines such as computer science, neuroscience, robotics, developmental psychology, and cognitive science to further our knowledge of how the perception of form relates to object function and of how intention and task knowledge (and hence function) aid in the recognition of relevant objects.


Participants
  • Thomas Barkowsky (Universität Bremen, DE)
  • Leon Bodenhagen (University of Southern Denmark - Odense, DK)
  • Eli Brenner (Free University of Amsterdam, NL)
  • Darius Burschka (TU München, DE) [dblp]
  • Barbara Caputo (IDIAP Research Institute - Martigny, CH)
  • Anthony Cohn (University of Leeds, GB) [dblp]
  • Heiner Deubel (LMU München, DE)
  • Jorge Manuel Miranda Dias (University of Coimbra, PT)
  • Sven Dickinson (University of Toronto, CA)
  • Jannik Fritsch (Honda Research Europe - Offenbach, DE)
  • Frank Guerin (University of Aberdeen, GB) [dblp]
  • Gregory Hager (Johns Hopkins University - Baltimore, US)
  • Danica Kragic (KTH Royal Institute of Technology - Stockholm, SE) [dblp]
  • Norbert Krüger (University of Southern Denmark - Odense, DK) [dblp]
  • Elmar Mair (TU München, DE) [dblp]
  • Chavdar Papazov (TU München, DE)
  • Justus Piater (University of Liège, BE) [dblp]
  • Aaron Sloman (University of Birmingham, GB) [dblp]
  • Louise Stark (University of the Pacific - Stockton, US)
  • Melanie Sutton (University of West Florida - Pensacola, US)
  • Marko Tscherepanow (Universität Bielefeld, DE)
  • John K. Tsotsos (York University - Toronto, CA) [dblp]
  • Markus Vincze (TU Wien, AT) [dblp]
  • Sven Wachsmuth (Universität Bielefeld, DE)
  • Florentin Wörgötter (Universität Göttingen, DE) [dblp]
  • Jeremy L. Wyatt (University of Birmingham, GB) [dblp]
  • Michael Zillich (TU Wien, AT) [dblp]

Related Seminars
  • Dagstuhl Seminar 06231: Towards Affordance-Based Robot Control (2006-06-05 - 2006-06-09) (Details)

Classification
  • artificial intelligence
  • robotics
  • computer graphics
  • computer vision
  • semantics
  • specification
  • formal methods

Keywords
  • recognition of structure
  • form and shape
  • affordances
  • perception action loop
  • grasping