04.10.15 - 09.10.15, Seminar 15411

Multimodal Manipulation Under Uncertainty

This seminar description was published on our web pages before the seminar and was used in the invitation to the seminar.

Motivation

While robots have been used for decades to perform highly specialized tasks in engineered environments, robotic manipulation is still crude and clumsy in settings not specifically designed for robots. There is a huge gap between human and robot capabilities, including actuation, perception, and reasoning. However, recent developments such as low-cost manipulators and sensing technologies place the field in a good position to make progress on robot manipulation in unstructured environments. Various techniques are emerging for computing or inferring grasp configurations based on object identity, shape, or appearance, using simple grippers and robot hands.

Beyond grasping, a key ingredient of sophisticated manipulation is the management of state information and its uncertainty. One approach to handling uncertainty is to develop grasping and manipulation skills that are robust to environmental variation. Another approach is to develop methods of interacting with the environment in order to gain task-relevant information, for example, by touching, pushing, changing viewpoint, etc. Managing state information and uncertainty will require a tight combination of perception and planning. When the sensor evidence is unambiguous, the robot needs to be able to recognize that and perform the task accurately and efficiently. When greater uncertainty is present, the robot needs to adjust its actions so that they will succeed in the worst case or it needs to gain additional information in order to improve its situation. Different sensing modalities as well as world models can often be combined to good effect due to their complementary properties.
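The complementary-modality idea above can be made concrete with a minimal sketch: fusing two independent Gaussian estimates of the same quantity (say, a 1-D object position from a camera and from tactile contact) by weighting each with its inverse variance. All numbers and variable names here are illustrative assumptions, not part of the seminar material.

```python
def fuse_gaussians(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity.

    Each estimate is weighted by its precision (inverse variance);
    the fused variance is always smaller than either input variance,
    which is the formal sense in which modalities are complementary.
    """
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

# Hypothetical 1-D object-position estimates (metres):
# vision covers the whole scene but is noisy; touch is precise
# once contact is made.
mu_vision, var_vision = 0.52, 0.02 ** 2    # camera-based estimate
mu_touch, var_touch = 0.505, 0.005 ** 2    # tactile contact estimate

mu, var = fuse_gaussians(mu_vision, var_vision, mu_touch, var_touch)
# The fused mean lies between the two estimates, pulled towards the
# more precise tactile reading, and the fused variance is below both.
```

This is of course only the simplest static case; the research questions below concern doing this consistently across modalities, over time, and jointly with action selection.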

This Seminar seeks to formulate research questions and agendas in order to accelerate progress towards robust manipulation under uncertainty, including topics such as the following:

  • How to synthesize robust grasping/manipulation actions or behaviors in the presence of state uncertainty?
  • How to integrate multi-modal information from vision, range, and tactile sensors into a single consistent state estimate?
  • How to reduce state uncertainty by synthesizing specific sensing actions or planning information-gathering activities?
  • How to synthesize robust, reactive manipulation in the presence of distractors such as other agents or clutter?
  • How to use nonprehensile, physics-based strategies to robustly grasp and manipulate in clutter?
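The third question, on synthesizing information-gathering actions, can be illustrated with a toy active-sensing sketch: given a Gaussian belief over an object's position, pick the sensing action whose expected uncertainty reduction per unit cost is largest. The actions, noise levels, and costs are made-up placeholders chosen only to show the structure of the computation.

```python
def expected_posterior_var(prior_var, noise_var):
    """Posterior variance of a Gaussian belief after one measurement
    with the given noise variance (standard Bayesian update)."""
    return prior_var * noise_var / (prior_var + noise_var)

# Hypothetical sensing actions with measurement-noise variances and
# execution costs (all numbers illustrative).
actions = {
    "look":  {"noise_var": 0.02 ** 2, "cost": 1.0},
    "touch": {"noise_var": 0.005 ** 2, "cost": 3.0},
    "push":  {"noise_var": 0.01 ** 2, "cost": 2.0},
}

prior_var = 0.05 ** 2  # current belief: fairly uncertain

def utility(name):
    # Reward expected uncertainty reduction, penalise action cost.
    a = actions[name]
    reduction = prior_var - expected_posterior_var(prior_var, a["noise_var"])
    return reduction / a["cost"]

best = max(actions, key=utility)
# With these numbers the cheap "look" action wins: touching is more
# precise, but its extra cost outweighs the additional reduction.
```

A real planner would of course reason over full distributions, action sequences, and task relevance rather than a single scalar variance, which is precisely what the question above asks.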

Multimodal manipulation under uncertainty is a broad research agenda that requires strong synergy between theoreticians and practitioners of robotic manipulation, motion planning, control, perception, and machine learning. Our objective is to bring together key contributors in each of these fields and cultivate discussion and collaboration in order to create synergies and accelerate progress. Results of the Seminar will include

  • a characterization of concrete capabilities of manipulation under uncertainty that are needed for high-level applications such as flexible manufacturing, service or household robotics;
  • a clear understanding of the state of the art in these areas, including solved problems and open questions;
  • concrete research questions whose solution will move the field significantly forward, as well as research agendas designed to address them.