Dagstuhl Seminar 15461

Vision for Autonomous Vehicles and Probes

(Nov 08 – Nov 13, 2015)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/15461

Motivation

Vision-based autonomous driving and navigation of vehicles have a long history. In 2013, Daimler succeeded in driving autonomously on a public road. Today, the Curiosity Mars rover is sending video views from Mars to Earth.

Computer vision plays a key role in advanced driver-assistance systems (ADAS) as well as in exploratory and service robotics. Visual odometry, trajectory planning for Mars exploration rovers and the recognition of scientific targets in images are examples of successful applications. In addition, new computer vision theories focus on supporting autonomous driving and navigation in applications such as unmanned aerial vehicles (UAVs) and underwater robots.

From the viewpoint of geometric methods for autonomous driving, navigation and exploration, the on-board calibration of multiple cameras, SLAM in open environments, long-term and large-scale autonomy, and the processing of non-classical features are some of the current problems. Furthermore, the adaptation of algorithms to long image sequences, image pairs with large displacements and image sequences with changing illumination is desired for robust navigation and exploration.

Visual SLAM is an essential technique for long-term and large-scale autonomy. In visual SLAM, a map of the environment is generated using structure-from-motion (SfM) techniques from computer vision. Map building in urban environments by the SLAM of autonomous vehicles is a fundamental problem in autonomous driving. Furthermore, the generation of surface maps of planets by autonomous probes is desired in planetary geology, and the generation of forest and field maps using UAVs supports environmental planning. These requirements ask us to reconstruct environments and generate maps using non-traditional imaging systems such as wide-view cameras and the fusion of multiple spectral images. Moreover, the extraction of non-verbal and graphical information from environments is desired for remote driver-assistance systems.
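
To make the SfM core of such visual SLAM pipelines concrete, the following sketch estimates the relative camera pose between two frames and triangulates a sparse set of map points. It is a minimal example assuming OpenCV and NumPy are available; the camera intrinsics K, the frame file names and all parameter values are illustrative placeholders, not material from the seminar.

    import cv2
    import numpy as np

    # Assumed (placeholder) intrinsics of a calibrated pinhole camera.
    K = np.array([[700.0,   0.0, 320.0],
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Two consecutive frames of an image sequence (placeholder file names).
    img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

    # Detect and match ORB features between the two frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly estimate the essential matrix and recover the relative pose.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier matches into sparse 3D map points; as usual for
    # monocular SfM, the reconstruction is only determined up to scale.
    inliers = mask.ravel().astype(bool)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    map_points = (pts4d[:3] / pts4d[3]).T

A full visual SLAM system adds feature tracking over many frames, loop closure and bundle adjustment on top of this two-view core.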

Machine learning is a promising tool for estimating and acquiring parameters that describe the environments in which vehicles and robots drive. From the viewpoint of machine learning and artificial intelligence, however, an autonomous explorer may move in a new environment without any pre-observed data. Therefore, for the application of machine learning on image-based data to autonomous explorers and vehicles, it is desirable to use methods that learn from a small number of samples yet generalise to large amounts of data, and to develop validation methodologies that work without ground truth for remote exploration. Since learning from few samples and evaluation without ground truth contradict the traditional assumption of estimation from large numbers of samples, the establishment of a learning methodology for autonomous cars and exploration probes is desired.

Although the imaging system is assumed to be on-vehicle for the autonomous driving of cars, micro-UAVs use both on-vehicle and off-vehicle vision systems. Micro-UAVs without on-vehicle vision systems are controlled using a visual-servo system in the robot workspace. For the visual servoing of micro-UAVs, the classical computer vision methodology must be converted into real-time online algorithms.
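
The core computation of such a servo loop is illustrated below with the classical image-based visual servoing (IBVS) law v = -λ L⁺ e, which maps the error between tracked and desired image features to a camera velocity command. This is a textbook construction given here as a sketch, not a method from the seminar; the feature coordinates, depths and gain are assumed example values.

    import numpy as np

    def interaction_matrix(x, y, Z):
        # Image Jacobian of one normalised image point (x, y) at depth Z,
        # relating the camera twist (vx, vy, vz, wx, wy, wz) to feature motion.
        return np.array([
            [-1.0 / Z,      0.0, x / Z,       x * y, -(1.0 + x * x),  y],
            [     0.0, -1.0 / Z, y / Z, 1.0 + y * y,         -x * y, -x],
        ])

    def ibvs_velocity(current, desired, depths, gain=0.5):
        # Classical IBVS control law: v = -gain * pinv(L) @ e.
        error = (current - desired).ravel()
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(current, depths)])
        return -gain * np.linalg.pinv(L) @ error

    # Four tracked points, assumed to lie roughly 2 m in front of the camera.
    current = np.array([[0.12, 0.10], [-0.08, 0.11], [-0.09, -0.10], [0.10, -0.12]])
    desired = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
    velocity = ibvs_velocity(current, desired, depths=[2.0] * 4)
    # velocity is a 6-vector: linear (vx, vy, vz) and angular (wx, wy, wz) parts.

In an online system this command is recomputed at frame rate from freshly tracked features, which is exactly where the real-time requirements above become critical.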

For the development of real-time online computer vision algorithms, classical methods should be converted into accurate, fast, parallel versions with small memory footprints. Domain-specific descriptions of computer vision problems will enable this conversion, even though traditional computer vision algorithms have dealt with general models for reconstruction, scene understanding, motion analysis and so forth.

This mathematical reinterpretation of computer vision problems will also open up new approaches to the autonomous driving of cars, the navigation of space probes, visual remote exploration and service robotics. For these new interpretations and descriptions, we need a common discussion forum for computer vision researchers in the fields of autonomous driving, navigation, remote exploration and service robotics.


Summary

Computer vision plays a key role in advanced driver-assistance systems (ADAS) as well as in exploratory and service robotics. Visual odometry, trajectory planning for Mars exploration rovers and the recognition of scientific targets in images are examples of successful applications. In addition, new computer vision theory focuses on supporting autonomous driving and navigation in applications such as unmanned aerial vehicles (UAVs) and underwater robots. From the viewpoint of geometric methods for autonomous driving, navigation and exploration, the on-board calibration of multiple cameras, simultaneous localisation and mapping (SLAM) in non-human-made environments and the processing of non-classical features are some of the current problems. Furthermore, the adaptation of algorithms to long image sequences, image pairs with large displacements and image sequences with changing illumination is desired for robust navigation and exploration. Moreover, the extraction of non-verbal and graphical information from environments for remote driver assistance is required.

Reflecting this wide range of theoretical interests and the new possibilities for practical applications of computer vision and robotics, 38 participants (excluding organisers) attended from a variety of countries: 4 from Australia, 3 from Austria, 3 from Canada, 1 from Denmark, 11 from Germany, 1 from Greece, 1 from France, 3 from Japan, 4 from Spain, 2 from Sweden, 4 from Switzerland and 3 from the US.

The seminar was run in workshop style. Talks were 40 and 30 minutes long for young researchers and for presenters in special sessions, respectively. The talks were grouped into sessions on aerial vehicle vision, underwater and space vision, map building, and three-dimensional scene and motion understanding, as well as a dedicated session on robotics. In these sessions, various types of autonomous systems were presented, such as autonomous aerial vehicles, underwater robots, field and space probes for remote exploration, and autonomous driving cars. Moreover, applications of state-of-the-art computer vision techniques were discussed, such as global optimisation methods, deep learning approaches, and geometric methods for scene reconstruction and understanding. Finally, a joint session on autonomous driving with leading experts in the field was organised together with Seminar 15462.

The working groups focused on "Sensing", "Interpretation and Map Building", and "Deep Learning". Sensing requires fundamental methodologies in computer vision, and low-level sensing is a traditional problem in the field. For applications of computer vision algorithms to autonomous vehicles and probes, a reformulation of the problems for various conditions is required. Map building is a growing area that includes applications to autonomous robotics and urban computer vision. Today, autonomous map generation involves classical SLAM as well as large-scale reconstruction from indoor to urban scales; furthermore, SLAM requires on-board and online computation. Deep learning, whose origins go back to the 1970s, is a fundamental tool for image pattern recognition and classification. Although the method has shown significant progress in image pattern recognition and discrimination, its application to spatial recognition and three-dimensional scene understanding still needs detailed discussion and development.

Through the talks and the working-group discussions, the seminar clarified that machine vision provides fundamental and essential methodologies for designing platforms for the visual interpretation and understanding of the three-dimensional world around a system. There is another methodology that uses computer vision as a sensing system for the acquisition of geometrical data and the analysis of motion around cars; for such visual servo systems, computer vision is part of the platform of an intelligent visual servo system. The former methodology is a promising way to provide a fundamental platform common to both autonomous vehicles, which are desired for consumer intelligence, and probes, which are used for remote exploration.

Copyright Andrés Bruhn and Atsushi Imiya

Participants
  • José M. Alvarez (NICTA - Canberra, AU) [dblp]
  • Juan Andrade-Cetto (UPC - Barcelona, ES) [dblp]
  • Steven S. Beauchemin (University of Western Ontario - London, CA) [dblp]
  • Florian Becker (Sony - Stuttgart, DE) [dblp]
  • Sven Behnke (Universität Bonn, DE) [dblp]
  • Johannes Berger (Universität Heidelberg, DE) [dblp]
  • Andrés Bruhn (Universität Stuttgart, DE) [dblp]
  • Darius Burschka (TU München, DE) [dblp]
  • Daniel Cremers (TU München, DE) [dblp]
  • Krzysztof Czarnecki (University of Waterloo, CA) [dblp]
  • Cédric Demonceaux (University of Bourgogne, FR) [dblp]
  • Michael Felsberg (Linköping University, SE) [dblp]
  • Friedrich Fraundorfer (TU Graz, AT) [dblp]
  • Yasutaka Furukawa (Washington University - St. Louis, US) [dblp]
  • Rafael Garcia (University of Girona, ES) [dblp]
  • Antonios Gasteratos (Democritus University of Thrace - Xanthi, GR) [dblp]
  • Andreas Geiger (MPI für Intelligente Systeme - Tübingen, DE) [dblp]
  • Michal Havlena (ETH Zürich, CH) [dblp]
  • Heiko Hirschmüller (Roboception GmbH - München, DE) [dblp]
  • Ben Huber (Joanneum Research - Graz, AT) [dblp]
  • Atsushi Imiya (Chiba University, JP) [dblp]
  • Reinhard Koch (Universität Kiel, DE) [dblp]
  • Takashi Kubota (ISAS/JAXA, JP) [dblp]
  • Lazaros Nalpantidis (Aalborg University Copenhagen, DK) [dblp]
  • Thomas Pock (TU Graz, AT) [dblp]
  • Danil V. Prokhorov (Toyota Research Institute North America - Ann Arbor, US) [dblp]
  • Sebastian Ramos (Daimler AG - Böblingen, DE) [dblp]
  • Hayko Riemenschneider (ETH Zürich, CH) [dblp]
  • Torsten Sattler (ETH Zürich, CH) [dblp]
  • Davide Scaramuzza (Universität Zürich, CH) [dblp]
  • Bernt Schiele (MPI für Informatik - Saarbrücken, DE) [dblp]
  • Jürgen Sturm (Google - München, DE) [dblp]
  • Niko Sünderhauf (Queensland University of Technology - Brisbane, AU) [dblp]
  • Akihiko Torii (Tokyo Inst. of Technology, JP) [dblp]
  • Raquel Urtasun (University of Toronto, CA) [dblp]
  • Vladyslav Usenko (TU München, DE) [dblp]
  • David Vázquez Bermudez (Autonomous University of Barcelona, ES) [dblp]
  • Andreas Wendel (Google Inc. - Mountain View, US) [dblp]
  • Christian Winkens (Universität Koblenz-Landau, DE) [dblp]

Classification
  • artificial intelligence / robotics
  • computer graphics / computer vision

Keywords
  • Vision-based autonomous driving and navigation
  • Exploratory rovers
  • Dynamic 3D scene understanding
  • Simultaneous localization and mapping
  • On-board algorithms