08.11.15 - 13.11.15, Seminar 15461

Vision for Autonomous Vehicles and Probes

The following text appeared on our web pages prior to the seminar, and was included as part of the invitation.


The vision-based autonomous driving and navigation of vehicles has a long history. In 2013, Daimler demonstrated autonomous driving on a public road. Today, the Curiosity Mars rover is sending images from Mars to Earth.

Computer vision plays a key role in advanced driver-assistance systems (ADAS) as well as in exploratory and service robotics. Visual odometry, trajectory planning for Mars exploration rovers and the recognition of scientific targets in images are examples of successful applications. In addition, new computer vision theories focus on supporting autonomous driving and navigation in applications such as unmanned aerial vehicles (UAVs) and underwater robots.

From the viewpoint of geometric methods for autonomous driving, navigation and exploration, current problems include the on-board calibration of multiple cameras, SLAM in open environments, long-term and large-scale autonomy, and the processing of non-classical features. Furthermore, robust navigation and exploration require algorithms adapted to long image sequences, image pairs with large displacements, and image sequences with changing illumination.

Visual SLAM is an essential technique for long-term and large-scale autonomy. In visual SLAM, a map of the environment is generated using structure-from-motion (SfM) techniques from computer vision. Map building in urban environments by SLAM on autonomous vehicles is a fundamental problem in autonomous driving. Furthermore, planetary geology calls for surface maps of planets generated by autonomous probes, and forest and field maps generated by UAVs are used in environmental planning. These requirements ask us to reconstruct environments and generate maps using non-traditional imaging systems, such as wide-view cameras and the fusion of multiple spectral images. Moreover, the extraction of non-verbal and graphical information from environments is desired for remote driver-assistance systems.
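As a toy illustration of the map-building idea (a dead-reckoning sketch only, not any particular SLAM or SfM system; all names and numbers are invented for the example), the snippet below composes odometry increments into a 2D trajectory and places range/bearing landmark observations into the global frame:

```python
import math

def compose(pose, delta):
    """Compose a global pose (x, y, theta) with a relative motion
    (dx, dy, dtheta) expressed in the robot frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def build_map(odometry, observations):
    """Dead-reckon a trajectory from odometry increments and project
    per-step (range, bearing) landmark observations into the map frame."""
    pose = (0.0, 0.0, 0.0)
    trajectory = [pose]
    landmarks = []
    for delta, obs in zip(odometry, observations):
        pose = compose(pose, delta)
        trajectory.append(pose)
        for rng, bearing in obs:
            x, y, th = pose
            landmarks.append((x + rng * math.cos(th + bearing),
                              y + rng * math.sin(th + bearing)))
    return trajectory, landmarks

# two 1 m forward moves, each followed by a landmark seen 1 m ahead
trajectory, landmarks = build_map(
    odometry=[(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    observations=[[(1.0, 0.0)], [(1.0, 0.0)]])
```

A real visual SLAM pipeline would estimate the increments from image features and correct the accumulated drift by loop closure; here they are simply given.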

Machine learning is a promising tool for estimating and acquiring parameters that describe the environments in which vehicles and robots operate. From the viewpoint of machine learning and artificial intelligence, however, an autonomous explorer may have to move in a new environment without any pre-observed data. Therefore, to apply machine learning on image data to autonomous explorers and vehicles, it is desirable to learn from a small number of samples and to develop validation methodologies that work without ground truth for remote exploration. Since learning from few samples and evaluation without ground truth contradict the traditional assumption of estimation from large sample sets, a dedicated learning methodology for autonomous cars and exploration probes is needed.
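A minimal sketch of the small-sample setting, with invented data: a 1-nearest-neighbour classifier is learned from a handful of labelled samples, and leave-one-out error serves as a stand-in for validation when no separate ground-truth test set exists:

```python
def nearest_neighbor(train, query):
    """Classify a query point by its nearest labelled sample (1-NN),
    a natural baseline when only a handful of samples exist."""
    return min(train,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)))[1]

def leave_one_out_error(train):
    """Hold out each sample in turn and classify it from the rest:
    a validation proxy that needs no separate test set."""
    errors = 0
    for i, (x, y) in enumerate(train):
        rest = train[:i] + train[i + 1:]
        if nearest_neighbor(rest, x) != y:
            errors += 1
    return errors / len(train)

# four samples, two classes ('a' and 'b'), chosen for the example
train = [((0, 0), 'a'), ((0, 1), 'a'), ((5, 5), 'b'), ((5, 6), 'b')]
loo = leave_one_out_error(train)
```

This only illustrates the contradiction in the paragraph above: both the learner and its validation must get by on the same few samples.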

Although the imaging system is assumed to be on-vehicle for the autonomous driving of cars, micro-UAVs use both on-vehicle and off-vehicle vision systems. Micro-UAVs without on-vehicle vision systems are controlled by a visual-servo system in the robot workspace. For the visual servoing of micro-UAVs, classical computer vision methodology must be converted into real-time online algorithms.
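The control idea behind visual servoing can be sketched in a few lines. The toy example below (a proportional image-based servo step with an invented gain and feature positions, not a full interaction-matrix formulation) drives the pixel error of a tracked feature toward zero:

```python
def ibvs_step(feature_px, target_px, gain=0.5):
    """One proportional image-based visual-servo update: command a
    velocity proportional to the image-plane feature error."""
    ex = target_px[0] - feature_px[0]
    ey = target_px[1] - feature_px[1]
    return gain * ex, gain * ey

# simulate the closed loop: the feature converges toward the target
feat = [100.0, 40.0]
for _ in range(20):
    vx, vy = ibvs_step(feat, (0.0, 0.0))
    feat[0] += vx
    feat[1] += vy
```

The real-time requirement in the paragraph above comes from this loop: the feature must be re-detected and the command re-issued at every control cycle.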

To develop real-time online algorithms for computer vision, classical methods should be converted into accurate, fast, parallel versions with small memory footprints. Domain-specific descriptions of computer vision problems will enable this conversion, even though traditional computer vision algorithms have been designed for general models of reconstruction, scene understanding, motion analysis and so forth.
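One classical instance of such a conversion (a generic illustration, not tied to any system discussed at the seminar) is restructuring a sliding-window mean from O(n·k) work into a running-sum form with O(n) work and constant extra memory per output:

```python
def box_filter_naive(signal, k):
    """Sliding-window mean, recomputing each window: O(n*k) work."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def box_filter_running(signal, k):
    """Same result via a running sum: O(n) work and O(1) extra
    memory, the kind of restructuring real-time pipelines need."""
    half = k // 2
    out = []
    acc = sum(signal[:half + 1])
    count = half + 1
    for i in range(len(signal)):
        out.append(acc / count)
        # slide the window: add the entering sample, drop the leaving one
        j_in, j_out = i + half + 1, i - half
        if j_in < len(signal):
            acc += signal[j_in]
            count += 1
        if j_out >= 0:
            acc -= signal[j_out]
            count -= 1
    return out

demo = [1.0, 2.0, 3.0, 4.0, 5.0]
```

Both versions are numerically identical on the demo signal; only the cost model changes.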

This mathematical reinterpretation of computer vision problems will also suggest new approaches to the autonomous driving of cars, the navigation of space probes, visual remote exploration and service robotics. For these new interpretations and descriptions, we need a common discussion forum for computer vision researchers working on autonomous driving, navigation, remote exploration and service robotics.