
Dagstuhl Seminar 17391

Deep Learning for Computer Vision

(Sep 24 – Sep 29, 2017)


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/17391

Organizers
  • Daniel Cremers (TU München, DE)
  • Laura Leal-Taixé (TU München, DE)
  • Ian Reid (University of Adelaide, AU)
  • René Vidal (Johns Hopkins University - Baltimore, US)


Motivation

The paradigm that a machine can learn from examples much like humans learn from experience has fascinated researchers since the advent of computers. It has triggered numerous research developments and has given rise to the concept of artificial neural networks as a computational paradigm designed to mimic aspects of signal and information processing in the human brain.

There have been several key advances in this area, including the concept of backpropagation learning (essentially gradient descent via chain-rule differentiation with respect to the network weights), introduced by Werbos in 1974 and later popularized in the celebrated 1986 paper of Rumelhart, Hinton and Williams. Despite some success in pattern recognition challenges such as handwritten digit classification, artificial neural networks declined in popularity in the 1990s as alternative techniques such as support vector machines gained attention.
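The parenthetical description of backpropagation above can be made concrete with a toy sketch. This is a minimal NumPy illustration, not code from the seminar; the network size, learning rate, and random regression data are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 samples, 3 input features
y = rng.normal(size=(8, 1))          # regression targets

W1 = 0.1 * rng.normal(size=(3, 4))   # input -> hidden weights
W2 = 0.1 * rng.normal(size=(4, 1))   # hidden -> output weights
lr = 0.1
losses = []

for _ in range(200):
    # forward pass through a one-hidden-layer network
    h = np.tanh(X @ W1)
    pred = h @ W2
    losses.append(np.mean((pred - y) ** 2))

    # backward pass: the chain rule applied layer by layer
    g_pred = 2 * (pred - y) / len(X)      # dL/dpred
    g_W2 = h.T @ g_pred                   # dL/dW2
    g_h = g_pred @ W2.T                   # dL/dh
    g_W1 = X.T @ (g_h * (1 - h ** 2))     # dL/dW1, using tanh' = 1 - tanh^2

    # plain gradient descent on the weight matrices
    W1 -= lr * g_W1
    W2 -= lr * g_W2
```

Running the loop drives the squared-error loss down, which is all that "gradient descent plus chain rule" amounts to at this scale.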

With increasing computational power (in particular, highly parallel GPU architectures) and more sophisticated training strategies such as layer-by-layer pretraining, supervised backpropagation and dropout learning, neural networks regained popularity in the 2000s and 2010s. With deeper network architectures and more training data, their performance has drastically improved. Over the last couple of years, they have outperformed numerous existing algorithms on a variety of computer vision challenges such as object recognition, semantic segmentation and even stereo and optical flow estimation.

The aim of this Dagstuhl Seminar is to bring together leading experts from the areas of machine learning and computer vision to discuss the state of the art in deep learning for computer vision. During our seminar, we will address a variety of both experimental and theoretical questions, such as:

  1. In which types of challenges do deep learning techniques work well?
  2. In which types of challenges do they fail? Are there variations of the network architectures that may enable us to tackle these challenges as well?
  3. Which types of network architectures exist (convolutional networks, recurrent networks, deep belief networks, long short-term memory networks, neural Turing machines)? What advantages and drawbacks does each architecture bring?
  4. Which aspects are crucial for the practical performance of deep network approaches?
  5. Which theoretical guarantees can be derived for neural network learning?
  6. Which properties account for the impressive practical performance, given that the underlying cost functions are generally non-convex?
Copyright Daniel Cremers, Laura Leal-Taixé, Ian Reid, and René Vidal


Participants
  • Joan Bruna Estrach (New York University, US) [dblp]
  • Pratik Chaudhari (UCLA, US) [dblp]
  • Daniel Cremers (TU München, DE) [dblp]
  • Alexey Dosovitskiy (Intel Deutschland GmbH - Feldkirchen, DE) [dblp]
  • Vittorio Ferrari (University of Edinburgh, GB) [dblp]
  • Thomas Frerix (TU München, DE) [dblp]
  • Jürgen Gall (Universität Bonn, DE) [dblp]
  • Silvano Galliani (ETH Zürich, CH) [dblp]
  • Ravi Garg (University of Adelaide, AU) [dblp]
  • Raja Giryes (Tel Aviv University, IL) [dblp]
  • Kristen Grauman (University of Texas - Austin, US) [dblp]
  • Benjamin Haeffele (Johns Hopkins University - Baltimore, US) [dblp]
  • Philip Häusser (TU München, DE) [dblp]
  • Caner Hazirbas (TU München, DE) [dblp]
  • Iasonas Kokkinos (Facebook AI Research - Paris, FR & University College London, GB) [dblp]
  • Hildegard Kühne (Universität Bonn, DE)
  • Christoph H. Lampert (IST Austria - Klosterneuburg, AT) [dblp]
  • Laura Leal-Taixé (TU München, DE) [dblp]
  • Stéphane Mallat (École Polytechnique - Palaiseau, FR) [dblp]
  • Michael Möller (Universität Siegen, DE) [dblp]
  • Emanuele Rodolà (University of Lugano, CH & Sapienza University of Rome, IT) [dblp]
  • Rahul Sukthankar (Google Research - Mountain View, US) [dblp]
  • Niko Sünderhauf (Queensland University of Technology - Brisbane, AU) [dblp]
  • Anton van den Hengel (University of Adelaide, AU) [dblp]
  • Jan Van Gemert (TU Delft, NL) [dblp]
  • Andrea Vedaldi (University of Oxford, GB) [dblp]
  • René Vidal (Johns Hopkins University - Baltimore, US) [dblp]
  • Christoph Vogel (TU Graz, AT) [dblp]

Classification
  • artificial intelligence / robotics
  • computer graphics / computer vision

Keywords
  • deep learning
  • convolutional networks
  • computer vision
  • machine learning