Dagstuhl Seminar 08041

Recurrent Neural Networks – Models, Capacities, and Applications

(January 20 – January 25, 2008)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/08041

Organizers





Press Room

Press Review (German only)

Press Release

How Artificial Intelligence Learns from the Human Brain
January 15, 2008 (German only)


Summary

Artificial neural networks constitute one of the most successful machine learning techniques, with application areas ranging from industrial tasks to simulations of biological neural networks. Recurrent neural networks (RNNs) include cyclic connections between neurons, so they can incorporate context information or temporal dependencies in a natural way. Spatiotemporal data and context relations occur frequently in robotics, system identification and control, bioinformatics, medical and biomedical data such as EEG and ECG, sensor streams in technical applications, natural speech processing, analysis of text and web documents, and more. At the same time, the amount of available data is rapidly increasing due to dramatically improved technologies for automatic data acquisition and storage, and to easy accessibility in public databases or on the web, so high-quality analysis tools are essential for automated processing and mining of these data. Since, moreover, spatiotemporal signals and feedback connections are ubiquitous in the biological neural networks of the human brain, RNNs carry the promise of efficient, biologically plausible signal processing models: optimally suited for a wide range of industrial applications on the one hand, and an explanation of cognitive phenomena of the human brain on the other.
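
To make the role of the cyclic connections concrete, here is a minimal NumPy sketch of an Elman-style recurrent update, in which the hidden state carries context from all earlier time steps. All dimensions, weight scales, and the input sequence are illustrative assumptions, not values from the seminar.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 3, 5  # illustrative sizes (assumptions)

W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input weights
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # recurrent (cyclic) weights
b = np.zeros(n_hidden)

def rnn_forward(xs):
    """Elman-style RNN: the hidden state h carries context from all
    previous time steps via the recurrent connections."""
    h = np.zeros(n_hidden)
    states = []
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h + b)
        states.append(h)
    return np.asarray(states)

sequence = rng.normal(size=(10, n_in))  # a toy spatiotemporal signal
print(rnn_forward(sequence).shape)      # (10, 5): one context-dependent state per step
```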

However, simple feedforward neural networks (FNNs) without recurrent connections, combined with a feature encoding of complex spatiotemporal signals that neglects the structural aspects of the data, are still the preferred model in industrial and scientific applications, disregarding the potential of feedback connections. This is mainly due to the fact that traditional gradient-based training of RNNs, unlike backpropagation for FNNs, suffers from severe numerical barriers; a formal learning theory of RNNs in the classical sense of PAC learning hardly exists; RNNs easily show complex chaotic behavior which is difficult to manage; and the way humans use recurrence to cope with language, complex symbols, or logical inference is only partially understood.
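
One of these numerical barriers is the vanishing-gradient effect in backpropagation through time: the derivative of a late hidden state with respect to an early one is a product of per-step Jacobians whose norm typically decays geometrically. The following sketch illustrates this under assumed values (hidden size, weight scale, and random toy inputs are all choices made only for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden, T = 5, 50  # illustrative sizes (assumptions)

# Recurrent weights with a small spectral scale, chosen so the effect is
# clearly visible; not a value from the seminar.
W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))

h = np.zeros(n_hidden)
grad = np.eye(n_hidden)  # accumulates the product of per-step Jacobians, d h_t / d h_0
norms = []
for t in range(T):
    h = np.tanh(W_rec @ h + rng.normal(size=n_hidden))  # toy input drive
    J = np.diag(1.0 - h**2) @ W_rec  # Jacobian of one tanh step w.r.t. the previous state
    grad = J @ grad
    norms.append(np.linalg.norm(grad))

print(["%.1e" % n for n in norms[::10]])
# The norms shrink geometrically: the vanishing-gradient barrier that makes
# plain gradient training of RNNs hard over long time spans.
```

Reservoir approaches such as echo state networks and liquid state machines (both seminar keywords) sidestep this barrier by keeping the recurrent weights fixed and training only a readout.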

The aim of the seminar was to bring together researchers involved in these different areas and their recent advances, in order to further the understanding and development of efficient, biologically plausible recurrent information processing, both in theory and in applications. Although often tackled separately, these aspects (the investigation of cognitive models, the design of efficient training methods, and the integration of symbolic systems into RNNs) strongly influence each other, and they need to be integrated to achieve optimal models, algorithmic design, and theoretical grounding.

Overall, the presentations and discussions revealed that RNNs constitute a highly diverse and evolving field which opens interesting perspectives on machine learning for structures in the broadest sense. Quite a few open problems still await researchers, a central one being efficient and robust learning methods for challenging domains.


Participants
  • Sebastian Bader (Universität Rostock, DE)
  • Howard Blair (Syracuse University, US)
  • Hendrik Blockeel (KU Leuven, BE) [dblp]
  • Jehoshua (Shuki) Bruck (CalTech - Pasadena, US)
  • Matthew Cook (ETH Zürich, CH) [dblp]
  • Artur d'Avila Garcez (City University - London, GB) [dblp]
  • Luc De Raedt (KU Leuven, BE) [dblp]
  • Ulrich Egert (Universität Freiburg, DE)
  • Dov M. Gabbay (King's College London, GB) [dblp]
  • Marco Gori (University of Siena, IT) [dblp]
  • Tayfun Gürel (Universität Freiburg, DE)
  • Barbara Hammer (Universität Bielefeld, DE) [dblp]
  • Stefan Häusler (TU Graz, AT)
  • Ivan Herreros-Alonso (Univ. Pompeu Fabra - Barcelona, ES)
  • Pascal Hitzler (Wright State University - Dayton, US) [dblp]
  • Steffen Hölldobler (TU Dresden, DE) [dblp]
  • Kristian Kersting (Fraunhofer IAIS - St. Augustin, DE) [dblp]
  • Kai-Uwe Kühnberger (Universität Osnabrück, DE) [dblp]
  • Luis C. Lamb (Federal University of Rio Grande do Sul, BR) [dblp]
  • Wolfgang Maass (TU Graz, AT) [dblp]
  • Günther Palm (Universität Ulm, DE) [dblp]
  • Stefan Rotter (IGPP Freiburg, DE)
  • Jürgen Schmidhuber (IDSIA - Manno, CH) [dblp]
  • Jochen J. Steil (Universität Bielefeld, DE)
  • Peter Tino (University of Birmingham, GB) [dblp]
  • Frank Van der Velde (Leiden University, NL) [dblp]
  • Heiko Wersing (Honda Research Europe - Offenbach, DE)
  • Gerson Zaverucha (Federal University of Rio de Janeiro, BR)
  • Hans-Georg Zimmermann (Siemens AG - München, DE)

Classification
  • artificial intelligence / robotics
  • soft computing / evol. algorithms
  • cognitive science

Keywords
  • Recurrent neural networks
  • liquid state machine
  • echo state network
  • dynamical systems
  • speech processing
  • neurobiology
  • neurosymbolic integration