January 20 – 25 , 2008, Dagstuhl Seminar 08041
Recurrent Neural Networks - Models, Capacities, and Applications
Press Review (German only)
How artificial intelligence learns from the human brain
Interview by Wolfgang Back (Computerclub Zwei) with Prof. Dr. Barbara Hammer (TU Clausthal); Episode 87, 28.01.08;
How artificial intelligence learns from the human brain
15.01.2008 (German only)
Artificial neural networks (ANNs) constitute one of the most successful machine learning techniques, with application areas ranging from industrial tasks to simulations of biological neural networks. Recurrent neural networks (RNNs) include cyclic connections between neurons, so that they can incorporate context information or temporal dependencies in a natural way. Spatiotemporal data and context relations occur frequently in robotics, system identification and control, bioinformatics, medical and biomedical data such as EEG and ECG, sensor streams in technical applications, natural speech processing, analysis of text and web documents, and more. At the same time, the amount of available data is rapidly increasing due to dramatically improved technologies for automatic data acquisition and storage, and due to easy accessibility in public databases or on the web, so that high-quality analysis tools are essential for automated processing and mining of these data. Since, moreover, spatiotemporal signals and feedback connections are ubiquitous in the biological neural networks of the human brain, RNNs carry the promise of efficient, biologically plausible signal processing models, optimally suited for a wide range of industrial applications on the one hand and an explanation of cognitive phenomena of the human brain on the other.
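The recurrent update at the heart of an RNN can be sketched in a few lines of plain Python. The dimensions, random weights, and tanh nonlinearity below are illustrative assumptions, not a specific model from the seminar; the point is only that the hidden state carries context from earlier inputs forward through time:

```python
import math
import random

random.seed(0)

def rnn_step(h, x, W, U, b):
    """One recurrent update: h_t = tanh(W.h_{t-1} + U.x_t + b)."""
    return [math.tanh(sum(W[i][j] * h[j] for j in range(len(h)))
                      + sum(U[i][k] * x[k] for k in range(len(x)))
                      + b[i])
            for i in range(len(h))]

# Tiny illustrative sizes (hypothetical): 3 hidden units, 2 inputs.
n_h, n_x = 3, 2
W = [[random.uniform(-0.5, 0.5) for _ in range(n_h)] for _ in range(n_h)]
U = [[random.uniform(-0.5, 0.5) for _ in range(n_x)] for _ in range(n_h)]
b = [0.0] * n_h

h = [0.0] * n_h
sequence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for x in sequence:
    # The new hidden state depends on both the current input and the
    # previous hidden state, i.e. on the whole sequence seen so far.
    h = rnn_step(h, x, W, U, b)
print(h)
```

Because `h` is fed back at every step, the final state is a summary of the entire input sequence rather than of the last input alone, which is exactly the context sensitivity that feedforward networks lack.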
However, simple feedforward neural networks (FNNs) without recurrent connections, combined with a feature encoding of complex spatiotemporal signals that neglects the structural aspects of the data, are still the preferred model in industrial and scientific applications, disregarding the potential of feedback connections. This is mainly due to the fact that, unlike backpropagation training of FNNs, traditional training of RNNs suffers from severe numerical barriers; a formal learning theory of RNNs in the classical sense of PAC learning hardly exists; RNNs easily show complex chaotic behavior which is difficult to control; and the way humans use recurrence to cope with language, complex symbols, or logical inference is only partially understood.
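One of the best-known numerical barriers in gradient-based RNN training is the vanishing-gradient effect: the derivative of a late hidden state with respect to an early one is a product of per-step factors, and for contractive weights this product shrinks exponentially with sequence length. The scalar sketch below (hypothetical weights and input signal, chosen only for illustration) makes this concrete:

```python
import math

# Scalar RNN: h_t = tanh(w*h_{t-1} + u*x_t).
# By the chain rule, d h_T / d h_0 is the product over t of the
# local derivatives w * (1 - h_t**2), one factor per time step.
w, u = 0.9, 0.5
h, grad = 0.0, 1.0
grads = []
for t in range(50):
    h = math.tanh(w * h + u * math.sin(t))  # arbitrary input signal
    grad *= w * (1.0 - h * h)               # backprop-through-time factor
    grads.append(abs(grad))

print(grads[0], grads[9], grads[49])  # magnitude shrinks with depth in time
```

Since every factor is bounded by |w| < 1, the gradient after 50 steps is vanishingly small, which is why naive backpropagation through time struggles to learn long-range temporal dependencies.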
The aim of the seminar was to bring together researchers involved in these different areas and their recent advances, in order to further the understanding and development of efficient, biologically plausible recurrent information processing, both in theory and in applications. Although often tackled separately, these aspects (the investigation of cognitive models, the design of efficient training algorithms, and the integration of symbolic systems into RNNs) strongly influence each other, and they need to be integrated to achieve optimal models, algorithmic design, and theoretical foundations.
Overall, the presentations and discussions revealed that RNNs constitute a highly diverse and evolving field which opens interesting perspectives for machine learning on structured data in the broadest sense. The field still holds quite a few open problems for researchers, a central one being efficient and robust learning methods for challenging domains.
- Artificial Intelligence / Robotics
- Soft Computing / Evolutionary Algorithms
- Cognitive Science
- Recurrent neural networks
- Liquid state machine
- Echo state network
- Dynamical systems
- Speech processing
- Neurosymbolic integration