Dagstuhl Seminar 19152

Emerging Hardware Techniques and EDA Methodologies for Neuromorphic Computing

(April 7 – April 10, 2019)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/19152

Organizers

Contact


Motivation

Arguably the most exciting advancement in Artificial Intelligence in the past decade is the wide application of deep learning systems. Evolved from conventional neural networks and machine learning algorithms, deep learning introduces multiple layers with complex structures or multiple nonlinear transformations to model high-level abstractions of the data. The ability to learn tasks from examples makes deep learning particularly attractive for cognitive applications such as image and speech recognition, object detection, and natural language processing. Nonetheless, the rapid growth of learning-model sizes in state-of-the-art applications far exceeds the improvements in microprocessor computing capacity and computer-cluster size. In addition, it is widely accepted that conventional computing paradigms will not scale to these machine intelligence applications because of rapidly increasing energy consumption and hardware cost. This concern has motivated active research on new and alternative computing architectures.
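The layered structure described above can be made concrete with a short sketch: a deep network is simply a stack of layers, each applying a linear map followed by a nonlinear transformation. The layer sizes, random weights, and choice of the ReLU nonlinearity below are illustrative assumptions, not details from the seminar.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity: negative values are clipped to zero.
    return np.maximum(0.0, x)

def forward(x, weights):
    """Pass input x through a stack of nonlinear layers."""
    for w in weights:
        x = relu(w @ x)  # linear map, then nonlinear transformation
    return x

# Three layers mapping a 4-dim input to a 2-dim high-level representation.
layer_weights = [rng.standard_normal((8, 4)),
                 rng.standard_normal((8, 8)),
                 rng.standard_normal((2, 8))]
output = forward(rng.standard_normal(4), layer_weights)
print(output.shape)
```

Each additional layer composes another nonlinear transformation, which is what lets deep models represent progressively more abstract features of the data.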

Neuromorphic computing systems, which refer to computing architectures inspired by the working mechanism of the human brain, have gained considerable attention. The human neocortex naturally possesses a massively parallel architecture with closely coupled memory and computing as well as unique analog-domain operations. Its simple, unified building blocks (i.e., neurons) follow an integrate-and-fire mechanism, leading to an ultra-high computing performance beyond 100 TFLOPS (trillion floating-point operations per second) at a power consumption of merely 20 watts. By imitating this structure, neuromorphic computing systems are anticipated to outperform conventional computer systems across various application areas. In the past few years, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. Examples include IBM's TrueNorth chip, Intel's Loihi chip, the SpiNNaker machine of the EU Human Brain Project, and the BrainScaleS neuromorphic system developed at the University of Heidelberg.
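The integrate-and-fire mechanism mentioned above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron model: the membrane potential integrates incoming current, leaks toward its resting value, and emits a spike (then resets) when it crosses a threshold. All parameter values and names here are illustrative assumptions, not taken from any of the chips mentioned.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters (tau, v_thresh, ...) are illustrative, not from the seminar.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate an input-current sequence; return the spike times."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)   # record spike time
            v = v_reset        # reset after firing
    return spikes

# A constant suprathreshold input produces a regular spike train.
print(simulate_lif([1.5] * 200))
```

Because computation here is event-driven (a neuron only "costs" energy when it spikes), this mechanism underlies the extreme energy efficiency that neuromorphic hardware aims to imitate.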

To enable large-scale neuromorphic computing systems with real-time learning capability, design methodologies for effective and efficient functionality verification, robustness evaluation, and chip testing/debugging become essential. Hardware innovation and electronic design automation (EDA) tools are required to realize energy-efficient and reliable hardware for machine intelligence, both on cloud servers demanding extremely high performance and on edge devices with severe power and area constraints.

The goal of this Dagstuhl Seminar is to bring together experts to present and develop new ideas and concepts for design methodologies and EDA tools for neuromorphic computing systems. Topics to be discussed include, but are not limited to, the following:

  • Design methodology for neuromorphic computing systems
  • EDA methodologies and tools
  • Architecture designs such as custom temporal parallel approach and dataflow designs
  • Implementations of synaptic plasticity, short-term adaptation and homeostatic mechanisms
  • Neuromorphic system integration and demonstration
  • Fault tolerance and reliability
  • Testing and debugging for neuromorphic systems

As possible results, we expect a better understanding of the respective areas, fresh impetus for further research directions, and ideas for areas that will heavily influence research in hardware and design automation for neuromorphic computing systems in the coming years. The seminar will facilitate greater interdisciplinary interaction between researchers in neuroscience, chip designers, system architects, device engineers, and computer scientists.

Copyright Krishnendu Chakrabarty, Tsung-Yi Ho, Hai Li, and Ulf Schlichtmann

Summary

The explosion of big-data applications imposes severe challenges of data-processing speed and scalability on traditional computer systems. Moreover, the performance of the von Neumann architecture is greatly hindered by the growing gap between CPU and memory performance, motivating active research on new and alternative computing architectures. Neuromorphic computing systems, which refer to computing architectures inspired by the working mechanism of the human brain, have gained considerable attention. The human neocortex naturally possesses a massively parallel architecture with closely coupled memory and computing as well as unique analog-domain operations. By imitating this structure, neuromorphic computing systems are anticipated to outperform conventional computer systems across various application areas. In the past few years, extensive research has been devoted to developing large-scale neuromorphic systems; examples include IBM's TrueNorth chip, the SpiNNaker machine of the EU Human Brain Project, the BrainScaleS neuromorphic system developed at the University of Heidelberg, and Intel's Loihi chip. These attempts still fall short of expectations for energy-efficient neuromorphic computing systems with online, real-time learning and inference capability: the bottlenecks of computation requirements, memory latency, and communication overhead remain showstoppers. Moreover, design automation support for neuromorphic systems, including functionality verification, robustness evaluation, and chip testing and debugging, is still lacking. Hardware innovation and electronic design automation (EDA) tools are required to enable energy-efficient and reliable hardware implementations of machine intelligence, both on cloud servers demanding extremely high performance and on edge devices with severe power and area constraints.

The goal of the seminar was to bring together experts from different areas to present and develop new ideas and concepts for emerging hardware techniques and EDA methodologies for neuromorphic computing. Topics that were discussed included:

  • Neuroscience basics
  • Physical fundamentals
  • New devices and device modeling
  • Circuit design and logic synthesis
  • Architectural innovations
  • Neurosynaptic processor and system integration
  • Design automation techniques
  • Simulation and emulation of neuromorphic systems
  • Reliability and robustness
  • Efficiency and scalability
  • Hardware/software co-design
  • Applications

The seminar facilitated greater interdisciplinary interaction between physicists, chip designers, architects, system engineers, and computer scientists. High-quality presentations and lively discussions were ensured by inviting carefully selected experts, all of whom have established stellar reputations in their respective domains. As a result, we developed a better understanding of the respective areas and generated impetus for new research directions and ideas for areas that will heavily influence research in neuromorphic design over the coming years.

At the end of the seminar, we identified the following four areas as being among the most important topics for future research: computing-in-memory, brain-inspired design and architecture, new technologies and devices, and reliability and robustness. These research topics are certainly not restricted to, and cannot be solved within, a single domain. It is therefore imperative to foster interactions and collaborations across different areas.

Copyright Hai Li

Participants
  • Krishnendu Chakrabarty (Duke University - Durham, US) [dblp]
  • Meng-Fan Chang (National Tsing Hua University - Hsinchu, TW) [dblp]
  • Jian-Jia Chen (TU Dortmund, DE) [dblp]
  • Yiran Chen (Duke University - Durham, US) [dblp]
  • Federico Corradi (Stichting IMEC Nederland - Eindhoven, NL) [dblp]
  • Rolf Drechsler (Universität Bremen, DE) [dblp]
  • Deliang Fan (University of Central Florida - Orlando, US) [dblp]
  • Tsung-Yi Ho (National Tsing Hua University - Hsinchu, TW) [dblp]
  • Alex Pappachen James (Nazarbayev University, KZ) [dblp]
  • Bing Li (TU München, DE) [dblp]
  • Hai Li (Duke University - Durham, US) [dblp]
  • Darsen Lu (National Cheng Kung University - Tainan, TW)
  • Christoph Maier (University of California - San Diego, US) [dblp]
  • Onur Mutlu (ETH Zürich, CH) [dblp]
  • Qinru Qiu (Syracuse University, US) [dblp]
  • Garrett S. Rose (University of Tennessee, US) [dblp]
  • Yulia Sandamirskaya (Universität Zürich, CH) [dblp]
  • Johannes Schemmel (Universität Heidelberg, DE) [dblp]
  • Ulf Schlichtmann (TU München, DE) [dblp]
  • Yu Wang (Tsinghua University - Beijing, CN) [dblp]
  • Chia-Lin Yang (National Taiwan University - Taipei, TW) [dblp]

Classification
  • artificial intelligence / robotics
  • hardware
  • modelling / simulation

Keywords
  • Neuromorphic computing
  • nanotechnology
  • hardware design
  • electronic design automation
  • reliability and robustness