Learning to Learn in Spiking Neural Networks
Kirchhoff Institute for Physics, University of Heidelberg (Germany)
Local Project ID: chhd34
HPC Platform used: JUWELS at JSC
While impressive progress in machine learning has enabled (super-)human performance of non-spiking artificial neural networks, a critical challenge remains the large amount of labeled data required for training. In comparison, biological learning often succeeds with few or even single training examples. A central hypothesis is that biological systems possess structural constraints that allow fast adaptation and learning in specific domains of knowledge, ranging from the visual recognition of simple patterns to complex language learning tasks; these phenomena depend on pre-existing brain structures that constrain the learning process.
Substantial progress can be achieved by applying computing-intensive learning-to-learn (L2L) or meta-learning methods that employ evolutionary processes to automatically search and optimize large numbers of hyperparameters. Such computing-intensive training methods are an ideal use case for the efficient usage of High Performance Computing (HPC).
This project investigated new high-throughput methods across a variety of domains for biologically inspired spiking neural networks. The different sub-projects explored a variety of HPC tools and learning algorithms to identify optimal methods for automatically generating improved parameters for machine learning algorithms using L2L methods. Different techniques were applied to study and enhance learning performance in biological neural networks and to equip variants of data-driven models with fast learning capabilities. Most of the training techniques currently used in artificial neural networks are not directly transferable to biological neural networks, having been developed purely for performance without regard to biological plausibility. Within this project, different algorithms were developed and used for biologically inspired Artificial Intelligence (AI) that aim to reproduce mechanisms present in the brain while achieving high performance. The project also included applications of these learning techniques on neuromorphic hardware and designs for their future use in neurorobotics.
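The two-loop structure of such L2L methods can be illustrated with a minimal sketch. All names and the toy task below are hypothetical and not the project's actual code: an outer evolutionary loop scores each candidate hyperparameter (here, a single learning rate) by running an inner learning loop on a family of sampled tasks, keeps the fittest candidates, and refills the population by mutation.

```python
import random

# Hypothetical task family: minimize f(w) = (w - target)^2, where the
# target shifts from task to task. The inner loop is plain gradient
# descent configured by a hyperparameter (the learning rate); the outer
# loop evolves that hyperparameter with a simple truncation-selection
# evolutionary strategy.

def inner_loop_fitness(lr, task_target, steps=20):
    """Run gradient descent on one task; return negative final loss."""
    w = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - task_target)
        w -= lr * grad
    return -((w - task_target) ** 2)

def outer_loop_evolve(generations=30, pop_size=12, n_tasks=5, seed=1):
    """Evolve the learning rate to maximize mean fitness over the task family."""
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        tasks = [rng.uniform(-2.0, 2.0) for _ in range(n_tasks)]
        # Fitness of a hyperparameter = mean performance across tasks.
        scored = sorted(
            population,
            key=lambda lr: sum(inner_loop_fitness(lr, t) for t in tasks) / n_tasks,
            reverse=True,
        )
        parents = scored[: pop_size // 2]
        # Mutate the best half to refill the population, clamped to (0, 1].
        population = parents + [
            min(1.0, max(1e-4, p + rng.gauss(0.0, 0.05))) for p in parents
        ]
    return population[0]

best_lr = outer_loop_evolve()
```

In the project itself the inner loop is the training of a spiking neural network and the hyperparameters number in the hundreds or thousands, which is what makes the outer evolutionary search embarrassingly parallel and well suited to HPC: each candidate's inner loop can be evaluated on a separate node.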
This project was carried out using the JUWELS supercomputer at the Jülich Supercomputing Centre. Given the variety of software and techniques explored, it was necessary to make use of both CPUs and GPUs to address the different questions posed.
This large collaborative effort led to thirteen publications, two conference presentations, and one magazine article within two years. Five completed PhD theses made use of the computing resources in this project, and another is nearing completion.
The work performed within this project is useful to the neuroscience community in the areas of simulation, information coding, learning, plasticity, and neuromorphic hardware development. The AI community can also benefit from the new insights and the new algorithms developed with inspiration from neuroscience.
Publications with the appropriate acknowledgement
1. van Albada SJ, Korcsak-Gorzo A, Yegenoglu A, Klijn W, van Meegen A, Diaz-Pier S, Peyser A. Spiking neural networks that learn to learn. Bernstein Feature on Artificial Intelligence and Machine Learning, July 2019.
1. van Albada SJ (2018) “Learning to learn in data-based models of cortex,” 6th Annual Human Brain Project Summit (plenary), Maastricht, the Netherlands
2. Korcsak-Gorzo A, van Albada SJ (2018) “Learning to learn in data-based models of cortex”, session ‘Learning to learn’, 6th Annual Human Brain Project Summit, Maastricht, the Netherlands
1. [Bellec et al. 2019], "Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets"
2. [Bellec et al. 2018], "Long short-term memory and learning-to-learn in networks of spiking neurons", Advances in Neural Information Processing Systems (NeurIPS), 2018
3. [Stoeckl et al., 2020], C. Stoeckl and W. Maass. Optimized spiking neurons can classify images with high accuracy through temporal coding with two spikes. arXiv:2002.00860v4, 2020. In press at Nature Machine Intelligence.
4. [Bellec et al., 2020], G. Bellec, F. Scherr, A. Subramoney, E. Hajek, D. Salaj, R. Legenstein, and W. Maass. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11:3625, 2020.
5. [Scherr et al., 2020], F. Scherr, C. Stoeckl, and W. Maass. One-shot learning with spiking neural networks. bioRxiv, 2020.
6. [Salaj et al., 2020], D. Salaj, A. Subramoney, C. Kraisnikovic, G. Bellec, R. Legenstein, and W. Maass. Spike-frequency adaptation supports network computations on temporally dispersed information. bioRxiv, 2020.
7. [Subramoney et al., 2021a], A. Subramoney, F. Scherr, and W. Maass. Reservoirs learn to learn. In Reservoir Computing: Theory, Physical Implementations, and Applications, K. Nakajima and I. Fischer, editors. Springer, 2021. Draft on arXiv:1909.07486v1.
8. [Subramoney et al., 2021b], A. Subramoney, G. Bellec, F. Scherr, R. Legenstein, and W. Maass. Revisiting the role of synaptic plasticity and network dynamics for fast learning in spiking neural networks. bioRxiv, 2021.
9. [Bennett et al. 2020], J. E. M. Bennett, A. Philippides, and T. Nowotny. Learning with reinforcement prediction errors in a model of the Drosophila mushroom body. bioRxiv, DOI: 10.1101/776401. Accepted at Nature Communications.
10. [Jordan et al. 2020], J. Jordan, M. Schmidt, W. Senn, and M. A. Petrovici. Evolving to learn: discovering interpretable plasticity rules for spiking networks. arXiv:2005.14149, 2020.
1. Christian Pehle, Gerd Kiene, Sebastian Billaudelle, Korbinian Schreiber, Sebastian Schmitt, and Johannes Schemmel. "Stochastic Computing with a Neuromorphic System". In preparation.
2. Thanos Manos, Sandra Diaz-Pier, and Peter A. Tass. "Long-term desynchronization by coordinated reset stimulation in a neural network model with synaptic and structural plasticity". In preparation.
3. James Bennett, Garibaldi Pineda, James Knight, Luca Manneschi, Eleni Vasilaki, Paul Graham, Andrew Philippides, and Thomas Nowotny. "Learning and forgetting from the perspective of control". In preparation.
Principal Investigator: Dr. Johannes Schemmel, Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
Project contributors: W. Maass1 , R. A. Legenstein1 , G. Bellec1, A. Subramoney1, T. Bohnstingl1, A. Rao1, D. Salaj1, F. Scherr1, J. Bennett2, J. Knight2, T. Nowotny2, C. Pehle3, S. Schmitt3, E. Mueller3, K. Meier3, J. Jordan4, M. Schmidt5, M. Petrovici4, W. Senn4, S. Diaz-Pier6, W. Klijn6, A. Yegenoglu6, A. Peyser6, A. Morrison6,7,8, A. van Meegen7, A. Korcsak-Gorzo7, M. Diesmann7, S. van Albada7
1 Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
2 Centre for Computational Neuroscience and Robotics, Department of Informatics, University of Sussex, Brighton, UK
3 Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
4 Department of Physiology, University of Bern, Bern, Switzerland
5 Laboratory for Neural Coding and Brain Computing, RIKEN Center for Brain Science, Tokyo, Japan
6 Forschungszentrum Jülich GmbH, Institute for Advanced Simulation, Jülich Supercomputing Centre (JSC), SimLab Neuroscience, JARA, 52425 Jülich, Germany
7 Institute for Advanced Simulation (IAS-6), Theoretical Neuroscience & Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, JARA, Jülich Research Center, Jülich, Germany
8 Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
9 FZI Research Center for Information Technology, Karlsruhe, Germany
10 Technical University of Munich, Munich, Germany
Dr. Johannes Schemmel
Kirchhoff Institute for Physics (KIP)
University of Heidelberg
Im Neuenheimer Feld 227, D-69120 Heidelberg (Germany)
e-mail: schemmel [@] kip.uni-heidelberg.de
Local project ID: chhd34