

Many neural systems encode information by distributing it across the activities of large populations of spiking neurons. A large body of work has provided pivotal insights into the nature of the resulting population codes and into their generation through the internal dynamics of neural networks. However, it has been much harder to understand how such population codes can emerge in spiking neural networks through the learning of synaptic connectivities. For sensory systems, the efficient coding hypothesis has provided a useful guiding principle, and it has been successfully applied to the problem of unsupervised learning in feedforward rate networks. When transferring the insights gained in these simplified rate networks to more realistic, biological networks, two key challenges have been encountered. The first challenge comes from locality constraints: synapses usually have access only to pre- and postsynaptic information, whereas most unsupervised learning rules derived in rate networks rely on omniscient synapses that can pool information from across the network. Consequently, the derivation of learning rules under locality constraints has often relied on heuristics or approximations, although more recent work has made progress in this area. We note that supervised learning in neural networks faces similar problems, and recent work has sought to address them; here, we focus on unsupervised learning. The second challenge comes from the spikes themselves, which have often proved quite a nuisance when moving insights from rate networks to spiking networks.

Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning, as typically observed in cortex. The strong variability at the level of single neurons paradoxically coincides with a precise, non-redundant, and spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for recurrent neural networks from first principles, and we show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, which is in turn governed by the statistics of the input signals. After learning, the voltage of each neuron can be interpreted as measuring the instantaneous error of the code, given by the difference between the desired output signal and the actual output signal.
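To make the voltage-as-error interpretation concrete, it helps to write it out in the efficient spike-coding framework that this study builds on. Let x(t) be the signal to be encoded, r(t) the vector of filtered spike trains, and D a linear decoder, so that the network readout is \hat{x}(t) = D\,r(t); these symbols are our notation for this sketch rather than the paper's. The membrane voltage of neuron i is then the coding error projected onto that neuron's decoding weights,

V_i(t) = D_i^\top \big( x(t) - \hat{x}(t) \big), \qquad T_i = \tfrac{1}{2} \lVert D_i \rVert^2,

and neuron i fires whenever V_i(t) > T_i. The threshold marks exactly the point at which a spike improves the code: a spike of neuron i shifts the readout by D_i, and \lVert x - \hat{x} - D_i \rVert^2 < \lVert x - \hat{x} \rVert^2 holds precisely when D_i^\top (x - \hat{x}) > \tfrac{1}{2} \lVert D_i \rVert^2.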

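As a toy illustration of how local, spike-triggered plasticity can shape the recurrent connectivity of such a network, the following Python sketch simulates a small spike-coding population in which each recurrent weight is nudged, whenever its presynaptic neuron fires, toward minus the postsynaptic error signal. The parameter values, the greedy one-spike-per-time-step integration, and the specific update rule are our illustrative assumptions, not the paper's published algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 20, 2               # number of neurons, input dimensions
    dt, lam = 1e-3, 10.0       # Euler time step, leak rate (assumed values)
    mu, eta = 0.02, 0.01       # quadratic firing cost, learning rate (assumed)

    D = rng.normal(size=(K, N)) / np.sqrt(N)   # fixed decoding weights
    F = D.T.copy()                             # feedforward weights, kept fixed here
    T = 0.5 * (np.sum(D**2, axis=0) + mu)      # spiking thresholds
    W = -0.1 * np.eye(N)                       # recurrent weights; weak initial self-reset
    W_opt = -(D.T @ D + mu * np.eye(N))        # optimal recurrence, for comparison only

    V = np.zeros(N)            # membrane voltages
    r = np.zeros(N)            # filtered spike trains

    for step in range(200_000):
        # slowly varying two-dimensional input command
        c = 5.0 * np.sin(2.0 * np.pi * np.array([1.0, 1.3]) * step * dt)
        V += dt * (-lam * V + F @ c)           # leaky integration of feedforward drive
        r += dt * (-lam * r)
        j = int(np.argmax(V - T))
        if V[j] > T[j]:                        # greedy: at most one spike per step
            # local update: uses only the presynaptic spike (neuron j) and
            # postsynaptic quantities (V_i, r_i) available at synapse (i, j)
            W[:, j] += eta * (-(V + mu * r) - W[:, j])
            V += W[:, j]                       # recurrent kick, includes self-reset
            r[j] += 1.0

    print("distance to optimal recurrence:", np.linalg.norm(W - W_opt))

At the fixed point of this update, each weight W[i, j] equals the average of -(V[i] + mu * r[i]) over the spike times of neuron j, which is the sense in which purely local information can pull the recurrent weights toward the optimal connectivity W_opt; the paper itself derives its learning rules from first principles rather than from a heuristic of this kind.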