Scalable Learning in Recurrent Spiking Neural Networks without Backpropagation
A recent arXiv paper introduces a multi-layer recurrent spiking neural network (SNN) architecture for scalable supervised learning without backpropagation or surrogate gradients. The architecture combines locally dense recurrent layers with sparse long-range projections to a readout population. To preserve routing efficiency and hardware scalability, the long-range connections remain largely fixed, while synaptic adaptation relies on strictly local plasticity mechanisms. The learning framework integrates population-based winner-take-all teaching signals at the output layer, fixed random broadcast alignment feedback pathways, and low-dimensional modulatory neurons. The approach addresses scalable learning in deep recurrent SNNs with sparse connectivity, offering a biologically inspired alternative to backpropagation.
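To make the connectivity pattern concrete, here is a minimal, hypothetical sketch (not the paper's code) of a single layer of leaky integrate-and-fire neurons with dense local recurrence and a fixed, sparse long-range projection to a readout population; all sizes, time constants, and the input process are illustrative assumptions.

```python
# Illustrative sketch only: dense local recurrence, fixed sparse long-range readout projection.
import numpy as np

rng = np.random.default_rng(0)

N_HID, N_OUT, T = 200, 10, 100      # hypothetical layer size, readout size, simulation steps
TAU, V_TH = 20.0, 1.0               # membrane time constant (in steps) and spike threshold
P_LONG = 0.05                       # sparsity of the fixed long-range projection

W_rec = rng.normal(0.0, 0.1, (N_HID, N_HID))                         # dense local recurrence (plastic)
mask = rng.random((N_OUT, N_HID)) < P_LONG                           # sparse connectivity mask
W_long = np.where(mask, rng.normal(0.0, 0.5, (N_OUT, N_HID)), 0.0)   # long-range weights stay fixed

v = np.zeros(N_HID)                 # membrane potentials
spikes = np.zeros(N_HID)            # spike vector from the previous step
readout = np.zeros(N_OUT)

for t in range(T):
    inp = rng.random(N_HID) < 0.05                      # placeholder Poisson-like external input
    v = (1.0 - 1.0 / TAU) * v + W_rec @ spikes + inp    # leaky integration of recurrent + input drive
    spikes = (v >= V_TH).astype(float)                  # threshold crossing produces a spike
    v = np.where(spikes > 0, 0.0, v)                    # reset membrane after spiking
    readout += W_long @ spikes                          # accumulate activity at the readout population

print("readout spike-driven activity:", readout)
```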
Key facts
- Paper published on arXiv with ID 2605.00402
- Proposes structured multi-layer recurrent SNN architecture
- Uses locally dense recurrent layers with sparse small-world long-range projections
- Long-range connectivity is largely fixed
- Synaptic adaptation uses strictly local plasticity mechanisms
- Learning framework includes winner-take-all teaching signals (a sketch combining these learning ingredients follows this list)
- Uses fixed random broadcast alignment feedback pathways
- Includes low-dimensional modulatory neurons
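The following is a hedged sketch of how these ingredients could combine into a local update: a winner-take-all teaching signal at the output, broadcast to hidden units through fixed random feedback weights, gated by a scalar modulatory factor. The paper's actual rule may differ; all names, shapes, and the trace-based form of the update are assumptions for illustration.

```python
# Hypothetical local plasticity sketch: broadcast-alignment error times local activity, scaled by a modulator.
import numpy as np

rng = np.random.default_rng(1)
N_HID, N_OUT = 200, 10
LR = 1e-3

B = rng.normal(0.0, 0.1, (N_HID, N_OUT))   # fixed random broadcast feedback weights (never trained)

def wta_teaching_signal(output_rates, target_class):
    """One-hot target minus a winner-take-all readout of the output population."""
    teach = np.zeros_like(output_rates)
    teach[target_class] = 1.0
    wta = np.zeros_like(output_rates)
    wta[np.argmax(output_rates)] = 1.0
    return teach - wta

def local_update(W_rec, pre_trace, post_trace, output_rates, target_class, modulator):
    """Local update: error broadcast to hidden units, combined with pre/post traces and a modulatory gate."""
    err = wta_teaching_signal(output_rates, target_class)   # output-layer teaching error
    feedback = B @ err                                       # fixed random projection of error to hidden units
    dW = np.outer(feedback * post_trace, pre_trace)          # outer product of gated post activity and pre trace
    return W_rec + LR * modulator * dW

# Toy usage with random traces and rates
W_rec = rng.normal(0.0, 0.1, (N_HID, N_HID))
pre, post = rng.random(N_HID), rng.random(N_HID)
rates = rng.random(N_OUT)
W_rec = local_update(W_rec, pre, post, rates, target_class=3, modulator=1.0)
```

Because every factor in the update (pre/post traces, broadcast error, modulatory gate) is available locally at the synapse or as a low-dimensional broadcast, no backward pass through the network is required, which is the property the summary emphasizes.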