Gradient-Free Continual Learning Method Developed for Spiking Neural Networks
A new gradient-free synaptic importance metric called ISI-CV has been proposed for continual learning in spiking neural networks (SNNs). The approach addresses a key limitation of existing methods such as Elastic Weight Consolidation (EWC) and Synaptic Intelligence (SI), which depend on gradient computation and are therefore incompatible with neuromorphic hardware that lacks backpropagation support.
The method analyzes the coefficient of variation (CV) of inter-spike intervals (ISIs) to identify neurons with regular firing patterns, on the premise that such neurons encode stable, task-relevant features. These neurons are protected from overwriting during new task acquisition, while neurons with irregular firing patterns are left free to adapt. The technique requires only spike time counters and integer arithmetic, operations natively supported on neuromorphic chips. This enables SNNs to acquire new tasks sequentially without forgetting prior knowledge, a capability essential for deployment in dynamic real-world environments such as nuclear digital twin monitoring and grid-edge fault detection. The research was published on arXiv with identifier 2604.16496v1.
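The core idea can be illustrated with a short sketch: compute the coefficient of variation (standard deviation divided by mean) of each neuron's inter-spike intervals, then protect neurons whose firing is regular (low CV). The function names and the CV threshold of 0.5 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals for one neuron."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    if len(isis) < 2 or isis.mean() == 0:
        return float("inf")  # too few spikes to judge regularity
    return float(isis.std() / isis.mean())

def protection_mask(spike_trains, cv_threshold=0.5):
    """Mark neurons with regular firing (low CV) as protected.

    cv_threshold is an assumed hyperparameter; the paper's actual
    protection rule may differ.
    """
    return [isi_cv(train) < cv_threshold for train in spike_trains]

# A perfectly regular train (CV = 0) is protected; an irregular one is not.
regular = [0, 10, 20, 30, 40, 50]
irregular = [0, 2, 19, 23, 48, 50]
mask = protection_mask([regular, irregular])
```

In a full continual-learning loop, the mask would gate synaptic updates: weights of protected neurons are frozen (or heavily regularized) while the rest remain plastic for the new task.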
Key facts
- ISI-CV is the first gradient-free synaptic importance metric for SNN continual learning
- The method uses Coefficient of Variation of Inter-Spike Intervals to identify important neurons
- Neurons with regular firing patterns are protected from overwriting
- Neurons with irregular firing patterns are permitted to adapt freely
- The technique requires only spike time counters and integer arithmetic
- The required operations are natively supported on neuromorphic chips
- Existing methods like EWC and SI rely on gradient computation
- Continual learning allows neural networks to acquire new tasks without forgetting prior knowledge
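The "spike time counters and integer arithmetic" fact above implies the CV test can be decided without floating point. One way to do this, sketched below under assumptions (the threshold ratio and function name are illustrative), is to compare squared quantities: CV² = (n·Σd² − (Σd)²)/(Σd)², so CV < a/b exactly when b²·(n·Σd² − (Σd)²) < a²·(Σd)².

```python
def regular_firing_int(spike_counter_values, cv_num=1, cv_den=2):
    """Decide firing regularity using only integer arithmetic.

    spike_counter_values: sorted integer spike times, as read from
    hardware spike time counters. Tests whether the ISI coefficient
    of variation is below cv_num/cv_den (assumed default: 1/2)
    without any division or floats.
    """
    isis = [b - a for a, b in zip(spike_counter_values,
                                  spike_counter_values[1:])]
    n = len(isis)
    s = sum(isis)            # running sum of ISIs
    sq = sum(d * d for d in isis)  # running sum of squared ISIs
    # CV < cv_num/cv_den  <=>  cv_den^2 * (n*sq - s^2) < cv_num^2 * s^2
    # With no spikes (s == 0), both sides are 0 and the neuron is
    # treated as irregular (unprotected).
    return cv_den * cv_den * (n * sq - s * s) < cv_num * cv_num * s * s
```

Only counters, sums, multiplications, and a comparison are used, which is consistent with the integer-only constraint stated above.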
Entities
Institutions
- arXiv