Brain-Inspired Continual Learning via Adaptive Neural Pathway Reorganization
Researchers have introduced SOR-SNN (Self-Organizing Regulation Spiking Neural Network), a brain-inspired continual learning algorithm for spiking neural networks (SNNs). SOR-SNN uses Self-Organizing Regulation networks to adaptively reorganize a single, fixed-size SNN into rich, sparse neural pathways, mirroring the human brain's ability to self-organize pathways as it gradually masters many cognitive tasks. This addresses a key limitation of existing continual learning algorithms for both deep artificial neural networks and SNNs: they often cannot self-regulate limited resources, so performance degrades and energy consumption grows as tasks accumulate. Across a range of continual learning tasks, from simple child-like tasks to complex challenges such as generalized CIFAR benchmarks, SOR-SNN consistently shows advantages in performance, energy efficiency, and memory capacity. The paper is available on arXiv under identifier 2309.09550v4.
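The core idea, a fixed parameter pool carved into per-task sparse pathways, can be illustrated with a toy sketch. This is a hypothetical simplification for intuition, not the authors' SOR-SNN implementation: the class names, the random pathway selection, and the freezing rule are all assumptions standing in for the paper's learned Self-Organizing Regulation networks.

```python
import random

class PathwayAllocator:
    """Toy sketch: task-specific sparse pathways over a shared, fixed-size network.

    Hypothetical illustration (not the paper's method): each new task claims
    a sparse subset of free connections; claimed connections are frozen so
    later tasks cannot overwrite them, limiting forgetting without growing
    the network.
    """

    def __init__(self, n_connections, sparsity=0.2, seed=0):
        self.n = n_connections
        self.sparsity = sparsity          # fraction of the pool each task may use
        self.masks = {}                   # task_id -> set of active connection indices
        self.frozen = set()               # indices already claimed by earlier tasks
        self.rng = random.Random(seed)

    def allocate_pathway(self, task_id):
        """Select a sparse subset of still-free connections for a new task."""
        free = [i for i in range(self.n) if i not in self.frozen]
        k = max(1, int(self.sparsity * self.n))
        chosen = set(self.rng.sample(free, min(k, len(free))))
        self.masks[task_id] = chosen
        self.frozen |= chosen             # freeze the pathway for this task
        return chosen

net = PathwayAllocator(100, sparsity=0.2, seed=1)
path_a = net.allocate_pathway("task_a")
path_b = net.allocate_pathway("task_b")
print(len(path_a), len(path_b), path_a.isdisjoint(path_b))
```

In the actual paper the pathway selection is learned and adaptive rather than random, which is what lets the same limited resources be reorganized rather than merely partitioned.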
Key facts
- The algorithm is called SOR-SNN (Self-Organizing Regulation Spiking Neural Network).
- It employs Self-Organizing Regulation networks to reorganize neural pathways.
- The method is inspired by the human brain's ability to self-organize sparse neural pathways.
- It addresses the failure of existing continual learning methods to self-regulate limited resources.
- SOR-SNN shows superiority in performance, energy consumption, and memory capacity.
- Evaluated tasks range from simple, child-like tasks to complex ones, including generalized CIFAR benchmarks.
- The paper is published on arXiv with ID 2309.09550v4.
- The approach enables incremental learning without performance degradation or rising energy consumption.