Parameter Reconstruction Algorithm for Spiking Neural Network Training
A new algorithm for training Spiking Neural Networks (SNNs) achieves globally optimal solutions by reconstructing parameters, overcoming the limitations of surrogate gradient methods. The approach extends convexification theory from parallel feedforward to parallel recurrent threshold networks, which include SNNs as a special case. The parameter reconstruction method shows consistent advantages across tasks, both standalone and combined with surrogate-gradient training. Ablation studies demonstrate data scalability and robustness to model configurations. The research is published on arXiv (2605.08022).
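To illustrate the limitation the paper targets, here is a minimal sketch of surrogate-gradient training for the spike nonlinearity. This is a common generic construction, not the paper's method; the sigmoid surrogate and the `beta` steepness parameter are illustrative assumptions.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    # Heaviside step: a spike (1.0) is emitted when the membrane
    # potential reaches the threshold. Its true derivative is zero
    # almost everywhere, so gradients cannot flow through it directly.
    return (v >= threshold).astype(np.float64)

def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
    # Backward pass replaces the zero-almost-everywhere derivative with
    # the derivative of a steep sigmoid (one common surrogate choice).
    # This mismatch between forward and backward is the approximation
    # error that accumulates across layers.
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

v = np.array([0.2, 0.9, 1.1, 2.0])
spikes = spike_forward(v)        # exact, non-differentiable forward
grads = spike_surrogate_grad(v)  # smooth stand-in used for backprop
```

The surrogate gradient is largest near the threshold and decays away from it, so parameters far from the spiking boundary receive almost no learning signal.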
Key facts
- Spiking Neural Networks (SNNs) are biologically plausible and energy-efficient alternatives to conventional ANNs.
- Training SNNs typically relies on surrogate gradients due to non-differentiable spike functions.
- Surrogate gradients introduce approximation errors that accumulate across layers.
- The work extends convexification of parallel feedforward threshold networks to parallel recurrent threshold networks.
- Parallel recurrent threshold networks subsume parallel SNNs as a structured special case.
- A parameter reconstruction algorithm for SNN training is proposed.
- The algorithm demonstrates advantages across various tasks as a standalone method and in combination with surrogate-gradient training.
- Ablations show data scalability and robustness to model configurations.
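The recurrent threshold structure that subsumes SNNs can be seen in a minimal leaky integrate-and-fire (LIF) neuron: the membrane potential recurs over time and passes through a hard threshold each step. This is a generic textbook sketch, not the paper's formulation; `decay` and `threshold` are illustrative parameters.

```python
def lif_simulate(inputs, decay=0.9, threshold=1.0):
    # Simulate one leaky integrate-and-fire neuron over discrete time.
    # The state v recurs across steps, and the thresholded spike output
    # feeds back via a hard reset -- a recurrent threshold network.
    v = 0.0
    spikes = []
    for x in inputs:
        v = decay * v + x                    # leaky recurrent integration
        s = 1.0 if v >= threshold else 0.0   # non-differentiable threshold
        v = v * (1.0 - s)                    # reset membrane after a spike
        spikes.append(s)
    return spikes

out = lif_simulate([0.5, 0.5, 0.5, 0.5])  # → [0.0, 0.0, 1.0, 0.0]
```

Because the spike output enters the next step's state, the network is recurrent even with constant input, which is why results for parallel recurrent threshold networks carry over to parallel SNNs as a special case.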