BadSNN Backdoor Attack Exploits Spiking Neuron Vulnerabilities
A new study proposes BadSNN, a backdoor attack that targets Spiking Neural Networks (SNNs) by exploiting the sensitivity of spiking neurons to their hyperparameter settings. Unlike conventional attacks on Deep Neural Networks (DNNs), BadSNN leverages the Leaky Integrate-and-Fire (LIF) neuron model unique to SNNs, whose dynamics are governed by hyperparameters such as the membrane potential threshold and the membrane time constant. The attack poisons the training dataset with malicious triggers so that the trained SNN behaves in an attacker-defined manner whenever a trigger is present, while performing normally on clean inputs. Published on arXiv (2602.07200), the research highlights underexplored security risks in energy-efficient SNNs, which serve as biologically plausible alternatives to DNNs. The paper demonstrates how adversaries can manipulate SNN-specific characteristics, raising concerns for deployments in neuromorphic computing and edge AI.
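To make the exploited hyperparameters concrete, here is a minimal discrete-time LIF neuron in Python/NumPy. This is an illustrative sketch, not code from the paper: the function name lif_neuron and all constants are assumptions. It shows how the membrane time constant (tau) and the firing threshold (v_threshold) jointly determine when spikes occur, so varying either one changes the output spike train.

```python
import numpy as np

def lif_neuron(input_current, tau=20.0, v_threshold=1.0, v_reset=0.0, dt=1.0):
    """Minimal leaky integrate-and-fire neuron (illustrative, not from the paper).

    tau         -- membrane time constant: larger values mean a slower leak
    v_threshold -- membrane potential threshold that triggers a spike
    These are the LIF hyperparameters the attack is described as exploiting.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        # Leaky integration: the potential decays toward rest and is driven by input.
        v = v + (dt / tau) * (-(v - v_reset) + i)
        if v >= v_threshold:
            spikes.append(1)  # fire a spike ...
            v = v_reset       # ... and reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# Same constant input current, two different time constants:
current = np.full(100, 1.5)
print(lif_neuron(current, tau=20.0).sum())  # slow charging -> few spikes (4)
print(lif_neuron(current, tau=5.0).sum())   # fast charging -> many spikes (20)
```

The same input yields very different spike counts under the two settings, which is the kind of hyperparameter sensitivity BadSNN is described as exploiting.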
Key facts
- BadSNN is a backdoor attack on Spiking Neural Networks.
- It exploits hyperparameter variations in spiking neurons.
- The attack uses the Leaky Integrate-and-Fire (LIF) neuron model.
- Hyperparameters targeted include membrane potential threshold and membrane time constant.
- The attack poisons training datasets with malicious triggers (see the poisoning sketch after this list).
- SNNs are energy-efficient counterparts of Deep Neural Networks (DNNs).
- The research was published on arXiv with ID 2602.07200.
- The paper addresses security vulnerabilities of SNNs that remain underexplored.
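The poisoning step referenced above follows the familiar stamp-and-relabel backdoor recipe. The sketch below is a generic, hypothetical NumPy illustration: the function poison_dataset, its parameters, and the corner-patch trigger are assumptions made for clarity, and the actual trigger design BadSNN uses for spiking inputs may differ.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05,
                   patch_size=3, patch_value=1.0, seed=0):
    """Generic backdoor poisoning (illustrative; not BadSNN's exact method):
    stamp a small trigger patch onto a fraction of the training images and
    relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger into the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    labels[idx] = target_class
    return images, labels, idx

# Toy usage on fake 28x28 grayscale data (hypothetical, for illustration only).
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_p, y_p, poisoned = poison_dataset(X, y, target_class=7)
print(f"poisoned {len(poisoned)} of {len(X)} samples")
```

Under a successful attack, a model trained on the poisoned set would classify any input carrying the corner patch as the target class while retaining normal accuracy on clean data.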