Paper Details
Abstract
Spiking Neural Networks (SNNs) offer energy-efficient and biologically plausible alternatives to conventional Artificial Neural Networks (ANNs). The Spike-driven Transformer architecture integrates SNN efficiency with Transformer-based feature extraction, achieving competitive results in image classification. However, its reliance on the Leaky Integrate-and-Fire (LIF) neuron introduces computational overhead due to the leak mechanism. This work investigates two alternative neuron models, IF Hard Reset and IF Soft Reset, which remove the leak dynamics to improve efficiency. We conduct systematic evaluations on the CIFAR-10 dataset, analyzing accuracy, inference speed, and spike activity patterns across the different neuron models. Experimental results show that IF Soft Reset achieves the highest accuracy (94.53%) and fastest inference speed (1323.4 FPS, 12.09 ms latency), outperforming IF Hard Reset (94.44%, 1244.2 FPS) and LIF (94.34%, 1161.2 FPS). The advantage of IF Soft Reset is attributed to its gradual reset behavior, which preserves residual excitation above the firing threshold and enhances temporal processing. These findings offer practical design guidelines for deploying efficient spike-based Transformers in resource-constrained environments.
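To make the distinction between the three neuron models concrete, the following is a minimal NumPy sketch of one discrete-time membrane-update step for each, using the commonly seen formulations (e.g., a SpikingJelly-style LIF with membrane time constant tau). The function names and the default values for tau, v_th, and v_reset are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """LIF with hard reset: the leak term costs an extra divide
    (or multiply) per neuron per timestep."""
    v = v + (x - (v - v_reset)) / tau    # leaky integration toward v_reset
    spike = (v >= v_th).astype(v.dtype)  # fire where threshold is crossed
    v = np.where(spike > 0, v_reset, v)  # hard reset: clamp fired neurons
    return spike, v

def if_hard_reset_step(v, x, v_th=1.0, v_reset=0.0):
    """IF Hard Reset: no leak; potential is clamped to v_reset on a spike."""
    v = v + x                            # pure integration, no leak term
    spike = (v >= v_th).astype(v.dtype)
    v = np.where(spike > 0, v_reset, v)  # any charge above v_th is discarded
    return spike, v

def if_soft_reset_step(v, x, v_th=1.0):
    """IF Soft Reset: no leak; the threshold is subtracted on a spike."""
    v = v + x
    spike = (v >= v_th).astype(v.dtype)
    v = v - spike * v_th                 # surplus above v_th carries over
    return spike, v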