Academic Article

A Synapse-Threshold Synergistic Learning Approach for Spiking Neural Networks
Document Type
Periodical
Source
IEEE Transactions on Cognitive and Developmental Systems, 16(2):544-558, Apr. 2024
Subject
Computing and Processing
Signal Processing and Analysis
Neurons
Training
Biology
Learning systems
Information processing
Membrane potentials
Biological neural networks
Joint decision framework (JDF)
spike threshold
spiking neural networks (SNNs)
synaptic plasticity
synergistic learning
Language
English
ISSN
2379-8920
2379-8939
Abstract
Spiking neural networks (SNNs) have demonstrated excellent capabilities in various intelligent scenarios. Most existing methods for training SNNs are based on the concept of synaptic plasticity; however, learning in the real brain also utilizes intrinsic nonsynaptic mechanisms of neurons. The spike threshold of biological neurons is a critical intrinsic neuronal feature that exhibits rich dynamics on a millisecond timescale and has been proposed as an underlying mechanism that facilitates neural information processing. In this study, we develop a novel synergistic learning approach that simultaneously trains synaptic weights and spike thresholds in SNNs. SNNs trained with synapse-threshold synergistic learning (STL-SNNs) achieve significantly better performance on various static and neuromorphic datasets than SNNs trained with either of the two degenerate single-learning models. During training, the synergistic learning approach optimizes neural thresholds, providing the network with stable signal transmission via appropriate firing rates. Further analysis indicates that STL-SNNs are robust to noisy data and exhibit low energy consumption for deep network structures. Additionally, the performance of STL-SNNs can be further improved by introducing a generalized joint decision framework (JDF). Overall, our findings indicate that biologically plausible synergies between synaptic and intrinsic nonsynaptic mechanisms may provide a promising approach for developing highly efficient SNN learning methods.
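To make the core idea of the abstract concrete, the sketch below shows one common way to co-train synaptic weights and a spike threshold in a leaky integrate-and-fire (LIF) layer: the threshold is declared as a learnable parameter and gradients flow through the spike via a surrogate gradient. This is an illustrative PyTorch sketch only; the class names, the rectangular surrogate, the per-layer (rather than per-neuron) threshold, and the hard reset are all assumptions and not the authors' STL-SNN implementation.

```python
# Illustrative sketch (not the paper's code): an LIF layer whose synaptic
# weights AND firing threshold are both trainable, with a surrogate gradient
# so the non-differentiable spike can be backpropagated through.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; rectangular surrogate gradient
    in the backward pass (a common choice; the paper may use another)."""

    @staticmethod
    def forward(ctx, v_minus_theta):
        ctx.save_for_backward(v_minus_theta)
        return (v_minus_theta > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_theta,) = ctx.saved_tensors
        # Pass gradient only within a window of width 1 around the threshold.
        surrogate = (v_minus_theta.abs() < 0.5).float()
        return grad_output * surrogate


class SynergisticLIFLayer(nn.Module):
    """LIF layer in which both the weight matrix and the spike threshold
    receive gradients, i.e. synapse-threshold co-training (hypothetical)."""

    def __init__(self, in_features, out_features, tau=2.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)      # synaptic weights
        self.threshold = nn.Parameter(torch.tensor(1.0))     # trainable spike threshold
        self.decay = 1.0 - 1.0 / tau                          # membrane leak factor

    def forward(self, x_seq):
        # x_seq: (T, batch, in_features) input sequence over T time steps
        v = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x_t in x_seq:
            v = self.decay * v + self.fc(x_t)                 # leaky integration
            s = SurrogateSpike.apply(v - self.threshold)      # compare to learned threshold
            v = v * (1.0 - s)                                 # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)
```

Because `self.threshold` is an `nn.Parameter`, any standard optimizer updates it alongside the weights, which is the "synergistic" aspect described in the abstract; freezing it recovers a conventional weight-only (single-learning) baseline.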