Academic Paper

Towards Effective Training of Robust Spiking Recurrent Neural Networks Under General Input Noise via Provable Analysis
Document Type
Conference
Source
2023 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pp. 1-9, Oct. 2023
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Engineering Profession
General Topics for Engineers
Signal Processing and Analysis
Training
Perturbation methods
Computational modeling
Linear programming
Robustness
Windows
Convolutional neural networks
Language
English
ISSN
1558-2434
Abstract
Recently, bio-inspired spiking neural networks (SNNs) with recurrent structures (SRNNs) have received increasing attention due to their appealing properties for energy-efficiently solving time-domain classification tasks. SRNNs are often executed in noisy environments on resource-constrained devices, which can greatly compromise their accuracy. Thus, one fundamental question that remains unanswered is whether a formal analysis under general input noise disturbances can be obtained to guarantee the robustness of SRNNs. Several studies have shown great promise by optimizing the bound over adversarial perturbations based on the Lipschitz continuity theorem, but most of these theoretical analyses are confined to convolutional neural networks (CNNs). In this work, we take a further step towards robust SRNN training via provable robustness analysis over input noise perturbations. We show that it is feasible to establish a bound analysis for evaluating the noise sensitivity of an SRNN by using the relation between the input current and the magnitude of the membrane potential change across a time window. Inspired by this theoretical analysis, we then propose a targeted penalty term in the objective function for training robust SRNNs. Experimental results show that our solution outperforms more complicated state-of-the-art methods on the commonly tested Fashion-MNIST and CIFAR-10 image classification datasets.
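The abstract's idea (input current drives the membrane potential over a time window, so bounding the weights bounds the effect of input noise, and a penalty on that bound can be added to the training loss) can be sketched minimally. The paper's exact bound and penalty are not given here, so the code below is an illustrative assumption: a leaky integrate-and-fire recurrent layer in NumPy, plus a hypothetical penalty built from the spectral norms of the input and recurrent weight matrices as a Lipschitz-style surrogate. All function names (`lif_srnn_forward`, `robustness_penalty`, `loss`) are invented for this sketch.

```python
import numpy as np

def lif_srnn_forward(x, W_in, W_rec, tau=2.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire (LIF) recurrent layer.

    x: (T, n_in) input currents over a time window of T steps.
    Returns the per-neuron spike counts accumulated over the window.
    """
    T, n_in = x.shape
    n = W_rec.shape[0]
    v = np.zeros(n)        # membrane potentials
    s = np.zeros(n)        # spikes from the previous step
    counts = np.zeros(n)
    for t in range(T):
        I = W_in @ x[t] + W_rec @ s      # input current at step t
        v = v * (1.0 - 1.0 / tau) + I    # leaky membrane integration
        s = (v >= v_th).astype(float)    # spike when threshold is crossed
        v = np.where(s > 0, 0.0, v)      # hard reset after a spike
        counts += s
    return counts

def robustness_penalty(W_in, W_rec):
    # Hypothetical Lipschitz-style surrogate: the spectral norms bound
    # how much a unit of input perturbation can move the membrane
    # potential per step as it accumulates across the time window.
    return np.linalg.norm(W_in, 2) + np.linalg.norm(W_rec, 2)

def loss(logits, target, W_in, W_rec, lam=0.1):
    # Cross-entropy on spike-count logits + the robustness penalty,
    # mirroring the "targeted penalty term in the objective" idea.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[target] + 1e-12) + lam * robustness_penalty(W_in, W_rec)
```

In a real training loop the penalty would be differentiated along with the task loss (e.g. via surrogate gradients for the spike nonlinearity); the sketch only shows how the noise-sensitivity bound enters the objective.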