Academic Paper

Training and Inference using Approximate Floating-Point Arithmetic for Energy Efficient Spiking Neural Network Processors
Document Type
Conference
Source
2021 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1-2, Jan. 2021
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Fields, Waves and Electromagnetics
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Training
Program processors
Systematics
Energy efficiency
Adders
Biological neural networks
Floating-point arithmetic
spiking neural network (SNN)
leaky integrate-and-fire (LIF) neuron
approximate adder
floating-point arithmetic
Language
English
Abstract
This paper presents a systematic analysis of spiking neural network (SNN) performance at reduced computation precision using approximate adders. We propose an IEEE 754-based approximate floating-point adder applied to leaky integrate-and-fire (LIF) neuron-based SNN operation for both training and inference. Experimental results on a two-layer SNN for the MNIST handwritten digit recognition application show that a 4-bit exact mantissa adder combined with a 19-bit lower-part OR adder (LOA) approximation, instead of a 23-bit full-precision mantissa adder, can be exploited while maintaining good classification accuracy. When the LOA is adopted as the mantissa adder, power and energy savings of up to 74.1% and 96.5%, respectively, are achieved.
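The abstract's key idea is a lower-part OR adder (LOA) applied to the 23-bit single-precision mantissa: the 19 least significant bits are approximated with a bitwise OR (no carry propagation) while only the 4 most significant bits are added exactly. The sketch below illustrates that general LOA structure; it is not the authors' implementation. The function name loa_add, the specific carry-generation rule (AND of the operands' top approximate bits, a common LOA variant), and the example operands are assumptions for illustration only.

```python
def loa_add(a: int, b: int, width: int = 23, approx_bits: int = 19) -> int:
    """Lower-part OR adder (LOA) sketch for a `width`-bit mantissa.

    The low `approx_bits` bits are bitwise-ORed (approximate, no carry chain);
    the remaining high bits are added exactly. The carry into the exact part
    is taken as the AND of the operands' most significant approximate bits
    (a common LOA variant; an assumption here, not taken from the paper).
    """
    mask_low = (1 << approx_bits) - 1
    a_low, b_low = a & mask_low, b & mask_low
    a_high, b_high = a >> approx_bits, b >> approx_bits

    low = a_low | b_low                                        # approximate lower part
    carry = (a_low >> (approx_bits - 1)) & (b_low >> (approx_bits - 1)) & 1
    high = a_high + b_high + carry                              # exact upper part (4 bits here)

    # Keep width + 1 bits to retain the carry-out of the exact part.
    return ((high << approx_bits) | low) & ((1 << (width + 1)) - 1)


if __name__ == "__main__":
    # Two illustrative 23-bit mantissa operands (4 exact MSBs + 19 approximate LSBs).
    a = 0b101_1010_1100_1111_0000_1010
    b = 0b010_0111_0011_0001_1111_0101
    print(f"exact: {a + b:024b}")
    print(f"LOA  : {loa_add(a, b):024b}")
```

Because the OR operation never propagates carries within the lower 19 bits, the error it introduces is bounded to the low-order part of the mantissa, which is why accuracy on tasks such as MNIST classification can be largely preserved while the carry-chain hardware (and its power) is removed.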