Academic Paper

Floating-Point Approximation Enabling Cost-Effective and High-Precision Digital Implementation of FitzHugh-Nagumo Neural Networks
Document Type
Periodical
Source
IEEE Transactions on Biomedical Circuits and Systems, 18(2):347-360, Apr. 2024
Subject
Bioengineering
Components, Circuits, Devices and Systems
Neurons
Computational modeling
Hardware
Mathematical models
Biological system modeling
Biology
Brain modeling
FitzHugh-Nagumo neuron
floating-point approximation algorithm
circular neural network
digital implementation
Language
English
ISSN
1932-4545
1940-9990
Abstract
The study of neuron interactions and their hardware implementation are crucial research directions in neuroscience, particularly for developing large-scale biological neural networks. The FitzHugh-Nagumo (FHN) model is a popular neuron model with high biological plausibility, but its complexity makes it difficult to apply at scale. This paper presents a cost-saving, precision-improving approximation algorithm for the digital implementation of the FHN model. By converting the computational data into floating-point numbers, the original multiplications are replaced by adding the floating-point exponent parts and fitting the mantissa parts with piecewise-linear segments. The hardware implementation therefore requires only shifters and adders, greatly reducing resource overhead. Implementing FHN neurons with this approximate calculation on an FPGA reduces the normalized root mean square error (RMSE) to 3.5% of the state of the art (SOTA) while improving the performance-overhead ratio by a factor of 1.09. Compared to implementations based on approximate multipliers, the proposed method achieves a 20% reduction in error at the cost of a 2.8% increase in overhead. The model gains additional biological properties compared to the LIF model while reducing the deployable scale by only 9%. Furthermore, nine coupled circular networks, each with eight nodes and directional diffusion, were implemented in hardware to demonstrate the algorithm's effectiveness on neural networks; their error decreased to 60% of that of the SOTA single neuron. This hardware-friendly algorithm enables low-cost, high-precision hardware simulation, providing a novel perspective for studying large-scale, biologically plausible neural networks.
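The abstract's core idea — replacing a multiplication by adding floating-point exponents and approximating the mantissa product with a piecewise-linear fit — can be illustrated with a minimal sketch. The snippet below uses a Mitchell-style single-segment linear approximation of the mantissa product, which is one well-known shift-and-add scheme; the paper's actual piecewise-linear fit and segment count are not given in this record, so this is an assumption for illustration only, restricted to positive operands.

```python
import math

def approx_mul(a: float, b: float) -> float:
    """Approximate a*b for positive floats using only exponent addition
    and a linear fit of the mantissa product (shift-and-add friendly).
    Mitchell-style sketch, not the paper's exact piecewise-linear fit."""
    assert a > 0 and b > 0, "sketch handles positive operands only"
    # frexp decomposes x = m * 2**e with mantissa m in [0.5, 1).
    m1, e1 = math.frexp(a)
    m2, e2 = math.frexp(b)
    # Rescale mantissas to [1, 2): m = 1 + f, fractional part f in [0, 1).
    f1, f2 = 2 * m1 - 1, 2 * m2 - 1
    e = (e1 - 1) + (e2 - 1)      # exponent parts are simply added
    s = f1 + f2
    # Linear mantissa approximation: (1+f1)(1+f2) ~ 1 + f1 + f2,
    # with a carry into the exponent when f1 + f2 >= 1.
    if s < 1.0:
        mant = 1.0 + s
    else:
        mant = s                 # 2^s ~ 2*(1 + (s-1)) = 2*s
        e += 1
    # ldexp(mant, e) = mant * 2**e, i.e. a barrel shift in hardware.
    return math.ldexp(mant, e)
```

For example, `approx_mul(3.0, 5.0)` yields 14.0 against an exact product of 15.0; powers of two multiply exactly because their fractional parts are zero. In hardware, the exponent addition and the final `ldexp` map to adders and shifters, which is what lets the design avoid full multipliers.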