Academic Paper

Floating-Point Formats and Arithmetic for Highly Accurate Multi-Layer Perceptrons
Document Type
Conference
Source
2023 IEEE 23rd International Conference on Nanotechnology (NANO), pp. 587-591, Jul. 2023
Subject
Bioengineering
Components, Circuits, Devices and Systems
Engineered Materials, Dielectrics and Plasmas
Fields, Waves and Electromagnetics
General Topics for Engineers
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Training
Measurement
Simulation
Artificial neural networks
IEEE Standards
Hardware
Nanotechnology
Language
English
ISSN
1944-9380
Abstract
Data precision can significantly affect the accuracy and overhead metrics of hardware accelerators for applications such as artificial neural networks (ANNs). This paper evaluates the inference and training of multi-layer perceptrons (MLPs), in which the IEEE standard floating-point (FP) precisions (half, single, and double) are first applied individually and then compared with mixed-precision FP formats. Mixed-precision calculation is investigated for three critical propagation modules: activation functions, weight updates, and accumulation units. Compared with a plain low-precision format, the mixed-precision format prevents accuracy loss and the occurrence of overflow/underflow in the MLPs while potentially incurring less hardware overhead in terms of area and power. As multiply-accumulation is the dominant operation in current ANNs, a fully pipelined hardware implementation of the fused multiply-add units is proposed for the different IEEE FP formats to achieve a very high operating frequency.
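
As a rough illustration of the mixed-precision accumulation idea summarized in the abstract (a software sketch only, not the authors' hardware design), the following NumPy snippet contrasts a pure half-precision dot product with one that keeps the accumulator in single precision; the specific format pairing (binary16 products, binary32 accumulator) and the vector length are assumptions chosen for illustration.

import numpy as np

# Sketch of mixed-precision accumulation: multiply in a low-precision format
# (IEEE half, binary16) but accumulate in a higher-precision format (IEEE
# single, binary32) to limit the accuracy loss and overflow/underflow risk of
# pure low-precision arithmetic. Format choices here are illustrative assumptions.

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float16)   # activations in half precision
w = rng.standard_normal(4096).astype(np.float16)   # weights in half precision

# Pure half-precision accumulation: rounding error grows with vector length
acc_fp16 = np.float16(0.0)
for xi, wi in zip(x, w):
    acc_fp16 = np.float16(acc_fp16 + np.float16(xi * wi))

# Mixed precision: half-precision operands, single-precision accumulator
acc_mixed = np.float32(0.0)
for xi, wi in zip(x, w):
    acc_mixed += np.float32(xi) * np.float32(wi)

# Double-precision reference for comparison
ref = np.dot(x.astype(np.float64), w.astype(np.float64))

print(f"fp16 accumulation error : {abs(acc_fp16 - ref):.6f}")
print(f"mixed accumulation error: {abs(acc_mixed - ref):.6f}")

In hardware, the same effect is typically obtained by widening only the accumulator register of the multiply-accumulate (or fused multiply-add) datapath rather than the whole unit, which is why mixed precision can cost less area and power than uniformly raising the precision.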