Academic Article

A Masked Hardware Accelerator for Feed-Forward Neural Networks With Fixed-Point Arithmetic
Document Type
Periodical
Source
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 32(2):231-244, Feb. 2024
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Artificial neural networks
Hardware acceleration
Probes
Software
Power demand
Wires
Training
Countermeasure
hardware
masking
neural network (NN) accelerator
side-channel analysis (SCA)
Language
English
ISSN
1063-8210
1557-9999
Abstract
Neural network (NN) execution on resource-constrained edge devices is increasing. Commonly, hardware accelerators are introduced in small devices to support the execution of NNs. However, an attacker can often gain physical access to edge devices, so side-channel attacks pose a potential threat of extracting valuable information about the NN. To keep the network secret and protect it from extraction, countermeasures are required. In this article, we propose a masked hardware accelerator for feed-forward NNs that uses fixed-point arithmetic and is protected against side-channel analysis (SCA). We adopt an existing arithmetic masking scheme and improve it to prevent incorrect results. Moreover, we transfer the scheme to the hardware layer using the glitch-extended probing model and demonstrate the security of the individual modules. To show the effectiveness of the masked design, we implement it on an FPGA and measure its power consumption. The results show that, with two million measurements, no secret information is leaked according to a $t$-test. In addition, we compare our accelerator with a masked software implementation and other hardware designs. The comparison indicates that our accelerator is up to 38 times faster than software and improves throughput by a factor of about 4.1 over other masked hardware accelerators.
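The abstract refers to an arithmetic masking scheme for fixed-point values. As a rough illustration of the general idea (not the paper's specific scheme), a minimal sketch of first-order arithmetic masking over a 16-bit fixed-point word is shown below; the word width and helper names are assumptions for the example:

```python
import secrets

MOD = 1 << 16  # assumed 16-bit fixed-point word; the paper's width may differ


def mask(x):
    # Split secret x into two random shares that sum to x modulo 2^16.
    r = secrets.randbelow(MOD)
    return ((x - r) % MOD, r)


def unmask(shares):
    # Recombine the shares to recover the secret value.
    return (shares[0] + shares[1]) % MOD


def masked_add(a, b):
    # Addition is linear over Z_{2^16}, so it can be done share-wise
    # without ever recombining the secret operands.
    return ((a[0] + b[0]) % MOD, (a[1] + b[1]) % MOD)
```

Each share on its own is uniformly random, so a first-order probe on a single share reveals nothing about the secret; non-linear operations (e.g. the multiplications in a neuron) require more care, which is where masked accelerator designs invest most of their effort.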