Academic paper

ASIP Accelerator for LUT-based Neural Networks Inference
Document Type
Conference
Source
2022 20th IEEE Interregional NEWCAS Conference (NEWCAS), pp. 524-528, Jun. 2022
Subject
Components, Circuits, Devices and Systems
Signal Processing and Analysis
Energy consumption
Costs
Instruction sets
Neurons
Computer architecture
Throughput
Hardware
DNN
FPGA
ASIP Processor
LUT-based Neural Networks
Decision Trees
Language
English
Abstract
Binarized Neural Networks (BNNs) offer the promise of low power and high throughput, but this is difficult to achieve on regular processors. A considerable amount of research has therefore been devoted to mapping BNNs onto specialized hardware, especially FPGAs, setting aside the flexibility of instruction-set processors. This paper introduces a configurable VLIW processor with a specialized instruction set that computes the inference of Look-Up-Table (LUT) based artificial binary neurons in a single clock cycle. Our experiments show a throughput increase of 2994× compared to inference on the base processor.
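To illustrate the operation the specialized instruction accelerates, the following is a minimal software sketch of LUT-based binary neuron inference. It is not the paper's instruction set or architecture; the function name, data layout, and the 3-input majority example are assumptions for illustration. A K-input binary neuron is stored as a 2**K-entry truth table, and inference packs the K binary inputs into an index and reads a single bit from the table.

    # Minimal sketch (assumption: not the paper's actual ISA or data layout).
    def lut_neuron_infer(lut_bits: int, inputs: list[int]) -> int:
        """Evaluate one LUT neuron: bit i of lut_bits is the output for input pattern i."""
        index = 0
        for i, x in enumerate(inputs):
            index |= (x & 1) << i        # pack binary inputs into a LUT address
        return (lut_bits >> index) & 1   # one table lookup per neuron

    # Example: a 3-input majority function as an 8-entry truth table.
    # Output is 1 for patterns 3 (011), 5 (101), 6 (110), 7 (111) -> 0b11101000 = 0xE8.
    majority_lut = 0xE8
    print(lut_neuron_infer(majority_lut, [1, 0, 1]))  # -> 1

On a general-purpose processor this lookup costs several instructions per neuron; the point of the paper's specialized instruction is to collapse one such neuron evaluation into a single clock cycle.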