Academic Article

An Energy Efficient Soft SIMD Microarchitecture and Its Application on Quantized CNNs
Document Type
Periodical
Source
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 32(6):1018-1031, Jun. 2024
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Microarchitecture
Quantization (signal)
Hardware
Arithmetic
Encoding
Software
Multiplexing
Canonical signed digit (CSD) coding
data-level parallelism
energy efficient computing
heterogeneously quantized (HQ) convolutional neural networks (CNNs)
software-defined single instruction multiple data (Soft SIMD)
Language
English
ISSN
1063-8210
1557-9999
Abstract
The ever-increasing computational complexity and energy consumption of today’s applications, such as machine learning (ML) algorithms, not only strain the capabilities of the underlying hardware but also significantly restrict their wide deployment at the edge. To address these challenges, novel architectural solutions are required that leverage opportunities exposed by the algorithms, e.g., robustness to small-bitwidth operand quantization and high intrinsic data-level parallelism. However, traditional hardware single instruction multiple data (Hard SIMD) architectures support only a small set of operand bitwidths, limiting the achievable performance improvement. To fill this gap, this manuscript introduces a novel pipelined processor microarchitecture for arithmetic computing based on the software-defined SIMD (Soft SIMD) paradigm, which can define arbitrary SIMD modes through control instructions at run-time. The microarchitecture is optimized for parallel fine-grained fixed-point arithmetic, such as shift/add operations. It can also efficiently execute sequential shift-add-based multiplication over SIMD subwords, thanks to zero-skipping and canonical signed digit (CSD) coding. A lightweight repacking unit allows the subword bitwidth to be changed dynamically. These features are implemented within a tight energy and area budget. An energy consumption model is established through post-synthesis analysis for performance assessment. We select heterogeneously quantized (HQ) convolutional neural networks (CNNs) from the ML domain as benchmarks and map them onto our microarchitecture. Experimental results show that our approach dramatically outperforms a traditional Hard SIMD multiplier-adder in area and energy requirements. In particular, our microarchitecture occupies up to 59.9% less area than a Hard SIMD design that supports fewer SIMD bitwidths, while consuming up to 50.1% less energy on average when executing HQ CNNs.
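The shift-add multiplication with CSD coding and zero-skipping mentioned in the abstract can be illustrated with a small software model. This is only a sketch of the general technique (CSD recoding of a constant, then accumulating shifted operands while skipping zero digits), not the paper's actual hardware datapath; the function names are illustrative.

```python
def to_csd(n):
    """Canonical signed digit (CSD) encoding of a non-negative integer.

    Returns digits in {-1, 0, +1}, least significant first, with the
    CSD property that no two adjacent digits are nonzero. This minimizes
    the number of nonzero digits, i.e., the number of add/subtract steps.
    """
    digits = []
    while n != 0:
        if n & 1:
            # Pick +1 or -1 so that (n - d) is divisible by 4,
            # which forces the next digit to be zero.
            d = 2 - (n % 4)
        else:
            d = 0
        digits.append(d)
        n = (n - d) >> 1
    return digits


def csd_shift_add_multiply(x, k):
    """Multiply x by a non-negative constant k using only shifts and
    adds/subtracts over k's CSD digits, skipping zero digits."""
    acc = 0
    for shift, d in enumerate(to_csd(k)):
        if d:  # zero-skipping: zero digits cost no add/subtract cycle
            acc += d * (x << shift)
    return acc
```

For example, 7 recodes as 8 - 1 (CSD digits [-1, 0, 0, 1]), so multiplying by 7 needs one subtraction and one addition instead of the three additions that plain binary shift-add would require; combined with zero-skipping, this is what makes sequential multiplication over narrow subwords cheap.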