Journal Article

Enhancing DNN Training Efficiency Via Dynamic Asymmetric Architecture
Document Type
Periodical
Source
IEEE Computer Architecture Letters, 22(1):49-52, Jan. 2023
Subject
Computing and Processing
Training
Computer architecture
Routing
Quantization (signal)
Throughput
Hardware
Computational modeling
Neural nets
approximation
dataflow architectures
Language
English
ISSN
1556-6056
1556-6064
2473-2575
Abstract
Deep neural networks (DNNs) require abundant multiply-and-accumulate (MAC) operations. Thanks to DNNs' ability to tolerate noise, some of the computational burden is commonly mitigated by quantization, that is, by using lower-precision floating-point operations. Quantization at layer granularity is the preferred approach, as it maps easily to commodity hardware. In this paper, we propose Dynamic Asymmetric Architecture (DAA), in which the micro-architecture decides at runtime what precision each MAC operation should use. We demonstrate a DAA with two data streams and a value-based controller that decides which data stream deserves the higher-precision resource. We evaluate this mechanism in terms of accuracy on a number of convolutional neural networks (CNNs) and demonstrate its feasibility on top of a systolic array. Our experimental analysis shows that DAA potentially achieves a 2x throughput improvement for ResNet-18 while saving 35% of the energy, with less than 0.5% degradation in accuracy.
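
The abstract does not specify the controller's exact policy, so the following Python sketch is only a hypothetical illustration of the routing idea: a value-based controller for two MAC streams that, at each step, gives the operand pair with the larger estimated magnitude the high-precision unit and computes the other pair in reduced precision. The names daa_dual_stream_mac and quantize, and the magnitude-comparison policy itself, are assumptions for illustration, not details taken from the paper.

import numpy as np

def quantize(x, bits=4):
    # Crude uniform quantization standing in for a low-precision MAC input.
    scale = 2.0 ** (bits - 1)
    return np.round(x * scale) / scale

def daa_dual_stream_mac(a0, b0, a1, b1, low_bits=4):
    # Accumulate two dot-product streams; per step, the value-based
    # controller routes the higher-magnitude operand pair to the
    # high-precision MAC and the other pair to the low-precision one.
    acc0, acc1 = 0.0, 0.0
    for x0, y0, x1, y1 in zip(a0, b0, a1, b1):
        if abs(x0 * y0) >= abs(x1 * y1):  # value-based routing decision
            acc0 += x0 * y0               # high-precision MAC
            acc1 += quantize(x1, low_bits) * quantize(y1, low_bits)
        else:
            acc0 += quantize(x0, low_bits) * quantize(y0, low_bits)
            acc1 += x1 * y1               # high-precision MAC
    return acc0, acc1

In the paper's setting this decision would be made per cycle by hardware on top of a systolic array; the sketch only mirrors the routing behavior at the algorithmic level.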