Academic Paper

A 1.625 TOPS/W SOC for Deep CNN Training and Inference in 28nm CMOS
Document Type
Conference
Source
ESSDERC 2021 - IEEE 51st European Solid-State Device Research Conference (ESSDERC), pp. 107-110, Sep. 2021
Subject
Components, Circuits, Devices and Systems
Photonics and Electrooptics
Training
Convolution
Conferences
Prototypes
Europe
Energy efficiency
System-on-chip
machine learning
low-precision neural network
SOC
AI accelerator
Abstract
This work presents a FloatSD8-based system on chip (SOC) for both the inference and the training of convolutional neural networks (CNNs). A novel number format (FloatSD8) is employed to reduce the computational complexity of the convolution circuit. By co-designing the data representation and the circuit, we demonstrate that the AI SOC can achieve high convolution performance and optimal energy efficiency without sacrificing training quality. At its normal operating condition (200 MHz), the AI SOC prototype, fabricated in 28nm CMOS, achieves a peak performance of 0.69 TFLOPS and an energy efficiency of 1.625 TOPS/W.
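The abstract does not specify the FloatSD8 encoding itself, but the general idea behind signed-digit number formats is that a weight is stored as a few signed power-of-two digits, so each multiply reduces to shifts and adds. The following is a minimal illustrative sketch of that principle (the `sd_multiply` helper and its digit layout are hypothetical, not the paper's actual FloatSD8 specification):

```python
# Illustrative sketch of signed-digit multiplication, NOT the paper's
# FloatSD8 spec: a weight expressed as signed power-of-two digits lets
# a multiply be computed with shifts and adds only, which is the kind
# of datapath simplification such formats enable.

def sd_multiply(x: int, digits: list[tuple[int, int]]) -> int:
    """Multiply x by a weight given as (sign, exponent) digit pairs.

    digits: list of (sign, exponent) pairs, sign in {+1, -1}, so the
    weight is sum(sign * 2**exponent). The product then needs only
    bit shifts and additions instead of a full multiplier.
    """
    return sum(sign * (x << exp) for sign, exp in digits)

# Example: weight 6 = +2**3 - 2**1, so 5 * 6 costs two shifts, one add:
print(sd_multiply(5, [(+1, 3), (-1, 1)]))  # 5*8 - 5*2 = 30
```

Restricting each weight to a small number of non-zero signed digits is what trades a general multiplier for cheap shift-add hardware, at the cost of quantizing the weight values.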