Academic Article

A 1.15-TOPS 6.57-TOPS/W Neural Network Processor for Multi-Scale Object Detection With Reduced Convolutional Operations
Document Type
Periodical
Source
IEEE Journal of Selected Topics in Signal Processing, 14(4):634-645, May 2020
Subject
Signal Processing and Analysis
Object detection
Convolution
Bandwidth
Deconvolution
Automobiles
Feature extraction
Random access memory
Automated driving
Convolutional neural network
Multi-scale object detection
Language
English
ISSN
1932-4553 (print)
1941-0484 (electronic)
Abstract
For automated driving vehicles, we present a 40-nm dedicated object detection processor that uses only three operations: 3 × 3 convolution, 1 × 1 convolution, and 4 × 4 deconvolution. Multi-scale object detection with high recognition accuracy is achieved through deconvolution and feature concatenation. The feature-map input memory is 8 bits wide and the input multipliers have 8-bit precision, whereas the partial-sum memory is 16 bits wide to suppress detection accuracy deterioration in a 1024-channel layer of the target network. This fixed-point precision reduces both the external memory bandwidth and the internal memory capacity. Optimized parallelization across input and output channels reduces the external memory bandwidth to 0.062 billion accesses per 1280 × 384 image with an internal memory capacity of 400 kB. The detection error is 1.9% of that obtained with single-precision floating point. The maximum operating frequency is 500 MHz at a supply voltage of 1 V, the peak performance is 1.15 TOPS, and the maximum energy efficiency is 6.57 TOPS/W at 174 MHz and 0.6 V.
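
The fixed-point scheme summarized in the abstract (8-bit feature-map inputs and multipliers, 16-bit partial-sum memory) can be illustrated with a short sketch. The Python/NumPy code below is a minimal illustration only, not the processor's implementation; the tensor shapes, valid padding, the int32 working accumulator, and the saturating write-back to the 16-bit partial-sum memory are assumptions made for the example.

import numpy as np

def conv3x3_int8(fmap_i8, weights_i8):
    """3x3 convolution with int8 inputs/weights and int16 partial sums.

    fmap_i8:    (C_in, H, W) int8 input feature map
    weights_i8: (C_out, C_in, 3, 3) int8 kernels
    returns:    (C_out, H-2, W-2) int16 partial sums (valid padding)
    """
    c_in, h, w = fmap_i8.shape
    c_out = weights_i8.shape[0]
    out = np.zeros((c_out, h - 2, w - 2), dtype=np.int16)
    for co in range(c_out):
        acc = np.zeros((h - 2, w - 2), dtype=np.int32)  # wide working accumulator
        for ci in range(c_in):
            for ky in range(3):
                for kx in range(3):
                    patch = fmap_i8[ci, ky:ky + h - 2, kx:kx + w - 2].astype(np.int32)
                    acc += patch * int(weights_i8[co, ci, ky, kx])
        # Write back to the 16-bit partial-sum memory with saturation
        # (an assumption; the abstract only states the 16-bit width).
        out[co] = np.clip(acc, -32768, 32767).astype(np.int16)
    return out

# Example: a random 16-channel 8x8 tile with 32 output channels.
rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=(16, 8, 8), dtype=np.int8)
w = rng.integers(-128, 128, size=(32, 16, 3, 3), dtype=np.int8)
psum = conv3x3_int8(x, w)   # shape (32, 6, 6), dtype int16

Keeping the multiplications at 8-bit precision while holding partial sums in a wider 16-bit memory reflects the trade-off described in the abstract: narrow operands reduce external bandwidth and internal storage, while the wider partial-sum path limits accuracy loss in the deep, 1024-channel layer.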