Academic Paper

Synchronized Analog Capacitor Arrays for Parallel Convolutional Neural Network Training
Document Type
Conference
Source
2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 387-390, Aug. 2020
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Fields, Waves and Electromagnetics
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Training
Convolution
Circuits and systems
Capacitors
Neural networks
Parallel processing
Synchronization
Multi-layer neural network
Analog memory
Language
English
ISSN
1558-3899
Abstract
We report novel Synchronized Analog Capacitor Arrays (SACA) to accelerate Convolutional Neural Network (CNN) training. The synchronized cross-point capacitor arrays, functioning as replicated weight kernels, train on image patches in parallel. Parallel CNN training is challenging in analog arrays because the weights of the replicated kernels diverge during training. Capacitor arrays solve this problem by charge sharing between correlated capacitors in the kernel replicas to keep them synchronized. Using SACA, we show that CNN training can be accelerated by >100x compared with other analog accelerators.
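To make the synchronization idea in the abstract concrete, the following is a minimal software sketch, not the paper's circuit: several replicas of one convolution kernel are updated in parallel on different image patches, and the "charge sharing" between corresponding capacitors is approximated by averaging the corresponding weights across replicas after each update. The toy loss, learning rate, and array sizes are illustrative assumptions.

```python
# Sketch only: charge sharing across kernel replicas modeled as weight averaging.
import numpy as np

rng = np.random.default_rng(0)
n_replicas, k = 4, 3                          # 4 kernel replicas, 3x3 kernel (assumed sizes)
target = rng.normal(size=(k, k))              # "true" kernel the replicas should converge to
replicas = np.repeat(rng.normal(size=(1, k, k)), n_replicas, axis=0)

def patch_gradient(w, patch):
    """Toy per-patch gradient of a least-squares response error (illustrative, not the paper's model)."""
    return (np.sum(w * patch) - np.sum(target * patch)) * patch

lr = 0.05
for step in range(200):
    patches = rng.normal(size=(n_replicas, k, k))      # each replica trains on its own patch
    for r in range(n_replicas):
        replicas[r] -= lr * patch_gradient(replicas[r], patches[r])
    # "Charge sharing": corresponding weights in all replicas are tied together,
    # modeled here as averaging them after every parallel update step.
    replicas[:] = replicas.mean(axis=0)

print("spread across replicas:", np.max(np.ptp(replicas, axis=0)))   # ~0: replicas stay synchronized
print("distance to target kernel:", np.linalg.norm(replicas[0] - target))
```

Without the averaging step, each replica follows its own stochastic gradient path and the copies drift apart; with it, all replicas hold identical weights after every update, which is the divergence problem the capacitor charge sharing is reported to solve in hardware.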