Academic paper

Deep learning acceleration in 14nm CMOS compatible ReRAM array: device, material and algorithm co-optimization
Document Type
Conference
Source
2022 International Electron Devices Meeting (IEDM), pp. 33.7.1-33.7.4, Dec. 2022
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Fields, Waves and Electromagnetics
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Training
Deep learning
Performance evaluation
Switches
Programming
Dynamic range
Hardware
Language
English
ISSN
2156-017X
Abstract
We show, for the first time in hardware, that our modified SGD algorithm (TTv2), together with a co-optimized ReRAM material, achieves respectable accuracy (98%) on reduced MNIST classification (digits 0 and 1), approaching a floating-point (FP) baseline, in contrast to conventional stochastic gradient descent (SGD). To extrapolate these insights to larger DNN training workloads in simulation, we establish an analog switching test sequence and extract key device statistics from 6T1R ReRAM arrays (up to 2k devices) built on a 14nm CMOS baseline. With these statistics, we find that for larger DNN workloads, device and algorithm co-optimization shows dramatic improvements over standard SGD with baseline ReRAM. The remaining gap to the reference floating-point accuracy across the tested DNNs indicates that further material and algorithmic optimization is still needed. This work shows a pathway toward scalable in-memory deep learning training using ReRAM crossbar arrays.
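The abstract contrasts plain SGD with a modified, Tiki-Taka-style algorithm (TTv2) that tolerates non-ideal analog weight updates. The following is a minimal, purely illustrative NumPy sketch of such a two-matrix scheme (a fast auxiliary array A that accumulates noisy gradient pulses, a digital low-pass buffer H, and a slow weight array W updated by periodic thresholded transfer). All array names, constants, and the toy device model are assumptions for illustration; this is not the paper's TTv2 implementation or its measured ReRAM behavior.

```python
import numpy as np

# Illustrative sketch only: hypothetical layer sizes, hyperparameters, and a
# toy noisy/saturating device model stand in for real ReRAM characteristics.

rng = np.random.default_rng(0)

IN, OUT = 8, 4                 # hypothetical layer dimensions
W = np.zeros((OUT, IN))        # "slow" weight array (analog crossbar)
A = np.zeros((OUT, IN))        # "fast" auxiliary array accumulating gradients
H = np.zeros((OUT, IN))        # digital low-pass buffer between A and W

LR = 0.1                       # gradient step applied to A
TRANSFER_EVERY = 10            # how often A is read out and transferred
BETA = 0.9                     # low-pass filter factor for H
THRESH = 0.05                  # only well-resolved filtered values update W
W_MAX = 1.0                    # bounded conductance range of the toy device

def device_update(mat, delta):
    """Apply a noisy, saturating update that mimics non-ideal analog switching."""
    noise = 0.1 * np.abs(delta) * rng.standard_normal(delta.shape)
    saturation = 1.0 - np.abs(mat) / W_MAX   # steps shrink near the bounds
    return np.clip(mat + (delta + noise) * saturation, -W_MAX, W_MAX)

for step in range(1, 201):
    x = rng.standard_normal(IN)              # dummy input sample
    target = rng.standard_normal(OUT)        # dummy regression target
    y = (W + A) @ x                          # forward pass uses W plus A
    grad = np.outer(y - target, x)

    # 1) Gradients land on the fast auxiliary array A as device-level pulses.
    A = device_update(A, -LR * grad)

    # 2) Periodically read A, low-pass filter into H, and transfer large
    #    filtered entries onto the slow weight array W.
    if step % TRANSFER_EVERY == 0:
        H = BETA * H + (1.0 - BETA) * A
        mask = np.abs(H) > THRESH
        W = device_update(W, np.where(mask, H, 0.0))
        H[mask] = 0.0                        # reset transferred entries
        A *= 0.5                             # partially decay A after transfer
```

The point of the two-array split is that fast, noisy, asymmetric conductance updates are confined to A, while W only receives filtered, thresholded transfers, which is why schemes of this family degrade more gracefully than plain SGD on imperfect ReRAM; the specific filtering and transfer rules above are schematic only.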