Academic Paper

Phase Change Memory-based Hardware Accelerators for Deep Neural Networks (invited)
Document Type
Conference
Source
2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), pp. 1-2, Jun. 2023
Subject
Components, Circuits, Devices and Systems
Phase change materials
Phased arrays
Handwriting recognition
Nonvolatile memory
Artificial neural networks
Very large scale integration
Throughput
Phase-change memory
Non-volatile memory
inference acceleration
analog multiply-accumulate for DNNs
analog AI
Language
English
ISSN
2158-9682
Abstract
Analog non-volatile memory (NVM)-based accelerators for deep neural networks implement multiply-accumulate (MAC) operations in parallel, on large arrays of resistive devices, using Ohm's law and Kirchhoff's current law. By completely avoiding weight motion, such fully weight-stationary systems can offer a unique combination of low latency, high throughput, and high energy efficiency (e.g., high TeraOPS/W). Yet since most Deep Neural Networks (DNNs) require only modest (e.g., 4-bit) precision in synaptic operations, such systems can still deliver "software-equivalent" accuracies on a wide range of models. We describe a 14-nm inference chip, comprising multiple 512×512 arrays of Phase Change Memory (PCM) devices, which delivers software-equivalent inference accuracy on MNIST handwritten-digit recognition and recurrent LSTM benchmarks, and we discuss PCM challenges such as conductance drift and noise.
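To make the analog MAC mechanism concrete, the following NumPy sketch models one 512×512 PCM tile as the abstract describes it: Ohm's law gives each device current I = G·V, and Kirchhoff's current law sums the currents on each bit line, so every column output is a dot product of the input voltages with that column's conductances. The conductance range, differential weight mapping, drift exponent, and read-noise level are illustrative assumptions and not values reported for the chip.

```python
import numpy as np

# Array dimensions match the 512x512 PCM tiles mentioned in the abstract.
ROWS, COLS = 512, 512
rng = np.random.default_rng(0)

# Weights mapped to device conductances (siemens). The mapping scheme and
# maximum conductance below are illustrative assumptions.
G_MAX = 25e-6                                      # assumed max device conductance (25 uS)
weights = rng.uniform(-1, 1, size=(ROWS, COLS))
G_pos = np.clip(weights, 0, None) * G_MAX          # positive weights on one device
G_neg = np.clip(-weights, 0, None) * G_MAX         # negative weights on a paired device

def pcm_mac(v_in, t=1.0, t0=1.0, nu=0.05, read_noise=0.01):
    """One analog multiply-accumulate pass over the array.

    Ohm's law: each device contributes current I = G * V.
    Kirchhoff's current law: currents sum along each bit line, yielding a
    dot product per column. `nu` (drift exponent) and `read_noise` are
    assumed values for illustration only.
    """
    drift = (t / t0) ** (-nu)                      # conductance drift: G(t) = G(t0) * (t/t0)^-nu
    Gp = G_pos * drift * (1 + read_noise * rng.standard_normal(G_pos.shape))
    Gn = G_neg * drift * (1 + read_noise * rng.standard_normal(G_neg.shape))
    i_out = v_in @ Gp - v_in @ Gn                  # differential column currents
    return i_out / G_MAX                           # rescale back to weight units

v = rng.uniform(0, 0.2, size=ROWS)                 # read voltages on the word lines
ideal = v @ weights                                # digital reference MAC
analog = pcm_mac(v, t=3600.0)                      # after one hour of simulated drift
print("mean |error|:", np.abs(analog - ideal).mean())
```

Because the weights stay resident in the array and only input voltages and output currents move, the whole matrix-vector product completes in one read step, which is the weight-stationary property the abstract credits for the low latency and high TeraOPS/W.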