Academic Paper
Phase Change Memory-based Hardware Accelerators for Deep Neural Networks (invited)
Document Type
Conference
Author
Burr, Geoffrey W.; Narayanan, P.; Ambrogio, S.; Okazaki, A.; Tsai, H.; Hosokawa, K.; Mackin, C.; Nomura, A.; Yasuda, T.; Demarest, J.; Brew, K. W.; Chan, V.; Choi, S.; Gordon, T.; Levin, T. M.; Friz, A.; Ishii, M.; Kohda, Y.; Chen, A.; Fasoli, A.; Luquin, J.; Saulnier, N.; Teehan, S.; Ahsan, I.; Narayanan, V.
Source
2023 IEEE Symposium on VLSI Technology and Circuits, pp. 1-2, Jun. 2023
ISSN
2158-9682
Abstract
Analog non-volatile memory (NVM)-based accelerators for deep neural networks implement multiply-accumulate (MAC) operations, in parallel and on large arrays of resistive devices, by using Ohm's law and Kirchhoff's current law. By completely avoiding weight motion, such fully weight-stationary systems can offer a unique combination of low latency, high throughput, and high energy efficiency (e.g., high TeraOPS/W). Yet since most Deep Neural Networks (DNNs) require only modest (e.g., 4-bit) precision in synaptic operations, such systems can still deliver "software-equivalent" accuracy on a wide range of models. We describe a 14-nm inference chip, comprising multiple 512×512 arrays of Phase Change Memory (PCM) devices, which can deliver software-equivalent inference accuracy on MNIST handwritten-digit recognition and recurrent LSTM benchmarks, and we discuss PCM challenges such as conductance drift and noise.
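The in-memory MAC described in the abstract can be sketched numerically: weights sit as conductances G[i, j] in a crossbar, inputs arrive as voltages V[i], Ohm's law gives per-device currents V[i]·G[i, j], and Kirchhoff's current law sums them along each column. The sketch below also models conductance drift with a simple power-law decay; the array size matches the paper's 512×512 tiles, but the conductance values, drift exponent, and time scales are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 512, 512                          # PCM array size from the paper
G = rng.uniform(0.0, 1.0, size=(rows, cols))   # stored conductances (arbitrary units)
V = rng.uniform(0.0, 1.0, size=rows)           # input activations applied as voltages

# Column currents: I[j] = sum_i V[i] * G[i, j].
# Ohm's law gives each device's current; Kirchhoff's current law sums
# a whole column in parallel, so every MAC happens in one analog step.
I = V @ G

# Toy model of conductance drift (assumed form, common for PCM):
#   G(t) = G(t0) * (t / t0) ** (-nu), with drift exponent nu.
nu = 0.05                                      # assumed drift exponent
t0, t = 1.0, 1e4                               # illustrative time points (s)
G_drifted = G * (t / t0) ** (-nu)
I_drifted = V @ G_drifted
```

In this idealized model every device drifts by the same factor, so all column currents shrink uniformly and a single gain correction restores the MAC result; in real PCM the exponent varies per device and noise is added, which is why drift and noise are highlighted as challenges.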