Academic Paper

Synaptic metaplasticity with multi-level memristive devices
Document Type
Conference
Source
2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), pp. 1-5, Jun. 2023
Subject
Bioengineering
Components, Circuits, Devices and Systems
Computing and Processing
Signal Processing and Analysis
Training
Neural networks
Software algorithms
Memristors
Learning (artificial intelligence)
Computer architecture
Hardware
Memory
Metaplasticity
Quantized Neural Networks (QNNs)
In-Memory-Computing
Memristor
On-Chip learning
Language
English
ISSN
2834-9857
Abstract
Deep learning has made remarkable progress in various tasks, surpassing human performance in some cases. However, one drawback of neural networks is catastrophic forgetting, where a network trained on one task forgets its solution when learning a new one. To address this issue, recent works have proposed solutions based on Binarized Neural Networks (BNNs) incorporating metaplasticity. In this work, we extend this solution to Quantized Neural Networks (QNNs) and present a memristor-based hardware solution for implementing metaplasticity during both inference and training. We propose a hardware architecture that integrates quantized weights, stored in memristor devices programmed in an analog multi-level fashion, with a digital processing unit for high-precision metaplastic storage. We validated our approach using a combined software framework and a memristor-based crossbar array for in-memory computing fabricated in 130 nm CMOS technology. Our experimental results show that a two-layer perceptron achieves 97% and 86% accuracy on consecutive training of MNIST and Fashion-MNIST, equal to the software baseline. This result demonstrates the immunity of the proposed solution to catastrophic forgetting and its resilience to analog device imperfections. Moreover, our architecture is compatible with the limited endurance of memristors and reduces the memory footprint by 15× compared to the binarized neural network case.
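
The metaplastic update rule sketched in the abstract can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: it assumes the metaplastic attenuation function f_meta = 1 − tanh²(m·|W_h|) from the earlier binarized-network formulation, applied here to high-precision hidden weights that are then quantized to the discrete conductance levels of a multi-level memristor. All names and parameters (metaplastic_update, m, n_levels, w_max) are illustrative.

```python
import numpy as np

def metaplastic_update(w_hidden, grad, m=1.3, lr=0.01):
    """One metaplastic SGD step on the high-precision hidden weights.

    Updates that push a hidden weight toward zero (i.e. toward flipping
    its quantized value) are attenuated by f_meta = 1 - tanh^2(m * |w_h|),
    so strongly consolidated weights become harder to overwrite.
    """
    delta = -lr * grad
    # Attenuate only updates that move a weight toward the origin.
    toward_zero = np.sign(delta) != np.sign(w_hidden)
    f_meta = 1.0 - np.tanh(m * np.abs(w_hidden)) ** 2
    delta = np.where(toward_zero, delta * f_meta, delta)
    return w_hidden + delta

def quantize(w_hidden, n_levels=8, w_max=1.0):
    """Map hidden weights to n_levels discrete values, as they would be
    programmed into multi-level memristor conductances."""
    step = 2.0 * w_max / (n_levels - 1)
    return np.clip(np.round(w_hidden / step) * step, -w_max, w_max)

# Toy usage: a large hidden weight resists the update pushing it to zero,
# while a small one changes freely.
w = metaplastic_update(np.array([0.9, -0.1]), np.array([0.8, -0.8]))
print(quantize(w))
```

Under these assumptions, the split in the paper's architecture falls out naturally: quantize() produces the values held in the analog memristor array used for inference, while metaplastic_update() operates on the high-precision hidden state kept in the digital processing unit during training.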