Academic Article

Logarithmic Continual Learning
Document Type
Periodical
Source
IEEE Access, vol. 10, pp. 117001-117010, 2022
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Deep learning
Data models
Memory management
Computational modeling
Learning systems
Image reconstruction
Adaptation models
Continual learning
deep learning
incremental learning
rehearsal methods
Language
English
ISSN
2169-3536
Abstract
We introduce a neural network architecture that logarithmically reduces the number of self-rehearsal steps in the generative rehearsal of continually learned models. In continual learning (CL), training samples arrive in subsequent tasks, and the trained model can access only the current task. Contemporary CL methods employ generative models to replay previous samples and train them recursively on a combination of current and regenerated past data. This recurrence leads to superfluous computation, as the same past samples are regenerated after each task, and their reconstruction quality progressively degrades. In this work, we address these limitations and propose a new generative rehearsal architecture that requires, at most, a logarithmic number of retraining sessions for each sample. Our approach allocates past data across a set of generative models such that most of them do not require retraining after a given task. The experimental evaluation of our logarithmic continual learning approach shows that it outperforms state-of-the-art generative rehearsal methods.
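The abstract does not specify how past data are allocated across the set of generative models, so the sketch below is only an illustrative assumption rather than the paper's algorithm: it models the allocation as a binary-counter-style bucketing of tasks into generators, where merging two equal-sized buckets stands in for retraining one generator on their combined regenerated data. Under that assumption, each task's samples take part in at most about log2(T) merges over T tasks, which is the kind of logarithmic per-sample rehearsal bound the abstract describes. The function name simulate and the scheme itself are hypothetical.

```python
# Hypothetical illustration (not the paper's exact method): binary-counter-style
# allocation of tasks to generator "buckets". Merging two equal-sized buckets
# stands in for retraining one generator on the union of their (replayed) data.
from math import log2

def simulate(num_tasks):
    buckets = []      # each bucket: set of task ids held by one generator
    rehearsals = {}   # task id -> how many times its data were regenerated

    for t in range(num_tasks):
        rehearsals[t] = 0
        carry = {t}   # a fresh generator trained only on the current task
        # Merge equal-sized buckets, as when incrementing a binary counter.
        while buckets and len(buckets[-1]) == len(carry):
            merged = buckets.pop() | carry
            for task in merged:        # every sample in the merged generator
                rehearsals[task] += 1  # is replayed/retrained once
            carry = merged
        buckets.append(carry)

    worst = max(rehearsals.values())
    print(f"tasks={num_tasks}, max rehearsals per task={worst}, "
          f"log2 bound ~ {log2(num_tasks):.1f}")

simulate(64)  # max rehearsals per task stays near log2(64) = 6
```

In this toy model the worst-case count grows with log2 of the number of tasks, in contrast to naive generative rehearsal, where every past sample is regenerated after every task and is therefore rehearsed a linear number of times.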