Academic Article

A Continual Learning Algorithm Based on Orthogonal Gradient Descent Beyond Neural Tangent Kernel Regime
Document Type
Periodical
Source
IEEE Access, 11:85395-85404, 2023
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Kernel
Training
Neural networks
Predictive models
Computational modeling
Principal component analysis
Learning systems
Taylor series
Catastrophic forgetting
continual learning
neural tangent kernel
orthogonal gradient descent
orthogonal projection
Language
English
ISSN
2169-3536
Abstract
Continual learning aims to enable neural networks to learn new tasks without catastrophic forgetting of previously acquired knowledge. Orthogonal Gradient Descent algorithms have been proposed as an effective way to mitigate catastrophic forgetting. However, these algorithms often rely on the Neural Tangent Kernel regime, which imposes limitations on network architecture. In this study, we propose a novel method for constructing an orthonormal basis set for orthogonal projection by leveraging a Catastrophic Forgetting Loss. In contrast to the conventional gradient-based basis, which reflects a model update only within an infinitesimal neighborhood, our loss-based basis can account for the variation between two distinct points in the model parameter space, thus overcoming the limitations of the Neural Tangent Kernel regime. We provide both quantitative and qualitative analyses of the proposed method, discussing its advantages over conventional gradient-based baselines. Our approach is extensively evaluated on various model architectures and datasets, demonstrating a significant performance advantage, especially for deep or narrow networks where the Neural Tangent Kernel regime is violated. Furthermore, we offer a mathematical analysis based on higher-order Taylor series to provide theoretical justification. This study introduces a novel theoretical framework and a practical algorithm, potentially inspiring further research in areas such as continual learning, network debugging, and one-pass learning.
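The abstract's core mechanism, projecting each new gradient onto the orthogonal complement of a stored basis so that updates do not disturb earlier tasks, can be sketched as follows. This is a minimal, hypothetical illustration of the generic orthogonal-projection step shared by OGD-style methods, not the paper's loss-based basis construction; the vectors and Gram-Schmidt helper here are assumptions for illustration.

```python
import numpy as np

def project_orthogonal(grad, basis):
    """Remove from `grad` the components lying in the span of `basis`.

    `basis` holds orthonormal directions that would alter predictions on
    previous tasks; updating only in the orthogonal complement leaves
    those predictions (approximately) unchanged.
    """
    g = np.asarray(grad, dtype=float).copy()
    for b in basis:
        g -= np.dot(g, b) * b  # subtract the projection onto each basis vector
    return g

def extend_basis(basis, new_vec, eps=1e-10):
    """Gram-Schmidt step: append the normalized component of `new_vec`
    orthogonal to the current basis, skipping near-zero residuals."""
    v = project_orthogonal(new_vec, basis)
    norm = np.linalg.norm(v)
    if norm > eps:
        basis.append(v / norm)
    return basis

# Toy usage: with e1 stored in the basis, a gradient's e1-component is removed.
basis = extend_basis([], np.array([1.0, 0.0, 0.0]))
projected = project_orthogonal(np.array([1.0, 1.0, 0.0]), basis)
```

In OGD-style algorithms the basis is built from per-task gradient directions; the paper's contribution is to derive those directions from a Catastrophic Forgetting Loss instead, so the basis captures finite (not infinitesimal) parameter changes.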