Journal Article

A Progressive Stacking Pseudoinverse Learning Framework via Active Learning in Random Subspaces
Document Type
Periodical
Source
IEEE Transactions on Systems, Man, and Cybernetics: Systems, 54(5):2822-2832, May 2024
Subject
Signal Processing and Analysis
Robotics and Control Systems
Power, Energy and Industry Applications
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
General Topics for Engineers
Training
Stacking
Sun
Nonhomogeneous media
Mathematical models
Cybernetics
Adaptation models
Active learning (AL)
generalization performance
progressive stack
random subspace
stacking pseudoinverse learner (SP)
Language
ISSN
2168-2216
2168-2232
Abstract
The stacking pseudoinverse learner (SP) is an ensemble learning technique whose generalization performance strongly affects image-classification results. Currently, most SPs randomly initialize the input weight matrix in a random subspace without constraining the random initial values, which leads to unstable training and degraded generalization performance; in addition, training on all samples at once may make the classifier redundant, likewise harming the model's generalization. To address these issues efficiently, we propose a new framework called the progressive stacking pseudoinverse learner (PSP), which aims to enhance the generalization performance of SP via active learning (AL) in random subspaces. Specifically, on the one hand, a random-feature SP (RFSP) model is proposed that constrains the random subspace by initializing the input weight matrix from different specific random distributions, improving the generalization performance of SP. On the other hand, an AL progressive (ALP) model based on RFSP is proposed: by iteratively selecting informative samples to refine the classification results, the training-sample information is used effectively to progressively enhance the model's generalization performance. Experimental results on three public datasets show that the proposed PSP algorithm achieves better accuracy, precision, recall, and $F1$ score, and its results are competitive with state-of-the-art methods.
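The two ideas in the abstract — constraining the random subspace by drawing the input weights from a chosen distribution and solving the output weights in closed form with the Moore-Penrose pseudoinverse, then progressively adding samples selected by an active-learning criterion — can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the tanh activation, and the margin-based uncertainty measure are assumptions for the sketch.

```python
import numpy as np

def train_rfsp_layer(X, Y, n_hidden, dist="gaussian", scale=0.1, seed=None):
    """One random-feature pseudoinverse layer (illustrative RFSP sketch).

    Input weights are drawn from a *specified* distribution (the constraint
    on the random subspace); output weights are solved in closed form with
    the pseudoinverse, so there is no iterative weight training.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    if dist == "gaussian":
        W = rng.normal(0.0, scale, size=(n_features, n_hidden))
    elif dist == "uniform":
        W = rng.uniform(-scale, scale, size=(n_features, n_hidden))
    else:
        raise ValueError(f"unknown distribution: {dist}")
    H = np.tanh(X @ W)              # hidden representation in the random subspace
    beta = np.linalg.pinv(H) @ Y    # closed-form output weights
    return W, beta

def predict(X, W, beta):
    return np.tanh(X @ W) @ beta

def active_progressive_fit(X, Y, n_hidden, n_init, n_per_round, rounds, seed=None):
    """Progressive training loop (illustrative ALP sketch).

    Starts from a small labeled subset and, each round, moves the most
    uncertain pool samples (smallest top-two score margin) into the
    training set before refitting.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), n_init, replace=False)
    pool = np.setdiff1d(np.arange(len(X)), idx)
    for _ in range(rounds):
        W, beta = train_rfsp_layer(X[idx], Y[idx], n_hidden, seed=seed)
        if len(pool) == 0:
            break
        scores = np.sort(predict(X[pool], W, beta), axis=1)
        margin = scores[:, -1] - scores[:, -2]          # small margin = uncertain
        pick = pool[np.argsort(margin)[:n_per_round]]
        idx = np.concatenate([idx, pick])
        pool = np.setdiff1d(pool, pick)
    return W, beta
```

With enough hidden units relative to the number of training samples, the pseudoinverse solution fits the training targets almost exactly, which is why such layers train quickly and why the choice of random distribution (rather than iterative optimization) governs the quality of the subspace.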