Academic Journal

Evolving stochastic configure network: A more compact model with interpretability.
Document Type
Article
Source
Information Sciences. Aug 2023, Vol. 639.
Subject
*MACHINE learning
*LOGICAL prediction
*EVOLUTIONARY algorithms
ISSN
0020-0255
Abstract
Stochastic Configure Networks (SCNs) are an incremental variant of randomly weighted neural networks whose key feature is the inequality constraint imposed when adding hidden-layer nodes. Recent studies reveal two weaknesses of SCNs: redundancy among the added hidden nodes, and a lack of interpretability in the designed constraints. To overcome these weaknesses, this paper proposes a new model, Evolving SCN (E-SCN), which improves the interpretability of the constraint design from an evolutionary viewpoint via a sampling mechanism, and promotes model compactness by optimizing the random weights within the space of constraint parameters. Surprisingly, although an evolving method is used, both effectiveness and efficiency improve significantly in running time and prediction accuracy compared with existing versions of SCNs. This work makes a first attempt to enhance the interpretability of incrementally adding nodes while reducing hidden-node redundancy in SCNs, offering new insights into model compactness from the view of Occam's razor and, further, into the nature of incremental learning.
• An accessible interpretation of the designed constraints in SCN is summarized, extending the fundamental theory.
• A new evolution mechanism, E-SCN, is proposed to optimize random weights, addressing hidden-node redundancy in SCN.
• Experiments on real-world datasets reveal E-SCN's improved compactness, training efficiency, and generalization ability. [ABSTRACT FROM AUTHOR]
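To make the constraint mechanism described in the abstract concrete, the sketch below shows a minimal SCN-style incremental build-up in Python. This is not the paper's E-SCN: the function name, the candidate-sampling loop, the tanh activation, and the slack sequence `mu` are illustrative assumptions based on the standard SCN formulation (a node is accepted only if a random candidate satisfies the supervisory inequality, and output weights are refit by global least squares).

```python
import numpy as np

def scn_fit(X, y, max_nodes=25, candidates=50, r=0.99, tol=1e-2, seed=0):
    """Simplified SCN-style incremental learning (illustrative sketch):
    hidden nodes are added one at a time; each must satisfy the
    supervisory inequality constraint xi_L >= 0 before acceptance."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))           # hidden-layer output matrix
    beta = np.zeros(0)             # output weights (refit after each node)
    e = y.copy()                   # current training residual
    for L in range(1, max_nodes + 1):
        mu = (1 - r) / (L + 1)     # slack sequence mu_L -> 0
        best_xi, best_h = -np.inf, None
        for _ in range(candidates):
            # sample random input weights/bias for a candidate node
            w = rng.uniform(-1, 1, d)
            b = rng.uniform(-1, 1)
            h = np.tanh(X @ w + b)
            # supervisory constraint (single-output form)
            xi = (e @ h) ** 2 / (h @ h) - (1 - r - mu) * (e @ e)
            if xi > best_xi:
                best_xi, best_h = xi, h
        if best_xi < 0:            # no candidate met the constraint
            break
        H = np.column_stack([H, best_h])
        # global least-squares refit of all output weights
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        e = y - H @ beta
        if np.linalg.norm(e) < tol:
            break
    return H, beta

# Toy regression: residual norm shrinks as nodes are added.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]
H, beta = scn_fit(X, y)
print(H.shape[1], np.linalg.norm(y - H @ beta))
```

The paper's contribution, per the abstract, is to replace the blind resampling of candidates with an evolutionary optimization of the random weights within the constraint-parameter space, yielding fewer (less redundant) hidden nodes than this baseline procedure.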