Journal Article

A modified higher-order feed forward neural network with smoothing regularization
Document Type
TEXT
Source
Neural Network World: International Journal on Neural and Mass-Parallel Computing and Information Systems | 2017 | Volume: 27 | Number: 6
Pages
577-592
Language
English
Abstract
This paper proposes an offline gradient method with smoothing L1/2 regularization for learning and pruning of pi-sigma neural networks (PSNNs). The original L1/2 regularization term is not smooth at the origin, since it involves the absolute value function; this causes oscillation in the computation and difficulty in the convergence analysis. In this paper, we propose a smooth function to replace and approximate the absolute value function, which yields a smoothing L1/2 regularization method for PSNNs. Numerical simulations show that the smoothing L1/2 regularization method eliminates the oscillation in the computation and achieves better learning accuracy. We also prove a convergence theorem for the proposed learning method.
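The abstract does not reproduce the paper's exact smoothing function, so the sketch below is only illustrative: it assumes the common surrogate sqrt(w^2 + eps) for |w|, a single-output pi-sigma network with one product unit, a squared-error loss, and hypothetical hyperparameters (EPS, lam, eta). A minimal NumPy version of offline (full-batch) gradient training with the smoothed L1/2 penalty might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 1e-4  # smoothing parameter (hypothetical value)

def smooth_abs(x):
    """Smooth surrogate for |x|: sqrt(x^2 + EPS), differentiable at 0."""
    return np.sqrt(x * x + EPS)

def l12_penalty(W, lam):
    """Smoothed L1/2 regularizer: lam * sum of smooth_abs(w)^(1/2)."""
    return lam * np.sum(smooth_abs(W) ** 0.5)

def l12_penalty_grad(W, lam):
    """Gradient of the smoothed penalty; well defined everywhere
    because smooth_abs(W) >= sqrt(EPS) > 0."""
    f = smooth_abs(W)
    return lam * 0.5 * f ** (-0.5) * (W / f)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def psnn_forward(W, X):
    """Pi-sigma forward pass: linear summing units feed one product
    unit, followed by a sigmoid output."""
    H = X @ W.T            # (n_samples, n_units) summing layer
    P = H.prod(axis=1)     # product unit
    return sigmoid(P), H, P

# Toy offline (full-batch) gradient training with the smoothed penalty.
n, d, units = 40, 3, 2
X = rng.normal(size=(n, d))
t = (X[:, 0] * X[:, 1] > 0).astype(float)   # toy targets
W = 0.1 * rng.normal(size=(units, d))
lam, eta = 1e-3, 0.5                        # hypothetical hyperparameters

for epoch in range(1000):
    y, H, P = psnn_forward(W, X)
    gP = (y - t) * y * (1.0 - y)            # dE/dP for squared error
    grad = np.empty_like(W)
    for j in range(units):
        # product of the other summing units, per sample
        others = np.delete(H, j, axis=1).prod(axis=1)
        grad[j] = (gP * others) @ X / n
    W -= eta * (grad + l12_penalty_grad(W, lam))
```

Because smooth_abs is bounded below by sqrt(EPS), the penalty gradient stays finite near zero weights, which is plausibly the property the method exploits to remove the oscillation of the original non-smooth term; weights driven close to zero can then be thresholded for pruning.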