Academic Paper

Learning Algorithms in Quaternion Neural Networks Using GHR Calculus
Document Type
TEXT
Source
Neural Network World: International Journal on Neural and Mass-Parallel Computing and Information Systems | 2017 | Volume: 27 | Number: 3
Subject
quaternion neural networks
non-analytic quaternion activation functions
GHR calculus
learning algorithms
convergence
Language
English
Abstract
A key difficulty for quaternion neural networks (QNNs) is that quaternion nonlinear activation functions are usually non-analytic, so standard quaternion derivatives cannot be used. In this paper, we derive the quaternion gradient descent, approximate quaternion Gauss-Newton, and quaternion Levenberg-Marquardt algorithms for feedforward QNNs based on the GHR calculus, which applies to both analytic and non-analytic quaternion functions. In the derivation of the quaternion Gauss-Newton algorithm, we also solve a widely linear quaternion least squares problem, which is more general than the usual least squares problem. A rigorous analysis of the convergence of the proposed algorithms is provided. Simulations on the prediction of benchmark signals support the approach.
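To make the setting concrete, the sketch below illustrates quaternion arithmetic and a single gradient-descent step on a real-valued (hence non-analytic) quaternion cost. It is a minimal illustration only, not the paper's method: the gradient is estimated by finite differences over the four real components of the weight, whereas the paper derives closed-form updates via the GHR calculus; all variable names and values are assumptions for the example.

```python
# Illustrative sketch (not the paper's GHR-based derivation): quaternion
# arithmetic and one gradient-descent step on the scalar cost
# E(w) = |d - w ⊗ x|^2, where ⊗ is the Hamilton product.
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions p = (a, b, c, d) and q = (e, f, g, h)."""
    a, b, c, d = p
    e, f, g, h = q
    return np.array([
        a*e - b*f - c*g - d*h,
        a*f + b*e + c*h - d*g,
        a*g - b*h + c*e + d*f,
        a*h + b*g - c*f + d*e,
    ])

def cost(w, x, d):
    """Squared error E(w) = |d - w ⊗ x|^2 (real-valued, non-analytic in w)."""
    e = d - qmul(w, x)
    return float(np.dot(e, e))

def numeric_grad(w, x, d, eps=1e-6):
    """Finite-difference gradient of E over the four real components of w
    (stands in for the closed-form GHR-calculus gradient used in the paper)."""
    g = np.zeros(4)
    for i in range(4):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        g[i] = (cost(wp, x, d) - cost(wm, x, d)) / (2 * eps)
    return g

# One gradient-descent step with step size mu (all values are arbitrary examples).
x = np.array([0.5, -0.2, 0.1, 0.3])   # input quaternion
d = np.array([1.0,  0.0, 0.0, 0.0])   # target quaternion
w = np.array([0.1,  0.1, 0.1, 0.1])   # weight quaternion
mu = 0.1
w = w - mu * numeric_grad(w, x, d)
print("cost after one step:", cost(w, x, d))
```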