Parallel and separable recursive Levenberg-Marquardt training algorithm

V.S. Asirvadam1, S.F. McLoone1, G.W. Irwin1
1Intelligent Systems and Control Research Group, School of Electrical and Electronic Engineering, Queen's University Belfast, UK

Abstract

A novel decomposed recursive Levenberg-Marquardt (RLM) algorithm is derived for the training of feedforward neural networks. By neglecting interneuron weight correlations, the recently proposed RLM training algorithm can be decomposed at the neuron level, enabling weights to be updated in an efficient, parallel manner. A separable least squares implementation of decomposed RLM is also introduced. Experimental results for two nonlinear time series problems demonstrate the superiority of the new training algorithms.
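The neuron-level decomposition can be illustrated with a minimal sketch: if cross-neuron weight correlations are neglected, the covariance (inverse Hessian approximation) used by recursive Levenberg-Marquardt becomes block diagonal, with one block per neuron, so each block can be updated independently and in parallel. The code below is an assumed illustration only; the function and parameter names (decomposed_rlm_step, lam, mu) and the exact forgetting/damping scheme are not taken from the paper.

```python
import numpy as np

def decomposed_rlm_step(w_blocks, P_blocks, psi_blocks, error,
                        lam=0.98, mu=1e-3):
    """One recursive LM-style step with a per-neuron block-diagonal covariance.

    w_blocks   : list of weight vectors, one block per neuron
    P_blocks   : list of covariance matrices, one block per neuron
    psi_blocks : list of sensitivity vectors d(y_hat)/d(w_i) for each block
    error      : scalar output error (target - prediction)
    lam        : forgetting factor (assumed)
    mu         : Levenberg-Marquardt damping term (assumed)
    """
    for i, (w, P, psi) in enumerate(zip(w_blocks, P_blocks, psi_blocks)):
        # The gain is computed from this neuron's block only; interneuron
        # correlations are ignored, so the blocks are independent and
        # could be updated in parallel across neurons.
        denom = lam + psi @ P @ psi + mu
        k = (P @ psi) / denom
        w_blocks[i] = w + k * error
        P_blocks[i] = (P - np.outer(k, psi @ P)) / lam
    return w_blocks, P_blocks
```

Because each block involves only the weights feeding one neuron, the per-step cost and memory scale with the largest block rather than with the full weight vector, which is the source of the efficiency claimed for the decomposed scheme.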

Keywords

Neurons, Cost function, Neural networks, Least squares methods, Convergence, Partitioning algorithms, Feedforward neural networks, Training data, Backpropagation algorithms, Resonance light scattering
