Accelerated gradient learning algorithm for neural network weight updates
Abstract
This work proposes a decomposition of the square approximation algorithm for neural network weight updates. The suggested improvement yields an alternative method that converges in fewer iterations and is inherently parallel. The decomposition enables parallel execution convenient for implementation on a computer grid. The improvements are reflected in an accelerated learning rate, which may be essential for time-critical decision processes. The proposed solution is tested and verified on a multilayer perceptron neural network case study, varying a wide range of parameters such as the number of inputs/outputs, the length of input/output data, and the number of neurons and layers. Experimental results show time savings of up to 40% in multi-threaded execution.
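
As a rough illustration of the decomposition idea only, the following Python sketch splits a least-squares weight update for a single layer into independent blocks of output neurons, each solved on a separate thread. It is not the paper's square approximation algorithm; the partitioning scheme and all names (solve_block, parallel_weight_update) are assumptions made for this example.

    # Hypothetical sketch: column-wise decomposition of a least-squares
    # weight update for one layer, executed across worker threads. This
    # only illustrates the general idea that a decomposed update yields
    # independent subproblems suitable for parallel (grid) execution.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def solve_block(X, T_block):
        # Each worker solves an independent least-squares problem for
        # its block of output neurons: min ||X @ W_block - T_block||^2.
        W_block, *_ = np.linalg.lstsq(X, T_block, rcond=None)
        return W_block

    def parallel_weight_update(X, T, n_workers=4):
        # Split target columns (output neurons) into independent blocks;
        # their solutions do not interact, so they can run concurrently.
        blocks = np.array_split(T, n_workers, axis=1)
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            results = pool.map(lambda B: solve_block(X, B), blocks)
        return np.hstack(list(results))

    # Example: 100 samples, 8 layer inputs, 6 output neurons.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 8))
    T = rng.standard_normal((100, 6))
    W = parallel_weight_update(X, T)  # weight matrix of shape (8, 6)

Because the subproblems share only the read-only input matrix X, no synchronization is needed between workers, which is the property that makes this style of decomposition convenient for multi-threaded or grid execution.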