Global Convergence of a Memory Gradient Method for Unconstrained Optimization
Abstract
Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The first memory gradient methods were proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method that generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution when the step sizes satisfy the Wolfe conditions within a line search strategy. Our numerical results show that the proposed method is efficient on standard test problems, provided the parameter included in the method is chosen appropriately.
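To make the framework described above concrete, the following is a minimal sketch of a generic memory gradient iteration with a Wolfe line search. The abstract does not give the authors' direction update formula, so the particular memory term (a scaled Fletcher-Reeves-type coefficient with a steepest-descent fallback) and the simple bracketing line search below are illustrative assumptions, not the method of this paper.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Find a step size satisfying the (weak) Wolfe conditions by bracketing."""
    lo, hi = 0.0, np.inf
    t = 1.0
    fx, g0 = f(x), grad(x)
    dg0 = g0 @ d                                   # directional derivative at x
    for _ in range(max_iter):
        if f(x + t * d) > fx + c1 * t * dg0:       # sufficient-decrease condition fails: shrink
            hi = t
        elif grad(x + t * d) @ d < c2 * dg0:       # curvature condition fails: grow
            lo = t
        else:
            return t
        t = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
    return t

def memory_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic memory gradient iteration (illustrative, not the paper's exact update)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                          # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        t = wolfe_line_search(f, grad, x, d)
        x = x + t * d
        g_new = grad(x)
        # Illustrative memory term: combine the new negative gradient with the
        # previous direction, keeping d a descent direction (g_new @ d < 0).
        beta = 0.5 * (g_new @ g_new) / max(g @ g, 1e-16)
        d_new = -g_new + beta * d
        d = d_new if g_new @ d_new < 0 else -g_new  # fall back to steepest descent
        g = g_new
    return x
```

The descent check and steepest-descent fallback mirror the property stated in the abstract, namely that the search direction is a descent direction at every iteration; the Wolfe conditions are the ingredient used in the global convergence argument.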
References
L. Adams and J.L. Nazareth (eds.), Linear and Nonlinear Conjugate Gradient-Related Methods, SIAM, 1996.
M. Al-Baali, "Descent property and global convergence of the Fletcher-Reeves method with inexact line search," IMA Journal of Numerical Analysis, vol. 5, pp. 121–124, 1985.
E.E. Cragg and A.V. Levy, “Study on a supermemory gradient method for the minimization of functions,” Journal of Optimization Theory and Applications, vol. 4, pp. 191–205, 1969.
J.C. Gilbert and J. Nocedal, “Global convergence properties of conjugate gradient methods for optimization,” SIAM Journal on Optimization, vol. 2, pp. 21–42, 1992.
L. Grippo, F. Lampariello, and S. Lucidi, “A truncated Newton method with nonmonotone line search for unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 60, pp. 401–419, 1989.
A. Miele and J.W. Cantrell, “Study on a memory gradient method for the minimization of functions,” Journal of Optimization Theory and Applications, vol. 3, pp. 459–470, 1969.
J.J. Moré, B.S. Garbow, and K.E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, pp. 17–41, 1981.
J.J. Moré and D.J. Thuente, "Line search algorithms with guaranteed sufficient decrease," ACM Transactions on Mathematical Software, vol. 20, pp. 286–307, 1994.
L. Nazareth, “A conjugate direction algorithm without line searches,” Journal of Optimization Theory and Applications, vol. 23, pp. 373–387, 1977.
J. Nocedal, "Updating quasi-Newton matrices with limited storage," Mathematics of Computation, vol. 35, pp. 773–782, 1980.
J. Nocedal, http://www.ece.northwestern.edu/nocedal/software.html.
J. Nocedal and S.J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer Verlag, New York, 1999.
J.Z. Zhang, N.Y. Deng, and L.H. Chen, “New quasi-Newton equation and related methods for unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 102, pp. 147–167, 1999.
J. Zhang and C. Xu, “Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations,” Journal of Computational and Applied Mathematics, vol. 137, pp. 269–278, 2001.
G. Zoutendijk, "Nonlinear programming, computational methods," in Integer and Nonlinear Programming, J. Abadie (ed.), North-Holland, Amsterdam, pp. 37–86, 1970.