On combining feasibility, descent and superlinear convergence in inequality constrained optimization
Abstract
Extending quasi-Newton techniques from unconstrained to constrained optimization via Sequential Quadratic Programming (SQP) presents several difficulties. Among these are the possible inconsistency, away from the solution, of the first-order approximations to the constraints, which renders the quadratic programs infeasible, and the task of selecting a suitable merit function to induce global convergence. In the case of inequality constrained optimization, both of these difficulties disappear if the algorithm is forced to generate iterates that all satisfy the constraints and yield monotonically decreasing objective function values. (Feasibility of the successive iterates is in fact required in many contexts, such as real-time applications or when the objective function is not well defined outside the feasible set.) It has recently been shown that this can be achieved while preserving local two-step superlinear convergence. In this note, the essential ingredients for an SQP-based method exhibiting the desired properties are highlighted. Correspondingly, a class of such algorithms is described and analyzed. Tests performed with an efficient implementation are discussed.
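The two properties emphasized above — every iterate feasible, objective values monotonically decreasing — can be illustrated with a deliberately simplified sketch. The code below is not the paper's SQP algorithm (it uses a plain negative-gradient direction rather than a QP subproblem with quasi-Newton Hessian updates, so it has no superlinear rate); it only demonstrates the feasible-descent acceptance rule on a hypothetical toy problem: a trial step is accepted only if it both satisfies the inequality constraint and strictly decreases the objective.

```python
import math

# Toy problem (illustrative only, not from the paper):
#   minimize   f(x, y) = (x - 2)^2 + (y - 2)^2
#   subject to g(x, y) = x^2 + y^2 - 1 <= 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
grad_f = lambda x: (2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0))
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0

def feasible_descent(x, iters=100):
    """Backtrack along -grad f, accepting a step only if it stays
    feasible (g <= 0) AND strictly decreases f.  Starting from a
    feasible point, every iterate is therefore feasible and the
    objective values form a monotonically decreasing sequence."""
    for _ in range(iters):
        d = (-grad_f(x)[0], -grad_f(x)[1])   # descent direction
        t, accepted = 1.0, None
        for _ in range(60):                  # halve t until both tests pass
            trial = (x[0] + t * d[0], x[1] + t * d[1])
            if g(trial) <= 0.0 and f(trial) < f(x):
                accepted = trial
                break
            t *= 0.5
        if accepted is None:                 # no acceptable step: stop
            break
        x = accepted
    return x

x_star = feasible_descent((0.0, 0.0))        # (0, 0) is strictly feasible
```

On this problem the minimizer is the boundary point (1/√2, 1/√2), the point of the unit disk closest to (2, 2); the sketch approaches it while never leaving the disk. A full method in the spirit of the paper would instead obtain the direction from a quadratic programming subproblem, bend or tilt it to retain feasibility, and use quasi-Newton updates to recover two-step superlinear convergence.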