The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
SIAM Journal on Optimization, vol. 7, no. 1 (1997): 26–33
Abstract
The Barzilai and Borwein gradient method for the solution of large scale unconstrained minimization problems is considered. This method requires few storage locations and very inexpensive computations. Furthermore, it does not guarantee descent in the objective function and no line search is required. Recently, the global convergence for …
Introduction
- In this paper the authors consider the Barzilai and Borwein gradient method for the large scale unconstrained minimization problem (1): $\min_{x \in \mathbb{R}^n} f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}$.
- Every iteration of the Barzilai and Borwein method requires only O(n) floating point operations and a gradient evaluation.
- The search direction is always the negative gradient direction, but the choice of steplength is not the classical choice of the steepest descent method.
- Barzilai and Borwein [1] observed that this new choice of steplength required less computational work and greatly sped up the convergence of the gradient method for quadratics, as illustrated in the sketch below.
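This summary does not reproduce the paper's equations (2) and (3), so the following is a minimal sketch of the plain (non-globalized) iteration, assuming the standard two-point Barzilai and Borwein steplength from [1]: with $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$, it takes $x_{k+1} = x_k - \lambda_k g_k$ with $\lambda_k = s_{k-1}^T s_{k-1} / s_{k-1}^T y_{k-1}$. The function name, fallback steplength, and stopping tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

def bb_gradient(grad, x0, max_iter=1000, tol=1e-8, alpha0=1.0):
    """Plain (non-globalized) Barzilai-Borwein gradient iteration.

    grad : callable returning the gradient of f at x (1-D numpy array)
    x0   : starting point

    Each iteration costs O(n) flops plus one gradient evaluation, and
    only a few n-vectors need to be stored.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    lam = alpha0                       # initial steplength (illustrative)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        x_new = x - lam * g            # always the negative gradient direction
        g_new = grad(x_new)
        s = x_new - x                  # s_k = x_{k+1} - x_k
        y = g_new - g                  # y_k = g_{k+1} - g_k
        sy = s @ y
        # Two-point BB steplength; fall back if the curvature estimate
        # s^T y is not positive.
        lam = (s @ s) / sy if sy > 0 else alpha0
        x, g = x_new, g_new
    return x

# Example: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_min = bb_gradient(lambda x: A @ x - b, np.zeros(3))
```

On a convex quadratic with positive definite Hessian, as in the example, the curvature estimate $s^T y = s^T A s$ is always positive, so the fallback steplength is never used.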
Highlights
- In this paper we consider the Barzilai and Borwein gradient method for the large scale unconstrained minimization problem (1): $\min_{x \in \mathbb{R}^n} f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}$.
- The Barzilai and Borwein method can be incorporated in a globalization strategy that preserves the good features of the method and only requires 3n storage locations; a sketch of such a globalized iteration appears after this list.
- Since the search direction is always the negative gradient direction, it is trivial to ensure that descent directions are generated at every iteration
- This is in sharp contrast to the conjugate gradient methods, for which a very accurate line search has to be performed at every iteration to generate descent directions
- Our numerical experiments seem to indicate that the global Barzilai and Borwein algorithm is competitive and sometimes preferable to recent and well-known implementations of the conjugate gradient method
- A possible explanation for the local behavior of the Global Barzilai and Borwein (GBB) method is that the Barzilai and Borwein method, given by (2) and (3), is globally convergent for convex quadratic functions.
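To make the globalization strategy concrete, here is a minimal sketch of a globalized BB iteration, assuming a nonmonotone Armijo-type line search over the last M function values in the spirit of Grippo, Lampariello, and Lucidi [10]. The parameter values (M, the sufficient-decrease constant gamma, the backtracking factor, and the steplength safeguards) are illustrative and not the authors' exact settings.

```python
import numpy as np

def gbb(f, grad, x0, M=10, gamma=1e-4, max_iter=1000, tol=1e-8,
        alpha_min=1e-10, alpha_max=1e10):
    """Sketch of a globalized BB iteration with a nonmonotone
    (GLL-type, [10]) Armijo condition over the last M function values.

    Besides scalars, only a handful of n-vectors (x, g, s, y) are kept,
    in line with the small-storage property stressed above.  M, gamma,
    the backtracking factor, and the steplength safeguards are
    illustrative choices, not the authors' exact settings.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    fvals = [f(x)]                     # recent function values
    alpha = 1.0                        # first trial steplength
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        lam = min(max(alpha, alpha_min), alpha_max)   # safeguarded BB step
        f_ref = max(fvals[-M:])        # nonmonotone reference value
        f_new = f(x - lam * g)
        # Backtrack until f(x - lam*g) <= f_ref - gamma*lam*||g||^2.
        while f_new > f_ref - gamma * lam * (g @ g):
            lam *= 0.5
            f_new = f(x - lam * g)
        x_new = x - lam * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0       # next BB steplength
        x, g = x_new, g_new
        fvals.append(f_new)
    return x
```

Because the acceptance test compares against the maximum of the last M function values rather than the current one, the BB steplength is usually accepted unchanged, which preserves the fast local behavior of the plain method while still enforcing a sufficient-decrease condition.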
Method
- For each test problem, Table 2 reports the number of iterations (IT), the gradient norm at the final iterate, and the CPU time for GBB, CONMIN, and PR+.
- CONMIN and PR+ outperform GBB in number of iterations, except for problems with a well-conditioned Hessian at the solution, in which case the number of iterations is quite similar.
- If the Hessian is singular at the solution, as in problem 11, CONMIN and PR+ clearly outperform GBB.
- In some of those cases, the difference in computing time is remarkable.
Conclusion
- The Barzilai and Borwein method can be incorporated in a globalization strategy that preserves the good features of the method and only requires 3n storage locations.
- Since the search direction is always the negative gradient direction, it is trivial to ensure that descent directions are generated at every iteration.
- The authors' numerical experiments seem to indicate that the global Barzilai and Borwein algorithm is competitive and sometimes preferable to recent and well-known implementations of the conjugate gradient method.
- A possible explanation for the local behavior of the GBB method is that the Barzilai and Borwein method, given by (2) and (3), is globally convergent for convex quadratic functions.
Tables
- Table 1: Test problems
- Table 2: Results for GBB, CONMIN, and PR+
- Table 3: Number of problems for which a method was a winner
Funding
- This research was supported in part by CDCH-UCV project 03.13.0034.93 and by NSF grant CHE9301120
References
- [12] Moré et al.
- [7] Garg and Tapia
- [2] Buckley and LeNir
- [1] J. Barzilai and J. M. Borwein, Two point step size gradient methods, IMA J. Numer. Anal., 8 (1988), pp. 141–148.
- [2] A. Buckley and A. LeNir, QN-like variable storage conjugate gradients, Math. Programming, 27 (1983), pp. 155–175.
- [3] J. E. Dennis, Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, NJ, 1983.
- [4] R. Fletcher, Practical Methods of Optimization, John Wiley, New York, 1987.
- [5] R. Fletcher, Low storage methods for unconstrained optimization, in Lectures in Applied Mathematics, Vol. 26, American Mathematical Society, Providence, RI, 1990, pp. 165–179.
- [6] A. Friedlander, J. M. Martinez, and M. Raydan, A new method for large-scale box constrained convex quadratic minimization problems, Optim. Methods and Software, 5 (1995), pp. 57–74.
- [7] N. K. Garg and R. A. Tapia, QDN: A Variable Storage Algorithm for Unconstrained Optimization, Technical report, Department of Mathematical Sciences, Rice University, Houston, TX, 1977.
- [8] J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), pp. 21–42.
- [9] W. Glunt, T. L. Hayden, and M. Raydan, Molecular conformations from distance matrices, J. Comput. Chem., 14 (1993), pp. 114–120.
- [10] L. Grippo, F. Lampariello, and S. Lucidi, A nonmonotone line search technique for Newton’s method, SIAM J. Numer. Anal., 23 (1986), pp. 707–716.
- [11] L. Grippo, F. Lampariello, and S. Lucidi, A class of nonmonotone stabilization methods in unconstrained optimization, Numer. Math., 59 (1991), pp. 779–805.
- [12] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, Testing unconstrained optimization software, ACM Trans. Math. Software, 7 (1981), pp. 17–41.
- [13] J. J. Moré and D. J. Thuente, On line search algorithms with guaranteed sufficient decrease, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, 1990, preprint MCS-P153-0590.
- [14] J. Nocedal, Theory of algorithms for unconstrained optimization, Acta Numerica, 1 (1992), pp. 199–242.
- [15] J. S. Pang, S. P. Han, and N. Rangaraj, Minimization of locally lipschitzian functions, SIAM J. Optim., 1 (1991), pp. 57–82.
- [16] E. R. Panier and A. L. Tits, Avoiding the Maratos effect by means of a nonmonotone line search I. General constrained problems, SIAM J. Numer. Anal., 28 (1991), pp. 1183–1195.
- [17] M. J. D. Powell, Nonconvex minimization calculations and the conjugate gradient method, in Lecture Notes in Mathematics, Vol. 1066, Springer-Verlag, Berlin, 1984, pp. 122–141.
- [18] M. Raydan, On the Barzilai and Borwein choice of steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), pp. 321–326.
- [19] D. F. Shanno and K. H. Phua, Remark on algorithm 500: Minimization of unconstrained multivariate functions, ACM Trans. Math. Software, 6 (1980), pp. 618–622.
- [20] Ph. L. Toint, Test Problems for Partially Separable Optimization and Results for the Routine PSPMIN, Report Nr 83/4, Department of Mathematics, Facultés Universitaires de Namur, Namur, Belgium, 1983.