2025 (English). In: Computational Optimization and Applications, ISSN 0926-6003, E-ISSN 1573-2894. Article in journal (Refereed). Published.
Abstract [en]
We investigate quasi-Newton methods for minimizing a strongly convex quadratic function whose gradient evaluations are subject to errors. In particular, we focus on computing search directions for quasi-Newton methods that all give identical behavior in exact arithmetic, generating minimizers of Krylov subspaces of increasing dimensions and therefore terminating finitely. The BFGS quasi-Newton method may be seen as an ideal method in exact arithmetic and is empirically known to behave very well on a quadratic problem subject to small errors. We investigate large-error scenarios, in which the expected behavior is less clear. We consider memoryless methods that are less expensive than the BFGS method, in that they generate low-rank quasi-Newton matrices that differ from the identity by a symmetric matrix of rank two. In addition, we propose a more advanced model for generating the search directions, based on solving a chance-constrained optimization problem. Our numerical results indicate that for large errors, such a low-rank memoryless quasi-Newton method may perform better than a BFGS method. The results also indicate a potential advantage of including the chance-constrained model in the memoryless quasi-Newton method.
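To make the rank-two structure concrete, the following is a minimal sketch, not taken from the paper: a memoryless BFGS-style direction on a strongly convex quadratic with additive gradient noise. All names, the noise level, and the step-size rule are illustrative assumptions; the matrix H below is the identity plus a symmetric rank-two correction built from the most recent step s and gradient difference y, and it is applied to the gradient in O(n) work without ever being formed explicitly.

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Return p = -H g, where
    H = (I - s y^T/(y^T s)) (I - y s^T/(y^T s)) + s s^T/(y^T s)
    is the memoryless BFGS matrix: the identity plus a symmetric
    rank-two correction built from the latest pair (s, y)."""
    ys = y @ s
    if ys <= 1e-12 * np.linalg.norm(y) * np.linalg.norm(s):
        return -g  # curvature condition spoiled by noise: fall back to steepest descent
    sg, yg = s @ g, y @ g
    Hg = g - (sg / ys) * y - (yg / ys) * s + ((y @ y) / ys + 1.0) * (sg / ys) * s
    return -Hg

# Toy run (assumed setup): f(x) = 0.5 x^T Q x + b^T x with noisy gradients.
rng = np.random.default_rng(0)
n, sigma = 50, 1.0                       # dimension and noise level, both assumed
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)              # symmetric positive definite Hessian
b = rng.standard_normal(n)
grad = lambda x: Q @ x + b + sigma * rng.standard_normal(n)  # gradient with errors

x = np.zeros(n)
g = grad(x)
s = y = None
for k in range(50):
    p = -g if s is None else memoryless_bfgs_direction(g, s, y)
    alpha = -(g @ p) / (p @ Q @ p)       # exact line search for the quadratic
    s = alpha * p
    x = x + s
    g_new = grad(x)
    y, g = g_new - g, g_new
```

Beyond the gradient evaluation, each iteration costs O(n), versus the O(n^2) matrix update of full BFGS; which variant is preferable under large errors is exactly the question the paper studies numerically.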
Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Quadratic programming, Quasi-Newton method, Stochastic quasi-Newton method, Chance-constrained model
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-360747 (URN)
10.1007/s10589-025-00661-4 (DOI)
001426490300001 (ISI)
2-s2.0-105001073066 (Scopus ID)
Available from: 2025-03-03. Created: 2025-03-03. Last updated: 2025-05-27. Bibliographically approved.