We investigate quasi-Newton methods for minimizing a strongly convex quadratic function when the evaluation of the gradient is subject to errors. In particular, we focus on computing search directions for quasi-Newton methods that all give identical behavior in exact arithmetic: they generate minimizers over Krylov subspaces of increasing dimension and therefore terminate finitely. The BFGS quasi-Newton method may be seen as an ideal method in exact arithmetic and is empirically known to behave very well on a quadratic problem subject to small errors. We investigate large-error scenarios, in which the expected behavior is less clear. We consider memoryless methods that are less expensive than the BFGS method, in that they generate low-rank quasi-Newton matrices that differ from the identity by a symmetric matrix of rank two. In addition, a more advanced model for generating the search directions is proposed, based on solving a chance-constrained optimization problem. Our numerical results indicate that for large errors, such a low-rank memoryless quasi-Newton method may perform better than the BFGS method. Moreover, the results indicate a potential advantage of incorporating the chance-constrained model into the memoryless quasi-Newton method.
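To make the structure of such a memoryless quasi-Newton matrix concrete, the sketch below applies the classical memoryless BFGS update, in which the inverse-Hessian approximation is the identity plus a symmetric rank-two correction built from the most recent step and gradient change, to a quadratic with noisy gradients. This is a minimal illustration under stated assumptions, not the paper's exact method, and it omits the chance-constrained model; the test problem, the noise level sigma, and the curvature safeguard are illustrative choices.

import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Return p = -H g, where H is the memoryless BFGS matrix:
    the identity plus a symmetric rank-two correction built from
    the most recent step s and gradient change y."""
    sy = s @ y
    # Safeguard (illustrative): fall back to steepest descent if the
    # noisy curvature estimate s'y is not safely positive.
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return -g
    Hg = (g
          - (s * (y @ g) + y * (s @ g)) / sy
          + (1.0 + (y @ y) / sy) * (s @ g) / sy * s)
    return -Hg

# Illustrative test problem: f(x) = 0.5 x'Ax - b'x with A positive
# definite, and gradient evaluations perturbed by Gaussian noise.
rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
b = rng.standard_normal(n)
sigma = 1e-1  # assumed gradient-error level

def noisy_grad(x):
    return A @ x - b + sigma * rng.standard_normal(n)

x = np.zeros(n)
g = noisy_grad(x)
p = -g  # first iteration: steepest descent
for k in range(100):
    # Exact line search for the quadratic, using the noisy gradient.
    alpha = -(p @ g) / (p @ (A @ p))
    s = alpha * p
    x_next = x + s
    g_next = noisy_grad(x_next)
    p = memoryless_bfgs_direction(g_next, s, g_next - g)
    x, g = x_next, g_next

print("final true gradient norm:", np.linalg.norm(A @ x - b))

With sigma = 0 and exact line searches, the iterates of this update coincide with those of the conjugate gradient method, minimizing over Krylov subspaces of increasing dimension and terminating in at most n steps; this is the exact-arithmetic behavior referred to above.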