2023 (English). In: Proceedings of ALT 2023 / [ed] Shipra Agrawal, Francesco Orabona, ML Research Press, 2023, p. 663-706. Conference paper, Published paper (Refereed).
Abstract [en]
To date, no “information-theoretic” frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization. In this work, we consider the prospect of establishing such rates via several existing information-theoretic frameworks: input-output mutual information bounds, conditional mutual information bounds and variants, PAC-Bayes bounds, and recent conditional variants thereof. We prove that none of these bounds are able to establish minimax rates. We then consider a common tactic employed in studying gradient methods, whereby the final iterate is corrupted by Gaussian noise, producing a noisy “surrogate” algorithm. We prove that minimax rates cannot be established via the analysis of such surrogates. Our results suggest that new ideas are required to analyze gradient descent using information-theoretic techniques.
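The noisy-surrogate tactic mentioned in the abstract can be illustrated with a short sketch. The code below is not the authors' construction; the least-squares objective, step size eta, iteration count T, and noise scale sigma are all illustrative assumptions. It runs plain gradient descent on a convex empirical risk and then perturbs the final iterate with isotropic Gaussian noise, the kind of corruption that renders information-theoretic quantities such as the input-output mutual information I(W; S) finite and amenable to analysis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: a convex linear least-squares problem.
    # (X, y) plays the role of the training sample S; w is the parameter vector.
    n, d = 100, 5
    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    def grad_empirical_risk(w):
        # Gradient of the empirical loss (1/(2n)) * ||X w - y||^2.
        return X.T @ (X @ w - y) / n

    # Plain full-batch gradient descent; eta and T are illustrative choices.
    w = np.zeros(d)
    eta, T = 0.1, 200
    for _ in range(T):
        w -= eta * grad_empirical_risk(w)

    # Noisy "surrogate": corrupt the final iterate with Gaussian noise.
    sigma = 0.01
    w_surrogate = w + sigma * rng.standard_normal(d)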
Place, publisher, year, edition, pages
ML Research Press, 2023
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 201
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-328375 (URN), 001227262400022, 2-s2.0-85161238002 (Scopus ID)
Conference
34th International Conference on Algorithmic Learning Theory, ALT 2023, Singapore, 20 - 23 February 2023
Note
QC 20231204
Available from: 2023-06-08. Created: 2023-06-08. Last updated: 2024-07-16. Bibliographically approved.