On the Convergence of Step Decay Step-Size for Stochastic Optimization
2021 (English). In: Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2021, p. 14226-14238. Conference paper, Published paper (Refereed)
Abstract [en]
The convergence of stochastic gradient descent is highly dependent on the step-size, especially on non-convex problems such as neural network training. Step decay step-size schedules (constant and then cut) are widely used in practice because of their excellent convergence and generalization qualities, but their theoretical properties are not yet well understood. We provide convergence results for step decay in the non-convex regime, ensuring that the gradient norm vanishes at an O(ln T /√T) rate. We also provide near-optimal (and sometimes provably tight) convergence guarantees for general, possibly non-smooth, convex and strongly convex problems. The practical efficiency of the step decay step-size is demonstrated in several large-scale deep neural network training tasks.
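The step decay schedule described in the abstract is simple to state: run SGD with a constant step-size for a phase, then cut the step-size by a fixed factor and repeat. Below is a minimal NumPy sketch of this idea; the function name, the equal-length phases, and the cut factor of 2 are illustrative assumptions and not the paper's exact parameterization.

import numpy as np

def sgd_step_decay(grad_fn, x0, T, eta0=0.1, cut_factor=2.0, num_phases=5, seed=0):
    """SGD with a step decay step-size: constant within each phase,
    cut by `cut_factor` between phases.

    grad_fn(x, rng) should return a stochastic gradient at x.
    Illustrative sketch only; phase lengths and the cut factor are assumptions.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    phase_len = T // num_phases          # equal-length phases (illustrative choice)
    eta = eta0
    for t in range(T):
        if t > 0 and t % phase_len == 0:  # cut the step-size at phase boundaries
            eta /= cut_factor
        x = x - eta * grad_fn(x, rng)
    return x

# Toy usage: noisy gradients of f(x) = 0.5 * ||x||^2
if __name__ == "__main__":
    grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    x_final = sgd_step_decay(grad, x0=np.ones(10), T=10_000)
    print(np.linalg.norm(x_final))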
Place, publisher, year, edition, pages
Neural Information Processing Systems Foundation, 2021, p. 14226-14238
Keywords [en]
Gradient methods, Optimization, Stochastic systems, Convergence results, Convex problems, Generalization, Near-optimal, Neural network training, Non-convex problems, Theoretical properties, Step size, Stochastic gradient descent, Stochastic optimization, Deep neural networks
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-316377
ISI: 000922928401044
Scopus ID: 2-s2.0-85123711559
OAI: oai:DiVA.org:kth-316377
DiVA id: diva2:1687684
Conference
35th Conference on Neural Information Processing Systems, NeurIPS 2021, 6-14 December 2021, Virtual/Online
Note
Part of proceedings: ISBN 978-1-7138-4539-3
Available from: 2022-08-16. Created: 2022-08-16. Last updated: 2023-09-21. Bibliographically approved.